Identification and enumeration of analytes in complex sample matrices are used in medical, biological, industrial, and environmental applications. Example analytes include particles such as viruses, bacteria, parasites, and specific cell types, typically found in a complex matrix of confounding substances. Sample preparation methods such as filtration, lysis, homogenization and dilution are often required to enable specific particle identification and enumeration in these complex matrices. Particle identification and enumeration are often based on expensive, laboratory-based measurement devices or instrumentation.
A useful example is the identification and enumeration of CD4+ T-helper lymphocytes (CD4 cells) for monitoring and managing conditions in persons with HIV/AIDS. HIV-mediated CD4 cell destruction is the central immunologic feature of HIV infection. Thus, the CD4 count is a critical measurement in initial assessment of infection and disease staging, in monitoring antiretroviral therapy, and in managing primary and secondary prophylaxis for opportunistic infections. In fact, quantitative T helper cell counts in the range of 0 to 1000 cells per microliter are a critical indicator for initiating and optimizing antiretroviral treatment and preventing viral drug resistance. Flow cytometry is the current standard-of-care for CD4 cell counting. Unfortunately, flow cytometry is a central lab-based technique; transport, equipment, and operational costs render the technique cost-prohibitive in limited-resource settings where HIV prevalence is highest.
In an embodiment, a particle identification system includes: a cartridge for containing a sample with fluorescently labeled particles; an illumination source for illuminating a region within the cartridge to stimulate emission from fluorescently labeled particles in the region; an imager for generating wavelength-filtered electronic images of the emission within at least one measurement field of the region; and a particle identifier for processing the electronic images to determine a superset of particles of interest and to determine fluorescently labeled particles within the superset based on properties of the fluorescently labeled particles in the at least one measurement field.
In an embodiment, a method determines fluorescently labeled particles within a sample, by: processing at least one electronic image from at least one focal position within the sample; determining dimmest separation lines between brighter areas in the electronic image; and, for each of the brighter areas, determining local background level based on pixel values of the separation lines forming a perimeter therearound, to determine each of the fluorescently labeled particles.
In an embodiment, a system determines fluorescently labeled particles within a sample and includes: means for processing at least one electronic image from at least one focal position within the sample; means for determining dimmest separation lines between brighter areas in the electronic image; and, for each of the brighter areas, means for determining local background level based on pixel values of the separation lines forming a perimeter therearound, to determine each of the fluorescently labeled particles.
In an embodiment, a software product includes instructions, stored on computer-readable media, wherein the instructions, when executed by a computer, perform steps for determining fluorescently labeled particles within a sample, the instructions comprising: instructions for processing at least one electronic image from at least one focal position within the sample; instructions for determining dimmest separation lines between brighter areas in the electronic image; and, for each of the brighter areas, instructions for determining local background level based on pixel values of the separation lines forming a perimeter therearound, to determine each of the fluorescently labeled particles.
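The separation-line method above can be illustrated with a simplified sketch. The following Python fragment is illustrative only and not part of the disclosed embodiments: it substitutes a fixed intensity threshold for a true dimmest-separation-line (watershed-style) computation, labels connected brighter areas, and estimates each area's local background from the below-threshold pixels forming a perimeter around it. The function name and threshold are hypothetical.

```python
import numpy as np
from collections import deque

def find_blobs_and_background(img, thresh):
    # Sketch only: a fixed threshold stands in for the dimmest-separation-line
    # computation described in the embodiment. Connected "brighter areas"
    # (blobs) above thresh are labeled with 4-connectivity; each blob's local
    # background is the median of the below-threshold pixels on its perimeter.
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    results = {}
    for y in range(h):
        for x in range(w):
            if img[y, x] > thresh and labels[y, x] == 0:
                next_label += 1
                blob, perim = [], set()
                q = deque([(y, x)])
                labels[y, x] = next_label
                while q:
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            if img[ny, nx] > thresh:
                                if labels[ny, nx] == 0:
                                    labels[ny, nx] = next_label
                                    q.append((ny, nx))
                            else:
                                perim.add((ny, nx))
                bg = float(np.median([img[p] for p in perim]))
                net = float(sum(img[p] for p in blob) - bg * len(blob))
                results[next_label] = {"pixels": len(blob),
                                       "background": bg,
                                       "net_signal": net}
    return results
```

Subtracting the perimeter-derived background from each blob's summed intensity yields a background-corrected signal per candidate particle, which is the quantity the screening steps discussed later operate on.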
In an embodiment, a cartridge is provided for detecting target analytes in a sample. The cartridge includes an inlet port and fluidic channel with a detection region, and a dried reagent coating, disposed in the cartridge, for rehydrating into the sample upon input through the inlet port for the detection region.
In an embodiment, a method for determining fluorescently labeled particles within a sample in presence of sample movement includes (a) determining spatial shift between sequentially captured first and second images of the sample by using a third image of the sample, wherein the spatial shift is at least partially induced by the sample movement, and (b) spatially correlating events between the first and second images, while accounting for the spatial shift.
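One conventional way to determine a spatial shift between sequentially captured images, sketched here as an illustrative assumption (the embodiment above additionally employs a third image, which this fragment does not model), is FFT-based phase correlation:

```python
import numpy as np

def estimate_shift(img_a, img_b):
    # Estimate the integer-pixel translation (dy, dx) such that img_b equals
    # img_a circularly shifted by that amount, via phase correlation.
    f_a = np.fft.fft2(img_a)
    f_b = np.fft.fft2(img_b)
    cross = np.conj(f_a) * f_b
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size into negative values.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Events found in the two images can then be spatially correlated after translating one event list by the estimated shift.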
In an embodiment, a method for providing a fluidic assay cartridge with dried reagents includes depositing a plurality of mutually incompatible liquid reagents in a respective plurality of mutually separated areas of the fluidic assay cartridge, and drying the plurality of mutually incompatible liquid reagents to form the dried reagents.
In an embodiment, a fluidic assay cartridge with dried reagents includes a plurality of mutually separated dried reagents, located within the fluidic assay cartridge, wherein the plurality of mutually separated dried reagents have a respective plurality of mutually different compositions.
The present disclosure may be understood by reference to the following detailed description taken in conjunction with the drawings briefly described below. It is noted that, for purposes of illustrative clarity, certain elements in the drawings may not be drawn to scale. Specific instances of an item may be referred to by use of a numeral in parentheses (e.g., 16(1)) while numerals without parentheses refer to any such item (e.g., 16).
The present disclosure is divided into the following main sections for clarity: System Level Overviews; Particle Counting Methods and Software; Fluidic Features and Methods; Cartridge Features and Methods; and Combinations of Features.
The methods described here may be collectively referred to as “static cytometry” using an inventive implementation of an optical system and sample chamber. The term “cytometry” technically refers to the counting or enumeration of cells, particularly blood cells. The term “cytometry” is used generically in this disclosure to refer to the enumeration of any of a number of analytes, particularly particle analytes, described in more detail below. The term “static” implies that the disclosed system and methods do not require that target analytes (for example, cells or particles) move or flow at the time of identification and enumeration. This is in contrast to “flow cytometry,” a technical method in which target analytes (e.g., cells or particles) are identified and/or enumerated as they move past a detector or sets of detectors. Examples of static cytometry include hemocytometers such as the Petroff-Hauser counting chamber, which is used with a conventional light microscope to enumerate cells in a sample. Cell staining apparatus and fluorescence microscopy instrumentation can be used to perform fluorescence-based static cytometry. The present disclosure provides methods, devices, and instruments for performing static cytometry analysis on a sample.
The methods and systems described herein generally relate to assays that use fluorescence signals to identify and/or enumerate analyte(s) present in a sample. In exemplary applications, target analytes are specifically labeled with fluorophore-conjugated molecules such as an antibody or antibodies (immunostaining). Other molecular recognition elements may be used, including but not limited to aptamers, affibodies, nucleic acids, molecular recognition elements, or biomimetic constructs. Non-specific fluorophores may also be used, including but not limited to stains such as propidium iodide, membrane specific fluorophores, and fluorescent nuclear stains. Generally speaking, electronic images are formed of the fluorescence signals, wherein labeled analytes generate local maxima in the electronic images. Image processing is later utilized to identify the maxima and determine their correspondence to analytes or particles of interest.
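As an illustrative sketch of the final step, candidate local maxima can be located by comparing each pixel against its eight neighbors. This fragment assumes a simple strict-maximum criterion; real implementations must also handle intensity plateaus and noise, and the names used are hypothetical.

```python
import numpy as np

def local_maxima(img, min_value):
    # Return (row, col) coordinates of pixels that exceed min_value and are
    # strictly greater than all eight neighbors -- candidate fluorescence
    # events. Strict comparison means flat-topped peaks are not reported.
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="constant", constant_values=-np.inf)
    center = pad[1:-1, 1:-1]
    is_max = center > min_value
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_max &= center > pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return np.argwhere(is_max)
```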
In exemplary embodiments, excitable tags are used as detection reagents in assay protocols. Exemplary tags include, but are not limited to, fluorescent organic dyes such as fluorescein, rhodamine, and commercial derivatives such as Alexa dyes (Life Technologies) and DyLight products; fluorescent proteins such as R-phycoerythrin and commercial analogs such as SureLight P3; luminescent lanthanide chelates; luminescent semiconductor nanoparticles (e.g., quantum dots); phosphorescent materials, and microparticles (e.g., latex beads) that incorporate these excitable tags. For the purpose of this disclosure, the term “fluorophore” is used generically to describe all of the excitable tags listed here. The terms “fluorophore-labeled,” “fluor-labeled,” “dye-labeled,” “dye-conjugated,” “tagged,” and “fluorescently tagged” may be used interchangeably in this disclosure.
The terms “color” and “color images” in this disclosure are intended as follows. “Color” may refer to a specific wavelength or wavelength band. However, “color images” are intended as meaning grayscale images formed while a sample is illuminated under a specific color. Thus, “two color images” is to be interpreted as two grayscale images formed under illumination by different colors at separate times. Similarly, a “color channel” refers to operation of a system herein during illumination with a specific color. For example, “electronic images recorded in different color channels” is to be interpreted as electronic images formed under illumination by different colors.
The embodiments described herein may be applicable to assays beyond fluorescence-based signal transduction. For example, the methods and systems may also be compatible with luminescence, phosphorescence, and light scattering based signal transduction.
In one embodiment, two-color fluorescence microscopy based on laser illumination and differential immunostaining is used to identify and enumerate analytes in a sample. The present disclosure provides a method and system for performing this analysis. In one example, differential immunostaining with anti-CD4 and anti-CD14 antibodies may be used to identify CD4 T helper lymphocytes in blood. In another example, differential immunostaining with anti-CD4 and anti-CD3 antibodies is used to identify CD4 T helper lymphocytes in blood. In another example, differential immunostaining with anti-CD4, anti-CD3, and anti-CD45 antibodies (a three-color system) is used to identify CD4 T helper cell percentage (% CD4) in a blood sample. In still another example, differential immunostaining with anti-CD4, anti-CD3, and anti-CD8 antibodies is used to identify and enumerate both CD4 and CD8 T lymphocytes such that the CD4/CD8 T lymphocyte ratio is obtained in addition to the CD4 T helper lymphocyte count.
The terms “T cells” and “T lymphocytes” may be used interchangeably in this disclosure. The terms “T helper cells,” “CD4 T helper cells” and “CD4 T cells” may be used interchangeably in this disclosure to refer to those T helper cells that express CD4 on their surface.
For purposes of this disclosure, a cell that binds to a labeling molecule with substantial affinity may be termed “positive” for that particular labeling molecule. Conversely, a cell that does not bind to a labeling molecule with substantial affinity may be termed “negative” for that particular labeling molecule. For instance, a cell that binds an anti-CD4 antibody with a fluorescence tag and shows up as a detectable fluorescence event when illuminated may be termed “CD4 positive.” Conversely, a cell that does not show up as a detectable fluorescence event after incubation with an anti-CD4 antibody with a fluorescence tag under the same or similar conditions may be termed “CD4 negative.”
Plural or singular forms of a noun may be used interchangeably unless otherwise specified in the disclosure.
System 100 works by sequentially illuminating a stained sample within cartridge 130 to cause fluorescence of particles within the sample, capturing images of the fluorescence, and analyzing the images to identify particles in the sample and to determine the presence of biological markers therein. The illumination is by electromagnetic radiation which is typically visible light, but radiation of other types (e.g., infrared, ultraviolet) may also be utilized by adapting the modalities described herein. The systems and methods described herein provide a user interface and robust clinical capabilities by identifying and counting analytes in even unfiltered whole blood samples, although they can also work with lysed, diluted, and/or filtered samples. Details of system 100, cartridge 130, and the associated methods and software are now provided. It should be clear that cartridge 130 is compatible with system 100, but it is appreciated that cartridge 130 may also be usable in other readers. Likewise, other cartridges could be usable in system 100.
As shown in
Illumination beams 240 and 310 may advantageously be arranged such that their incidence on, and reflections from, cartridge 130 are at angles that fall outside a numerical aperture of imaging optics 140. This helps improve signal to noise ratio of images captured by sensor 160. Although
Phase plate 245, through which beam 240 passes, has a characteristic feature size and a rotation rate to decohere laser light such that laser speckle effects or other interference-induced illumination nonuniformity in beam 240 are averaged out over the duration of a measurement. As shown, system 100 includes phase plate 245 only in the path of beam 240, but it is contemplated beam 310 could pass through an identical or similar phase plate if similar effects are expected in beam 310.
Emission filter 150 may be a dual-bandpass filter with one bandpass set to transmit at least a portion of the fluorescence emission produced when illuminating with illumination beam 240, and the other bandpass set to transmit at least a portion of the fluorescence emission produced when illuminating with illumination beam 310, while blocking light at the wavelengths of illumination beams 240 and 310. If more illuminations are added, the number of bandpasses in emission filter 150 may be increased (e.g., three lasers and a triple-bandpass filter). In one embodiment, multiple single-bandpass emission filters are placed in a filter-changing mechanism that is motorized and controlled by a control system. In yet another embodiment, fluorophores are chosen to share an emission wavelength range and have significantly different excitation spectra, such that they are selectively excited by individual illumination beams but detected using a single emission filter having a single bandpass. In a further embodiment, fluorophores are chosen to share an excitation wavelength range and have significantly different emission spectra, such that all fluorophores are excited by the same illumination while a filter-changing mechanism with multiple single-bandpass filters is used to selectively detect emission from different fluorophores.
In alternative embodiments, imaging optics 140 do not include aperture stop 145, but may instead create an image of rays 360 that exceeds a size of sensor 160 at a focal plane of the optics, in which case the size of sensor 160 laterally defines width 370 and measurement field 135. In another embodiment, imaging optics 140 may include a field stop (not shown in
Several aspects of cartridge 130 are advantageously arranged to improve sensitivity of system 100 to particles bearing biological markers. In one embodiment, cartridge 130 is fabricated of an optical-grade, clear material to enable distortion-free and loss-free imaging of the sample therethrough. The material may be a low-autofluorescence plastic such as cyclic olefin polymer, cyclic olefin copolymer, polystyrene, polymethylmethacrylate, polycarbonate, etc., to avoid generating stray background light from which fluorescence of sample particles would have to be distinguished. A precisely known height of a fluidic channel within cartridge 130, including each of the measurement fields (MFs) to be measured, may be a critical dimension. If a field of view of optics 140 determines a two-dimensional area of a measurement field of the sample being measured, the channel height times that area determines the volume, such that the precision to which the height is known limits the accuracy of particle concentration measurements by volume. Filling of the channel from floor to ceiling (e.g., in the dimension parallel to the optical axis of the imaging system) can be achieved through an appropriate combination of channel height and surface energy. The surface energy can be increased by, e.g., plasma cleaning and/or chemical surface modification. Cartridge 130 may be configured with an advantageously small channel height to aid filling. A small channel height further reduces the absorption of excitation illumination and fluorescence emission by sample components such as red blood cells.
A wide viewing angle in optics 140 limits a depth of field of the imaging system formed by imaging optics 140 and sensor 160. It is advantageous to count all particles within a MF in a single image, rather than acquiring several images at varying focus depths within a sample and sorting unique from non-unique particles within the images. For this reason, it may be desirable to match the depth of field of the imaging system to the channel height. That is, the imager having a depth of field that is commensurate with channel height may be regarded as the channel height being within ±20% of the depth of field of the imager along a viewing axis of the imager. Alternatively, if depth occupied by fluorescently labeled particles within the cartridge is known, it may be desirable to match the depth of field of an imager to such depth. That is, the imager having a depth of field that is commensurate with depth occupied by fluorescently labeled particles within the cartridge may be regarded as that depth being within ±20% of the depth of field of the imager along a viewing axis of the imager. It may also be desirable to make portions of cartridge 130 adjacent to the MF much thicker than the depth of field (so that stray material such as dust and fingerprints outside the sample chamber is substantially out of focus, minimizing the chances that such material will distort images or be mistaken for a target analyte). This is illustrated further in
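The ±20% criterion stated above can be expressed directly. This helper is illustrative only; the name and default tolerance merely restate the definition given in the text:

```python
def is_commensurate(channel_height, depth_of_field, tol=0.20):
    # Channel height is "commensurate" with the imager's depth of field
    # when it lies within +/-20% of that depth of field along the
    # viewing axis (same units for both arguments).
    return abs(channel_height - depth_of_field) <= tol * depth_of_field
```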
When a measurement of analyte concentration within a volume is based on a number of analytes detected within a two-dimensional projection of the volume, the accuracy of the measurement is limited by the accuracy to which the third dimension, eliminated in the projection, is known. This situation is encountered, for example, when the number of analytes in a volume is determined by two-dimensional imaging of the volume, like the situation presented in
The average analyte concentration n in a volume V is given by n = N/V, where N is the number of analytes within the volume. The volume V is the local volume height, h_local, integrated over the projected area, A, included in the measurement. This integration reduces to the projected area multiplied by the area-weighted average of the local height, h_average. With these definitions, the volume can be written as V = A × h_average. Consequently, the analyte concentration is given by n = N/(A × h_average). This equation underlines the importance of an accurate determination of the volume height. Actual knowledge of the local volume height, h_local, is not required; it is sufficient to determine h_average, i.e., the area-weighted average of the local height.
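A direct numerical reading of n = N/(A × h_average), with h_average computed as the area-weighted average of local heights over several measured regions (function name hypothetical):

```python
def analyte_concentration(count, areas, heights):
    # n = N / (A * h_average): count N analytes over regions with the
    # given projected areas and local channel heights; h_average is the
    # area-weighted average of the local heights.
    total_area = sum(areas)
    h_average = sum(a * h for a, h in zip(areas, heights)) / total_area
    return count / (total_area * h_average)
```

For example, 16 analytes over two equal areas of 2 with local heights 1 and 3 give h_average = 2, V = 8, and n = 2.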
Channel height 385 (
In one embodiment, measurement of analytes may be performed together with a measurement of haverage for each measurement field. In another embodiment, channel height 385 may be mapped out and recorded in advance of the analyte measurement and applied in the calculation of the deduced analyte concentration. For instance, a channel height measurement may be performed during production of cartridge 130. In an embodiment, a characterization of channel height 385, in the form of, e.g., a single haverage or a map consisting of a series of haverage values, may be encoded on cartridge 130 and read either by an operator or by an instrument. For instance, a barcode or other machine-readable information that contains channel height information may be labeled on a cartridge 130, and the barcode may be read by a barcode reader at the time of analyte measurement. The barcode reader may be integrated in the instrument performing the analyte measurement (e.g., system 100), it may be connected to the instrument, or it may be separate from the instrument.
A channel height characterization for individual cartridge 130 may be integrated in the cartridge production process. The characterization may be performed on all cartridges or it may be performed on a subset of devices; for instance, a suitable number of cartridges 130 may be extracted from each production run or each lot of cartridges provided to a customer. Techniques for characterizing channel height include but are not limited to white light interferometry in transmission or reflection mode. Ideally, for preservation of materials, the measurement is non-destructive. That is, the cartridges exposed to the measurement are still usable for analyte concentration measurements. Optical interrogation methods are ideal for this purpose as long as the relevant surfaces of the cartridges can be accessed optically. In the case of analyte detection systems based on imaging or other optical detection schemes, an optical path through the cartridge that is used by the detection system can be used for characterizing the channel height. Other access paths, if available, may also be used.
Cartridge handling system 120′ accepts cartridge 130′ from an operator who loads cartridge 130′ into a slot (not shown) in a front panel of enclosure 110′. Thereafter, cartridge handling system 120′ moves cartridge 130′ into place for imaging by sensor 160′ through imaging optics 140′, including repositioning cartridge 130′ for imaging of specific measurement fields therein. As opposed to the arrangement of particle identification system 100,
Illumination subassembly 205 utilizes a dichroic beam-combiner to combine two illumination beams (e.g., of different wavelength bands, for stimulating different fluorescent labels) prior to the beams being directed along beam path 215 toward cartridge 130′. This allows for complete assembly and alignment of illumination subassembly 205 before installation into system 100′, as well as minimizing the number of optical paths into the cartridge area.
Controller 450 includes a processor 460 that is typically a microprocessor or microcontroller, but could be implemented in other known ways (e.g., with discrete logic, ASIC or FPGA semiconductors, or other electronic hardware with equivalent functionality). Controller 450 also includes memory 470 for storing software, filter kernels, images, calculations and results thereof.
Step 510 obtains a whole blood sample from a patient. In embodiments, a cartridge (e.g., cartridge 130 or 130′,
Step 550 loads the cartridge into a reader (e.g., systems 100, 100′,
Step 580 analyzes at least the first, and optionally the second image(s) to count fluorescent particles in each image. Step 580 may include execution of the software instructions illustrated in
Although not shown in
After step 575 or 580, method 500 optionally reverts to step 560 so that the cartridge moves to another measurement field to be counted, and steps 570 through 575 (and optionally step 580) are repeated. In an alternative embodiment, steps 560 and 570 may be performed for all fields of view prior to steps 560 and 575 being performed for all fields of view. If multiple fields of view are measured, when all such fields of view have been measured, an optional step 590 generates statistics from the particle counts generated in step 580.
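As an illustrative sketch of statistics that step 590 might generate, assuming equal per-field volumes and simple Poisson counting statistics (neither of which is specified above; all names hypothetical):

```python
import math

def field_statistics(counts, field_volume):
    # Combine per-measurement-field particle counts into an overall
    # concentration with a simple Poisson (sqrt-N) counting uncertainty.
    # Assumes every field images the same sample volume.
    total = sum(counts)
    volume = field_volume * len(counts)
    concentration = total / volume
    rel_uncertainty = math.sqrt(total) / total if total else float("inf")
    return {"total": total,
            "concentration": concentration,
            "relative_uncertainty": rel_uncertainty}
```

Counting more measurement fields shrinks the relative uncertainty as 1/sqrt(N), which is one motivation for repeating steps 560 through 580 over multiple fields.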
One exemplary feature of main routine 800 is that care is taken to establish precise focus of imaging optics 140 on measurement fields of cartridges 130, 130′ for particle measurement by generating focus metrics related to the actual particles of a given sample, rather than by focusing on artifacts in the sample or on the cartridge. Therefore certain image processing steps will be initially discussed in relation to their support of autofocus routines, but as seen later the same steps will also be utilized for image processing for the particle counting. It should also be noted that various routines called by main routine 800 first identify “blobs” within images of the sample, then apply screens to the blobs to distinguish those blobs that likely represent particles of interest from those that do not. In this context, “blobs” are areas of local brightness within an image. The screens disclosed herein are described in order to enable one of ordinary skill in the related art to make and/or use particle identification systems, but not every screen mentioned is critical; certain of the screens may be performed in an order different from that specified here, or omitted, while other screens may be added. Generally speaking, the routines disclosed herein identify blobs or other events within at least one image of a measurement field that can be considered a superset of particles or events of interest, and determine fluorescently labeled particles or other events within the superset based on properties of the particles or events in the measurement field.
Step 805 of main routine 800 receives a cartridge into a system. An example of step 805 is systems 100, 100′ receiving cartridges 130, 130′,
Step 840 of main routine 800 enables an illumination module, acquires an electronic image S, and disables the illumination module. A first example of step 840 is turning on illumination module 200 of systems 100, 100′, acquiring an image S of a measurement field within cartridges 130, 130′ from sensor 160 while illumination module 200 is on, then turning illumination module 200 off. Step 845 processes image S to identify and perform preliminary filtering on “blobs” identified within image S. As used herein, “blobs” are local areas of high-intensity pixels within an electronic image. Such areas may or may not correspond to particles to be counted; many of the steps described in connection with
Step 850 makes a decision according to the number of illumination modules to be utilized for counting particles. If another image S and its associated processing are required, main routine 800 returns to step 840 to acquire another image S (and optionally process the image S in step 845). Accordingly, another example of step 840 is turning on illumination module 300, acquiring an image S while illumination module 300 is on, and turning illumination module 300 off. If images S associated with all appropriate illumination modules have been acquired, main routine 800 advances from step 850 to step 860.
Step 860 correlates images S that have been acquired using different illumination sources. An example of step 860 is performing correlatesourceimages 2100,
Step 880 makes a decision according to whether further measurement fields are to be measured. If so, main routine 800 returns to step 820. If not, main routine 800 proceeds to step 890.
Step 890 filters low-intensity correlations out of the data taken in previous steps, if necessary, based on the data itself. In embodiments herein, it may be advantageous to combine data from multiple measurement fields before step 890 is performed, so that the data is statistically well behaved. However, in embodiments wherein the number of events found per measurement field is high, step 890 could be performed on data from a single measurement field, or on separate data sets from separate measurement fields before merging the data. This could be advantageous for cases where particle brightness changes significantly from one measurement field to the next, for example due to illumination intensity drift or fluorescence staining variation from one part of a sample to another.
An example of step 890 is performing filterlowintensitycorrelations 2300,
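An illustrative sketch of a data-derived low-intensity filter follows. The cutoff rule (a fixed fraction of the median event brightness) is an assumption for illustration, not the method of filterlowintensitycorrelations 2300:

```python
import numpy as np

def filter_low_intensity(intensities, fraction=0.25):
    # Drop events dimmer than a cutoff derived from the data itself:
    # here, a fixed fraction of the median event brightness across the
    # combined measurement fields (fraction chosen for illustration).
    intensities = np.asarray(intensities, dtype=float)
    cutoff = fraction * np.median(intensities)
    return intensities[intensities >= cutoff]
```

Because the cutoff is computed from the pooled data, combining fields first (as suggested above) makes the median, and hence the filter, more statistically stable.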
Step 920 of autofocus 900 runs a calculateautofocusposition routine. An example of step 920 is running calculateautofocusposition 1000, described below in connection with
Steps 930 through 960 of autofocus 900 are optional. If performed, steps 930 through 960 provide measurements and calculate a function that provides optimal focus positions for multiple measurement fields on a cartridge. Step 930 moves the cartridge to a last measurement field. An example of step 930 is controller 450, 450′ of systems 100, 100′ (
It should be understood that more measurement fields may be measured by adapting step 930 to move a cartridge to such measurement fields rather than a last measurement field, and that step 940 may be repeated. Doing so can provide information that allows optional step 960 to calculate functions for optimal focus that may be more accurate for intermediate fields than the linear ramp function discussed above.
Calculateautofocusposition 1000 requires no data input but begins when a measurement field of a cartridge is in position for imaging within a reader. Step 1010 of calculateautofocusposition 1000 moves a focus adjustment to a first end of a range of focus adjustments. An example of step 1010 is controller 450 controlling focus mechanism 147 to move imaging optics 140 of systems 100, 100′ to one end of its focus range. Step 1020 enables an illumination module to illuminate the measurement field. Step 1030 records an image S of the measurement field. Examples of steps 1020 and 1030 are controller 450 turning on illumination module 200 or 300 and recording an image S generated by sensor 160,
Step 1070 of calculateautofocusposition 1000 disables the illumination module that was enabled in step 1020. At this point, calculateautofocusposition 1000 has at least gathered an image S at a plurality of focus positions; if steps 1040 corresponding to each image S have not been performed, they are now performed before proceeding to step 1080. Step 1080 fits a Gaussian distribution to the autofocus metrics returned from each instance of step 1040, with respect to the focus adjustment value associated with each such instance. Step 1085 calculates the optimal focus setting (for the illumination module enabled in step 1020) as the center focus setting with respect to the Gaussian distribution. In an alternative embodiment, steps 1080 and 1085 are replaced by a step in which the optimal focus position is set to be the recorded position with the optimal calculated autofocus metric.
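The Gaussian fit of step 1080 and the center calculation of step 1085 can be sketched as follows. For a background-free Gaussian, fitting a parabola to the logarithm of the metric recovers the center exactly; this shortcut is an illustrative assumption, not the disclosed fitting method, and degrades when a background offset is present.

```python
import numpy as np

def gaussian_focus_center(positions, metrics):
    # Fit a Gaussian to (focus position, autofocus metric) pairs by
    # fitting a parabola to log(metric); the parabola's vertex is the
    # Gaussian center, taken as the optimal focus setting. Assumes all
    # metrics are positive and background-free.
    x = np.asarray(positions, dtype=float)
    y = np.log(np.asarray(metrics, dtype=float))
    a, b, _ = np.polyfit(x, y, 2)     # y ~ a*x^2 + b*x + c, with a < 0
    return -b / (2.0 * a)             # vertex of the fitted parabola
```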
An optional step 1090 of calculateautofocusposition 1000 calculates optimal focus for an alternate illumination module, or for particle counting, by adding an offset to the optimal focus setting calculated in step 1085. The offset added in step 1090 may for example correct for chromatic aberration expected in optics (e.g., imaging optics 140) due to a wavelength change between two illumination modules. Also, as a practical matter, the offset added in step 1090 may correct for other effects. Such effects may include, for example, mechanical hysteresis or backlash in a focusing mechanism depending on the direction of movement of such mechanism. The offset may also be empirically derived between the optimal focus setting calculated in step 1085, and a focus setting that works ideally for particle counting purposes. For example, data may be obtained during calibration of systems 100, 100′ that can be utilized to empirically derive such an offset. Step 1095 of calculateautofocusposition 1000 thus returns at least the optimal focus setting calculated in step 1085, and may also return other optimal focus settings as calculated in step 1090.
Step 1105 of calculateautofocusmetric 1100 receives an image S. Pixels of image S have values according to the light intensity received by a sensor at the corresponding location within the image. An example of step 1105 is receiving image S from calculateautofocusposition 1000 (e.g., when step 1040 of calculateautofocusposition 1000 initiates calculateautofocusmetric 1100, as discussed above, it passes image S to calculateautofocusmetric 1100). Step 1110 creates a processed pseudoimage F from S by utilizing S as input for a flatfieldmask subroutine. An example of step 1110 is creating pseudoimage F from S by performing flatfieldmask 1200, described below.
At this point, it is noted that when this document discusses images and pixels thereof, the standard convention will be followed in which an upper case variable will be utilized for the image as a whole (e.g., S), and lower case variables will be utilized for pixels thereof (e.g., sx,y or s(x,y)). Also, certain techniques and parameters that are described in terms of pixels herein are appreciated as sensitive to distance in object space that a single pixel spans in image space. In this document, the term “image scale” is sometimes used as a reference to a distance in object space that maps to the size of one pixel in a detected image thereof. For example, if a system has an image scale of 2 μm/pixel, an object with a physical length of 10 μm will span 5 pixels in an image thereof.
Step 1120 calculates metric M by summing the square of each pixel of S that is associated with a pixel of F whose value is 1. That is, when f(x,y)=1, the corresponding s(x,y) is squared and added to the summation. This has the effect of increasing M when more pixels of S are identified as belonging to particles to be counted (as determined by flatfieldmask 1200, as discussed below). It also increases M when the pixels that are counted are bright (the corresponding values of s(x,y) are large) thereby favoring particles in focus. Step 1125 returns M for use by calculateautofocusposition 1000.
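The summation of step 1120 may be sketched as follows; the function name and the list-of-lists image representation are assumptions for illustration:

```python
def autofocus_metric(S, F):
    """Sum of squared pixel values of S at locations where mask F is 1
    (step 1120). The metric grows when more pixels are identified as
    particle pixels and when those pixels are bright, favoring images in
    focus. S and F are equal-sized 2-D arrays (lists of rows)."""
    return sum(s * s
               for row_s, row_f in zip(S, F)
               for s, f in zip(row_s, row_f) if f == 1)
```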
Step 1205 of flatfieldmask 1200 receives input image S and kernel K. Step 1210 creates a pseudoimage F by convolving S with K. Filter kernel K is now discussed before completing the explanation of flatfieldmask 1200.
Exemplary kernel K shown in
Reverting to
Step 1305 of processsourceimage 1300 receives image S as input. Step 1310 calls a subroutine prefilter(S) that removes line noise, large scale features and electronic noise offsets in image S. An example of step 1310 is calling subroutine prefilter 1400, described below. Step 1320 calls a subroutine removebackground(S) that generates a pseudoimage SBG that subtracts local background both from background regions and neighboring regions. An example of step 1320 is calling subroutine removebackground 1500, described below. Step 1330 calls a subroutine declump(S, SBG) that identifies connected regions within pseudoimage SBG that may contain multiple particles to be counted, and splits the connected regions for further processing. An example of step 1330 is calling subroutine declump 1700, described below. Step 1340 generates a binary mask M by calling the flatfieldmask(S) subroutine described previously. An example of step 1340 is calling flatfieldmask 1200. Step 1350 calls a subroutine morphologicalintersection(SBG,M) that modifies pseudoimage SBG by filtering connected regions of SBG where binary mask M is 0 within an entire region. An example of step 1350 is calling subroutine morphologicalintersection 1800, described below. Step 1360 generates a blob list B by using a subroutine findblobs(SBG) that identifies connected regions of bright pixels within pseudoimage SBG. Blob list B labels the pixels to identify which regions they belong to. An example of step 1360 is calling subroutine findblobs 1900, described below. Step 1370 calls a subroutine filterblobs(B) that modifies blob list B by calculating various statistical moments on the blobs therein, and removing blobs that do not fit criteria for a desired particle count. An example of step 1370 is calling subroutine filterblobs 2000, described below. Step 1375 returns blob list B for further processing.
Step 1405 of prefilter 1400 receives image S as input. Step 1410 generates a temporary pseudoimage F as a fast Fourier transform of image S, the fast Fourier transform (and its inverse) being known in the art. Step 1420 performs a high pass filtering operation on pseudoimage F by removing, in the frequency domain, low frequency content that corresponds to features larger than 100 pixels in the spatial domain. This removes image content that is too large to be considered as a particle for counting; it is expedient to do this operation in the frequency domain because of the difficulty in assessing large objects in the spatial domain against a background of small objects. Also, it is understood that the present method desires to count particles on the order of 10 μm in size with an image scale of 2 μm/pixel; the low frequency content removed in step 1420 would be adjusted accordingly to screen out unreasonably large image content if smaller or larger particles were to be counted. Step 1430 sets frequency components along ky=0 to zero except at kx=0; that is, any DC component that exists at ky=kx=0 is maintained. Step 1430 therefore advantageously suppresses line-patterned dark noise that is often introduced by CMOS image sensors. Step 1440 creates a new version of image S by performing an inverse fast Fourier transform on F as modified by steps 1420 and 1430. Step 1450 determines the minimum value within S and subtracts this value from each pixel in S. Step 1455 of prefilter 1400 returns the modified image S.
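One possible realization of prefilter 1400 is sketched below, assuming the NumPy library and a float 2-D array. Mapping the 100-pixel feature size to a radial frequency cutoff of 1/100 cycles per pixel, and the specific axis convention for ky, are assumptions for illustration:

```python
import numpy as np

def prefilter(S, max_feature_px=100):
    """Sketch of prefilter 1400: FFT (step 1410), high-pass removal of
    content larger than max_feature_px (step 1420), suppression of line
    noise along ky = 0 except the DC term (step 1430), inverse FFT
    (step 1440), and minimum subtraction (step 1450)."""
    F = np.fft.fft2(S)
    ny, nx = S.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles per pixel, axis 0
    kx = np.fft.fftfreq(nx)[None, :]   # cycles per pixel, axis 1
    # Step 1420: zero low frequencies corresponding to features larger
    # than max_feature_px, keeping the DC term (maintained per step 1430).
    lowpass = np.sqrt(kx ** 2 + ky ** 2) < 1.0 / max_feature_px
    lowpass[0, 0] = False
    F[lowpass] = 0
    # Step 1430: zero components along ky = 0 except at kx = 0.
    F[0, 1:] = 0
    S2 = np.fft.ifft2(F).real
    # Step 1450: subtract the minimum so all pixel values are non-negative.
    return S2 - S2.min()
```

A signal that varies only along one image axis (line-patterned noise, in this convention) is entirely removed, while small bright features survive the high-pass step.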
The purpose of removebackground 1500 is to calculate the local background in the area of each particle to be counted. The principle of the routine is to determine the dimmest separation lines between areas of local brightness, and then define the local background for each area of local brightness as the maximum pixel value on the separation lines forming a perimeter therearound. The dimmest separation lines between areas of local brightness are equivalent to inverted watershed lines. A global image processing method is utilized to determine a maximum value of the separation line's perimeter around each area of local brightness. This method flood fills a pseudoimage of the areas of local brightness up to the maximum value for the separation line perimeter around areas of local brightness. Alternatively, each perimeter contour may be traced out individually. Because local maxima may be introduced by noise, removebackground 1500 blurs a temporary copy of the image to suppress such maxima for watershed line identification purposes. Also, in determining the local background, it is desirable to treat clumped cells as a single object (thus removebackground 1500 is performed before declump 1700, described below).
Step 1505 of removebackground 1500 receives image S as input. Step 1510 creates a pseudoimage BL by applying a six-pixel radius Gaussian blur to image S. The radius of the Gaussian blur is chosen as 6 pixels because the present method desires to count particles on the order of 10 μm in size in images with a scale of 2 microns per pixel; it is understood that the Gaussian blur radius should be modified when particles that are significantly smaller or larger are to be counted or if a different image scale applies.
Also, it should be understood that in this case and in other cases herein, a Gaussian blur of radius r pixels is applied by convolving an image with a filter kernel containing values representative of a 2-dimensional Gaussian distribution of radius r pixels. For computational ease, the kernel may be truncated to consist only of the pixels that have significant values, e.g., a kernel of [r pixels]×[r pixels] may be used in the case of a Gaussian width of r pixels. Furthermore, the kernel may be normalized such that it integrates to 1, such that the effect of applying the blur is only to smooth the image, rather than scale it by increasing or decreasing the net intensity of its pixels.
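A truncated, normalized kernel as described above may be constructed as follows; treating r as the Gaussian width (sigma) and defaulting the truncation extent to r pixels on each side are illustrative assumptions:

```python
import math

def gaussian_kernel(r, size=None):
    """Build a truncated, normalized 2-D Gaussian kernel of width r pixels.
    The kernel spans (2*size+1) x (2*size+1) pixels (size defaults to r).
    Normalizing to a sum of 1 means convolution smooths the image without
    scaling its net intensity, as noted above."""
    if size is None:
        size = r
    k = [[math.exp(-(x * x + y * y) / (2.0 * r * r))
          for x in range(-size, size + 1)]
         for y in range(-size, size + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]
```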
In step 1510, the purpose of the Gaussian blur is to avoid erroneous watershed lines through the interior of a particle of interest due to short-scale pixel intensity variation within the perimeter of the particle. Such intensity variation can arise from, e.g., camera noise, light scattering artifacts, biological properties of the particle of interest, and the presence of other sample components within the same region of the image. The radius of the Gaussian blur is set to approximately match the size of the particle of interest, and will thus change with the physical size of the particle of interest, and with image scale. This covers characteristic scales for short-scale interior intensity variation. If only certain known characteristic scales are present, the radius of the blur applied in step 1510 can be adjusted accordingly to be a closer match to the greater of the scales present. In cases where interior intensity variation of particles of interest is already smooth, step 1510 can be eliminated altogether. Empirical optimization may be utilized to set the radius of the blur applied in step 1510.
Step 1520 creates a binary image W by calling a subroutine calculatewatershedlines(BL), described below as calculatewatershedlines 1600. Step 1530 creates a still further pseudoimage M that depends on the values of S and the value of binary image W for a corresponding pixel therein. In step 1530, for each pixel coordinate (x, y), mx,y is set to the corresponding value sx,y when wx,y=1, otherwise mx,y is set to 0 (that is, mx,y=0 when wx,y=0).
Step 1540 of removebackground 1500 creates a background image BG by calling a subroutine morphologicalreconstruction(S,M), described below as morphologicalreconstruction 1650. Step 1550 creates an output image SBG by taking the maximum value of (S-BG) and 0 for each pixel location in S and BG, that is, all negative values are converted to 0. Step 1555 returns SBG.
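The per-pixel clamp of step 1550 is simple enough to state directly; the function name and list-of-rows representation are illustrative:

```python
def subtract_background(S, BG):
    """Step 1550: SBG = max(S - BG, 0) per pixel, so local background is
    removed and any negative result is clamped to 0."""
    return [[max(s - bg, 0) for s, bg in zip(row_s, row_bg)]
            for row_s, row_bg in zip(S, BG)]
```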
Step 1605 of calculatewatershedlines 1600 receives an image BL as input (e.g., image BL as generated at step 1510 of removebackground 1500, described above, or image BL as generated at step 1710 of declump 1700, described further below). Step 1610 generates a pseudoimage W from image BL by calculating a watershed as described in “Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations” [Vincent (1991)]. In Vincent, an input image is segmented into watersheds, or catchment basins, surrounding local maxima and labeled by the catchment basin that each pixel belongs to. A “watershed” label is sometimes inserted into the image to separate the catchment basins (by the description above, it may be seen that the separation lines are more analogous to separations between watersheds, than watersheds themselves). Step 1610 always separates adjacent basins by applying the watershed label (that is, Vincent's routine is modified to always apply the label, rather than applying it only sometimes). Step 1620 modifies pseudoimage W to create a binary image by converting all of the watershed labels to pixel values of 1, and all other pixels to 0. Step 1625 returns binary image W.
Step 1655 of morphologicalreconstruction 1650 receives images S and M as input (e.g., image S as generated by sensor 160 or as prefiltered by prefilter 1400, and M as generated at step 1530 of removebackground 1500, described above). Step 1660 calculates a grayscale morphological reconstruction as described in “Morphological Grayscale Reconstruction in Image Analysis: Applications and Efficient Algorithms” [Vincent (1993)]. Step 1665 returns modified image S.
Step 1705 of declump 1700 receives images S and SBG as input (e.g., image S as generated by sensor 160 or as prefiltered by prefilter 1400, and SBG as generated at step 1320 of processsourceimage 1300, described above). Step 1710 generates a pseudoimage BL by applying a Gaussian blur to image S. The radius of the blur applied in step 1710 is typically two to three pixels, and is set to suppress noise within a single particle in order to avoid splitting the particle into multiple particles, and without introducing any possibility of blurring out a watershed line between two particles. That is, this blurring step suppresses short-scale intensity variation within the perimeter of a particle of interest. Causes for such intensity variation have been discussed above in connection with
Step 1720 generates a binary image W by passing BL to subroutine calculatewatershedlines 1600, discussed above. Step 1730 modifies SBG by leaving each pixel sBG(x,y) undisturbed except for pixels where W indicates a watershed line, in which case the corresponding pixel sBG(x,y) is set to 0. Step 1755 returns modified image SBG.
Step 1805 of morphologicalintersection 1800 receives images S and M as input (e.g., image SBG as generated at steps 1320 and 1330 of processsourceimage 1300, and marker file M as generated at step 1340 of processsourceimage 1300, as described above). Step 1810 creates a temporary binary image SM wherein, for each pixel coordinate (x,y), sM(x,y) is set to a value of 1 where s(x,y) has a value of at least 1; otherwise sM(x,y) is set to a value of 0. Step 1820 calls morphologicalreconstruction 1650 to calculate a grayscale morphological reconstruction of image SM utilizing marker file M. Step 1830 modifies input image S by setting each pixel s(x,y) to 0 where sM(x,y) has a value of zero. Step 1835 returns modified image S.
Step 1905 of findblobs 1900 receives image S as input (e.g., image SBG as generated at step 1350 of processsourceimage 1300, as described above). Step 1910 generates blob list B utilizing a blob extraction such as is known in the art and is generally called connected-component labeling.
Connected-component labeling consists of identifying connected regions of foreground pixels. In the present embodiment, a foreground pixel in image SBG is a pixel of nonzero value, while pixels of value 0 (e.g., watershed lines) are background pixels. The connected-component labeling method serves to assign a unique label to each region of connected foreground pixels, i.e., blobs. In the present embodiment, connected-component labeling has been implemented as follows. A label counter and an empty queue are initialized, and a row-major scan is performed on image SBG. If an unlabeled foreground pixel is encountered, the label value is incremented and the pixel is added to the queue. This operation initiates a subroutine that serves to identify all pixels belonging to a connected region. In the subroutine, the first pixel in the queue is assigned the current label value. This pixel is then removed from the queue, and all its unlabeled foreground neighbor pixels are added to the queue (“neighbor pixels” herein are the 8 pixels closest to the pixel of interest, known from graph theory as 8-connectivity). This repeats until the queue is empty, at which point the current label value has been assigned to all pixels belonging to this connected region, and the process exits the subroutine. The scan continues to search for the next unlabeled foreground pixel, which will lead to the identification of another connected region. The scan ends when all pixels in SBG have been scanned.
Connected-component labeling can thus be described by the following pseudocode in Fortran-style:
FOR each unlabeled foreground pixel in image SBG (following row-major scan): increment label value; add pixel to queue
  WHILE queue is not empty: assign current label to first pixel in queue; remove it from queue; add its unlabeled foreground neighbor pixels to the queue
After step 1910 is complete, step 1915 returns blob list B.
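The queue-based labeling described above can be sketched in Python; foreground is taken as nonzero pixels, and the function name and label-image output format are illustrative assumptions:

```python
from collections import deque

def label_components(img):
    """Connected-component labeling with 8-connectivity, as in step 1910:
    a row-major scan seeds a queue-based flood fill for each unlabeled
    foreground (nonzero) pixel. Returns a label image (0 = background)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                label += 1
                labels[y][x] = label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    # Queue the 8 neighbors that are unlabeled foreground.
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and img[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = label
                                q.append((ny, nx))
    return labels
```

Labeling pixels as they are enqueued (rather than dequeued) prevents the same pixel from entering the queue twice.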
Generally speaking, filterblobs 2000 applies moment-of-inertia type statistical measures to filter out blobs that do not behave as the particles intended to be counted. A number of the specific values used as screens may be set by considering the size of particles intended to be counted, and by analyzing images of samples and adjusting the values to include particles and exclude artifacts appropriately. The specific embodiment shown in
Step 2005 of filterblobs 2000 receives blob list B as input (e.g., blob list B as generated at step 1910 of findblobs 1900, as described above). Step 2007 initiates a loop that increments through each blob in blob list B. Step 2010 counts the pixels in the blob. This filtering step is to remove artifact blobs that arise from “hot” (unduly sensitive) sensor pixels. Step 2012 determines whether the number of pixels is equal to 1 (and because of the rationale underlying steps 2012 and 2014, the value of 1 is appropriate for any pixelated system and will not scale with image scale or particle size). If so, step 2014 removes the blob from B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2020.
Step 2020 calculates a best fit ellipse of pixel values in the blob; that is, step 2020 calculates major and minor axes a and b of the best fit ellipse. It should be noted that a and b are not limited to integer values, as the blob may be small and/or oriented at an angle with respect to horizontal and vertical axes of the imager. The intent of this screen is to remove blobs caused by residual hot pixels (e.g., hot pixels combined with other background effects), clumped hot pixels, and very small events caused by background effects.
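One common way to obtain axes a and b is from intensity-weighted second moments of the blob; whether steps 2020 and 2030 use exactly this convention is not specified here, so the 2·sqrt(eigenvalue) axis scale, the function name, and the (x, y, value) input format are assumptions for illustration. The same routine covers step 2030 by raising pixel values to the 4th power:

```python
import math

def fit_ellipse(pixels, power=1):
    """Moment-based ellipse fit for a blob (one plausible reading of steps
    2020/2030): weighted centroid, central second moments, eigenvalues of
    the 2x2 covariance, then semi-axes a >= b as 2*sqrt(eigenvalue).
    `pixels` is a list of (x, y, value); `power` raises values to, e.g.,
    the 4th power as in step 2030. Returns (a, b, eccentricity)."""
    w = [(x, y, v ** power) for x, y, v in pixels]
    tot = sum(v for _, _, v in w)
    cx = sum(x * v for x, _, v in w) / tot
    cy = sum(y * v for _, y, v in w) / tot
    mxx = sum((x - cx) ** 2 * v for x, _, v in w) / tot
    myy = sum((y - cy) ** 2 * v for _, y, v in w) / tot
    mxy = sum((x - cx) * (y - cy) * v for x, y, v in w) / tot
    # Eigenvalues of [[mxx, mxy], [mxy, myy]].
    tr, det = mxx + myy, mxx * myy - mxy * mxy
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + d, tr / 2.0 - d
    a, b = 2.0 * math.sqrt(l1), 2.0 * math.sqrt(max(l2, 0.0))
    ecc = math.sqrt(1.0 - (b / a) ** 2) if a > 0 else 0.0
    return a, b, ecc
```

Note that a and b are generally non-integer, consistent with the observation above that the blob may be small or tilted with respect to the imager axes.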
Step 2022 determines whether the area defined by 4ab is less than a minimum area. For a system with image scale of 2 μm/pixel, the minimum area may be about 2 pixels. Unless clumping of hot pixels is the only source of small, false events, the minimum area scales with the image scale. This value depends on the density of hot pixels, as a high density of hot pixels would increase the probability of clumping of multiple hot pixels, in which case the cut would likely have to be increased beyond 2 pixels. The minimum area also depends on the size of the particles of interest as well as the size and relative frequency of smaller, false events. The size histograms for particles of interest and small, false events may or may not overlap. In either case, the cut should be placed to average a net zero error in the count of particles of interest. If the particles of interest are significantly larger than about 5 pixels, the minimum area can be increased to improve the rejection of background artifacts, including smaller particles not of interest. Conversely, if no short-scale background features other than hot pixels are present, the cut could be reduced, or step 2022 removed altogether; if such features are present, reducing the cut would likely degrade performance. The minimum area also depends on the typical size scale of background features. If the typical size scale of background features is closer to the size of the particles of interest and the relative frequency of such background features is significant, it may be difficult to achieve satisfactory performance. In that case, it may be advantageous or necessary to improve the image resolution by, for instance, decreasing the image scale or utilizing a higher-performance imaging system.
If the ellipse area 4ab is less than the minimum area, step 2024 removes the blob from B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2026.
Step 2026 determines whether the area defined by 4ab from step 2020 is greater than a maximum area that may be, for example, 100 pixels. This screen is set up to conservatively remove events that are much larger than particles of interest; the maximum area may be set as high as about 100 pixels since other area filters applied in steps 2036 and 2044, discussed below, also serve to remove events larger than the particles of interest. The purpose here is to make the best cut in a histogram where a true population and a false population may exist. In the present case, the false events are larger than the true events. The maximum area may therefore scale with the image scale and the size of the particles of interest. In systems where the occurrence of large, false events is relatively rare, no significant performance changes may be expected by varying the maximum area over a wide range.
If the ellipse area 4ab is greater than the maximum area, step 2024 removes the blob from B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2030.
The ellipse fit performed in step 2020 can be biased by long “tails” associated with certain blobs. The area limits in decision steps 2022 and 2026 above are accordingly loose so that valid particles are not filtered out. A further filtering step compensates for this by utilizing a similar technique based on the 4th power of pixel intensities. Step 2030 calculates a best fit ellipse of the 4th power of pixel values in the blob; that is, step 2030 calculates major and minor axes a and b of the best fit ellipse formed by the 4th power of the pixel values. The eccentricity of this ellipse is defined as sqrt(1−(b/a)^2). Blobs in images may have outlying regions of lower intensity caused by image or imaging artifacts. For instance, local background variation at or very close to a particle may not be distinguished from the actual particle. Hence, a blob may include an intensity contribution from local background in addition to the intensity contribution from the particle. Particle movement during at least a portion of the image exposure, caused for instance by general sample motion, may produce an additional lower intensity contribution to the blob. Such an effect may also be caused by mechanical motion of the cartridge or of one or more imaging system components. Likewise, aberrations in the imaging system can produce, e.g., uniform blur, directional tails of lower intensity, and halos, all of which may be included in a blob. When determining the shape and size of a particle, it is advantageous to reduce or eliminate the contribution from artificial outlying regions of lower intensity. This can be achieved, for instance, by raising the pixel intensities to a greater power, which reduces the weight of lower intensity pixels. In an embodiment, the pixel values are raised to the 4th power. For other systems with different image or imaging properties, a different power may be optimal.
If the images are free of artificial, outlying regions of lower intensity, raw pixel values may be used. When CD4+ T-helper cells are the particles of interest, the eccentricity based screen removes events that are clearly too eccentric to originate from an approximately circular particle (e.g., a CD4+ T-helper cell).
Step 2032 removes events that are clearly too eccentric to originate from an approximately circular particle. The applicability of the calculated eccentricity is highly dependent on resolution of the imaging system. In an embodiment where a particle of interest has a diameter of only about 5 pixels, the eccentricity limit has to be relatively loose, such as 0.8. In a system with improved resolution relative to the particle size, the eccentricity limit can be made tighter (lower). The eccentricity limit depends on the types of artifacts present in the image. The optimal eccentricity limit is the value that, on average, leads to a net zero error in particle count. In an embodiment, a cut value in the range 0.75-0.85 has been found to be optimal.
Therefore, in an embodiment, step 2032 determines whether the eccentricity of the ellipse exceeds 0.8. If so, step 2034 removes the blob from B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2036.
Step 2036 determines whether the area defined by 16ab is greater than a size limit. Step 2036 removes events that are clearly too large to be a particle of interest, but because a more refined screen of particle size is performed following this step (steps 2040 to 2046, discussed below), the size limit is set conservatively loose. The screen implemented in step 2036 does, however, improve the quality of the input data to, and therefore the performance of, the procedure that follows in steps 2040 to 2046. In an embodiment, a size limit of approximately 50 pixels has been found to work well. Due to the presence of a more refined size selection procedure following this step, the size limit value is not critical. The value of the size limit scales with the image scale and the size of the particles of interest.
Therefore, in an embodiment, step 2036 determines whether the area defined by 16ab is greater than 50 pixels. If so, step 2034 removes the blob from B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2040.
An entropy based threshold can be utilized to remove residual background associated with each blob such that legitimate particles will still be counted but artifacts can be screened out. The intent of the following steps is to create the best estimate of particle size and to craft limits around the size to account for natural variation of the particles, noise, resolution effects, and optical blurring.
Step 2040 first calculates an entropy based threshold utilizing the “Kapur, Sahoo, and Wong Method” described in the paper, “A Survey of Thresholding Techniques” by P. K. Sahoo, S. Soltani and A. K. C. Wong, published in Computer Vision, Graphics, and Image Processing 41, at page 237. However, instead of applying the entropy based threshold globally as in this paper, the threshold is applied locally on an individual blob basis. Generally speaking, this method defines the probabilities of the original gray level distribution as p_i, where i is a particular grayscale value out of l possible levels in a grayscale range G (e.g., an integer within the range of 0 to l−1), and a variable P_t as

P_t = Σ_{i=0…t} p_i

for a given threshold candidate t. Further variables Hb(t) and Hw(t) are calculated as

Hb(t) = −Σ_{i=0…t} (p_i/P_t) ln(p_i/P_t)

Hw(t) = −Σ_{i=t+1…l−1} (p_i/(1−P_t)) ln(p_i/(1−P_t))

Finally, an optimal threshold t* is calculated as the gray level that maximizes Hb(t)+Hw(t), that is,

t* = ArgMax{Hb(t)+Hw(t)} for t ∈ G.
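The per-blob entropy threshold of step 2040 can be sketched as follows; this is an illustrative Python rendering of the Kapur, Sahoo, and Wong criterion, with the function name and the 256-level default as assumptions:

```python
import math

def kapur_threshold(pixels, levels=256):
    """Entropy-based threshold (Kapur, Sahoo, and Wong), applied per blob
    as in step 2040. p[i] is the fraction of pixels at gray level i; for
    each candidate t, Hb(t) and Hw(t) are the entropies of the populations
    at or below and above t, and t* maximizes their sum."""
    n = len(pixels)
    p = [0.0] * levels
    for v in pixels:
        p[v] += 1.0 / n
    best_t, best_h = 0, float("-inf")
    for t in range(levels - 1):
        Pt = sum(p[:t + 1])
        if Pt <= 0.0 or Pt >= 1.0:
            continue  # one class is empty; entropy sum is undefined
        hb = -sum(pi / Pt * math.log(pi / Pt)
                  for pi in p[:t + 1] if pi > 0)
        hw = -sum(pi / (1.0 - Pt) * math.log(pi / (1.0 - Pt))
                  for pi in p[t + 1:] if pi > 0)
        if hb + hw > best_h:
            best_h, best_t = hb + hw, t
    return best_t
```

For a blob with two well-separated intensity populations, the returned t* falls between them, so thresholding at t* isolates the bright particle core from residual background.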
Step 2040 also calculates a number called TSID as the threshold-subtracted integrated density of all pixels in the blob being processed, adds the TSID to the data structure of the corresponding blob in blob list B, and calculates the thresholded area of the blob.
Step 2042 determines whether the thresholded area is less than 11 pixels. If so, step 2046 removes the blob from blob list B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2044. Step 2044 determines whether the thresholded area is greater than 100 pixels. If so, step 2046 removes the blob from blob list B and filterblobs 2000 advances to step 2050. If not, filterblobs 2000 advances to step 2050 without removing the blob. Both the lower area limit used in step 2042 and the upper limit used in step 2044 depend on particle size, natural variation, noise, resolution effects, and optical blurring. That is, the best estimate of the actual particle area is provided by step 2040. The previous filtering on particle size has improved the data that is input to other steps in the process, or has reduced processing time by removing events that clearly are not associated with particles of interest. The lower area limit used in step 2042 and the upper limit used in step 2044 represent the size range of the particles of interest with an additional tolerance to account for imperfections due to, e.g., noise, limited resolution, and blur. The lower area limit used in step 2042 and the upper limit used in step 2044 scale with the image scale and the size of the particles of interest.
Step 2050 determines whether all blobs have been processed through the filters of filterblobs 2000 discussed above. If not, filterblobs 2000 returns to step 2007 to process the next blob in B. If all blobs have been processed, filterblobs 2000 continues to step 2060.
Step 2060 sets a variable IQR to the inter-quartile range of all TSIDs of blobs in blob list B, and a variable Q3 to the third quartile value of all TSIDs of blobs in blob list B. Step 2070 sets a scaling factor SCALE to 7, and a bright object threshold TB to Q3+IQR*SCALE. SCALE is an empirically determined parameter that may lie within the range of about 3 to 8. TB is approximately where the top value of the TSID distribution would have been, based on the bulk of the blob population, except for abnormally bright objects such as inclusions skewing the top end of the distribution. Thus, the loop defined by steps 2080 through 2094 filters a histogram of blob brightness. Images may contain multiple different classes of particles, each characterized by a typical blob brightness range. If the ranges are distinct or only partially overlap, it may be possible to separate the individual populations by making simple cuts in the histogram. In cases of overlap, TB may be set to minimize the number of blobs assigned to the wrong population. For example, in an embodiment the histogram contains the primary population, containing the particles of interest, and a class of extremely bright outliers. The overlap is statistically insignificant and TB can be placed using a simple inter-quartile approach. In this embodiment, the value of SCALE can be in the range from 4 to 8, and especially 7. In other embodiments with statistically significant overlap between populations, a narrower range may be required. Also, in some cases, more refined methods such as peak fitting may be applied to correctly assign blobs to individual populations.
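Steps 2060-2094 can be sketched as follows; the function names are illustrative, and the standard-library quartile routine is an assumption standing in for whatever quartile computation an implementation uses:

```python
import statistics

def bright_object_threshold(tsids, scale=7.0):
    """Steps 2060-2070: TB = Q3 + SCALE * IQR over the blob TSID values,
    used to identify abnormally bright outliers such as inclusions."""
    q1, _, q3 = statistics.quantiles(tsids, n=4)
    return q3 + scale * (q3 - q1)

def filter_bright(tsids, scale=7.0):
    """Loop of steps 2080-2094: keep blobs whose TSID does not exceed TB."""
    tb = bright_object_threshold(tsids, scale)
    return [t for t in tsids if t <= tb]
```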
Step 2080 initiates a loop that covers each blob in B. The next blob in B is considered in step 2090. A decision step 2092 determines whether TSID of the current blob exceeds TB. If so, step 2094 removes the blob from B. If not, or after step 2094, a decision step 2096 determines whether further blobs remain in B to be processed. If so, filterblobs 2000 returns to step 2090 for the next blob. If not, step 2098 returns the modified blob list B.
Step 2105 of correlatesourceimages 2100 receives blob lists BA, BB as input. For example, each of blob lists BA, BB may be blob lists as generated from a measurement field within a cartridge 130 or 130′ imaged to sensor 160 and processed using the processsourceimage 1300 method, as described above, with BA and BB being blob lists from the same measurement field utilizing different illumination modules 200, 300.
Step 2110 initializes a loop spanning each blob in BA; the remaining steps of correlatesourceimages 2100 determine whether there is a match in BB for each such blob, and if a match is found, whether it is the best available match. Step 2120 determines a position of the next blob ba(1) to be considered in BA, and defines a blob bb(1) as the first blob in BB. Step 2130 initializes a loop spanning each blob bb in BB. Step 2140 determines a position of the next blob bb to be considered in BB, calculates a variable DISTANCE1 between the position of blobs ba(1) and bb, and calculates a variable DISTANCE2 between the position of blobs ba(1) and bb(1). Step 2142 is a decision step that determines whether DISTANCE2 is greater than DISTANCE1. If so, step 2144 sets blob bb(1) as the current blob bb. If not, or after step 2144, step 2146 determines whether all blobs in BB have been processed, and returns to step 2140 until all blobs in BB have been processed. In this manner, steps 2130 through 2146 find the best spatially matched blob bb(1) for the current blob ba(1) being processed, and identify the distance DISTANCE2 between bb(1) and ba(1).
Step 2152 is a decision step that determines whether DISTANCE2 is greater than 16 μm. The choice of 16 μm as the maximum for DISTANCE2 reflects an expected maximum spatial registration tolerance between images from which blob lists BA, BB were generated, and may vary in embodiments within a range of 12 to 20 microns. This allows for registration shifts between the location of a particle as imaged under different illumination sources. Such shifts can be caused by, e.g., chromatic aberration, mechanical shifts between or during exposures, or particle movement within a cartridge. In an embodiment, the choice of 16 μm as the maximum for DISTANCE2 limits such shifts to a magnitude where it is possible to generate an initial set of correlated blobs imaged under different illumination sources with satisfactory reliability using a simple correlation distance. The maximum allowed value of DISTANCE2 should be set large enough to encompass the registration shifts characteristic of the system, and may be approximately twice the size of the particle of interest. This allows for the inclusion of some false correlations where the blobs originate from different particles. The optimal value of this limit may also depend on parameters including particle size, magnitude of registration shifts, and particle density. A more refined analysis of the distance between the blobs in a correlated pair, discussed in connection with
In an embodiment, if DISTANCE2 is not greater than 16 μm, blobs ba(1) and bb are at least an initial match, and correlatesourceimages 2100 advances to step 2170 for further processing. If DISTANCE2 is greater than 16 μm, blobs ba(1) and bb are not a match, and correlatesourceimages 2100 advances to step 2160. Step 2160 is a decision step that determines whether all blobs in BA have been processed. If no, correlatesourceimages 2100 returns to step 2120 to try to find matches for another blob ba(1). If yes, correlatesourceimages 2100 terminates at step 2165, returning blob list BC (defined below) as a list of blobs that correlate across blob lists BA and BB.
Step 2170 is reached only when there is a preliminary, acceptable match between blobs ba(1) and bb. At this point, further processing is done to determine whether the preliminary match is the best match, or whether there are better matches in BA for blob bb than blob ba(1). Step 2170 identifies blob ba as blob ba(1). Step 2172 initializes a loop spanning all blobs in BA; in this loop the blob being processed is identified as ba(2). Step 2175 determines a position of the next blob ba(2), and calculates a variable DISTANCE3 between the position of blobs ba(2) and bb. Step 2180 is a decision step that determines whether DISTANCE3 is less than DISTANCE2 (which was established as the distance between blobs ba(1) and bb in step 2140). If DISTANCE3 is not less than DISTANCE2, blob ba(2) is not a better match for blob bb than blob ba(1), so correlatesourceimages 2100 advances to step 2190. But if DISTANCE3 is less than DISTANCE2, blob ba(2) is a better match for blob bb than blob ba(1). In this case, correlatesourceimages 2100 advances to step 2185, wherein blob ba(2) is identified as the (current) “best match” of blob bb, by setting blob ba as blob ba(2) and setting DISTANCE2 as DISTANCE3. Thus, further blobs ba will not only have to be closer to blob bb than the first blob ba(1) to be considered the best match for bb, they will have to be closer to bb than ba(2).
After step 2185, correlatesourceimages 2100 advances to step 2190, another decision step that determines whether all blobs in BA have now been processed against blob bb. If not, correlatesourceimages 2100 returns to step 2175 to try to find a better match for blob bb. If so, correlatesourceimages 2100 advances to step 2192.
Step 2192 is a decision step that determines whether blob ba remains the same blob ba(1) that was found to be an initial match for blob bb(1) in steps 2130 through 2152. If not, correlatesourceimages 2100 reverts to step 2160 without adding anything to the correlated blob list (because the better match ba will eventually be found as the outer loop beginning at step 2110 advances to the appropriate ba). If ba remains the same blob ba(1), correlatesourceimages 2100 advances to step 2194, where blobs ba(1) and bb(1) are added to blob list BC with an indication that they are correlated blobs. From step 2194, correlatesourceimages 2100 advances to step 2160, discussed above, to finish looping through candidate blobs from BA and BB.
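The mutual best-match logic of steps 2110 through 2194 can be sketched as follows. This is a minimal illustration, not the claimed implementation: the function and variable names are assumptions, positions are taken to be (x, y) tuples in micrometers, and the 16 μm screen follows the embodiment described above.

```python
# Sketch of the two-list blob correlation described above
# (correlatesourceimages 2100).  Names and data layout are assumed;
# positions are (x, y) tuples in micrometers.
import math

MAX_DISTANCE_UM = 16.0  # expected maximum registration shift (step 2152)

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def correlate_blobs(list_a, list_b):
    """Return index pairs (ia, ib) where the blobs are mutual nearest
    neighbors within MAX_DISTANCE_UM of each other."""
    correlated = []
    for ia, a in enumerate(list_a):
        # steps 2130-2146: find the closest blob in B to a
        ib, b = min(enumerate(list_b), key=lambda kb: _dist(a, kb[1]))
        if _dist(a, b) > MAX_DISTANCE_UM:   # step 2152 screen
            continue
        # steps 2170-2192: accept only if a is also the closest blob in A to b
        ia_best = min(range(len(list_a)), key=lambda k: _dist(list_a[k], b))
        if ia_best == ia:
            correlated.append((ia, ib))     # step 2194: add to list BC
    return correlated
```

For example, if a second blob in A lies closer to a candidate blob in B, the first blob is not paired, mirroring the step 2192 check.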
The correlation method discussed in relation to
Events from more than two blob lists may be correlated by applying correlatesourceimages 2100 as described above, to pairs of blob lists. In an embodiment, blob lists A, B, and C are correlated. First, blob lists A and B are correlated using correlatesourceimages 2100. This generates a blob list AB(A,B) containing correlated events. In this discussion, the notation I(I,J) means a blob list representing blobs found only in blob list I based on correlation of input blob lists I and J, and IJ(I,J) means a blob list representing blobs correlating between blob lists I and J based on correlation of input blob lists I and J. Remaining uncorrelated events from lists A and B are placed in blob lists A(A,B) and B(A,B). Next, each event in correlated blob list AB(A,B) is assigned an image location, e.g., pixel coordinates, as the average image location of the two correlated blobs from lists A and B respectively. Blob list C is now processed three times by correlatesourceimages 2100 to correlate it with blob lists A(A,B), B(A,B), and AB(A,B). Correlation of blob lists C and A(A,B) leads to the generation of blob lists AC(A,B,C), A(A,B,C), and C(A,C), where blob list AC(A,B,C) contains blobs that spatially correlate across A and C but not B, blob list A(A,B,C) contains blobs from A that did not spatially correlate with blobs from B or C, and blob list C(A,C) contains blobs from C that did not spatially correlate with blobs from A. Likewise, correlation of blob lists C and B(A,B) leads to the generation of blob lists BC(A,B,C), B(A,B,C), and C(B,C). Correlation of C with AB(A,B) leads to the generation of blob lists ABC(A,B,C), AB(A,B,C), and C(AB,C), where blob list ABC(A,B,C) contains blobs that spatially correlated in A, B, and C, blob list AB(A,B,C) contains blobs that spatially correlated in A and B but not C, and blob list C(AB,C) contains blobs from C that did not spatially correlate with AB.
Finally, the events from C(A,C), C(B,C), and C(AB,C) are combined into a single blob list C(A,B,C) containing all the blobs in C that did not spatially correlate with blobs in A or B. The final output, representing all two-way and three-way correlations and remaining uncorrelated blobs, consists of blob lists ABC(A,B,C), AB(A,B,C), AC(A,B,C), BC(A,B,C), A(A,B,C), B(A,B,C), and C(A,B,C). Throughout, image locations for correlated blobs are set to the average of the individual image locations as defined in the original blob lists A, B, and C.
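The three-list cascade can be sketched as below. This is an illustrative assumption-laden sketch: it presumes a pairwise routine `correlate(L1, L2)` returning (matched pairs, leftovers of L1, leftovers of L2), uses averaged positions for correlated events as described above, and treats a C blob as fully uncorrelated only if it survives all three passes (the combination step described in the text).

```python
# Sketch of the A/B/C correlation cascade described above.  The
# pairwise `correlate` routine is supplied by the caller; all names
# are illustrative assumptions.

def average_pos(p, q):
    """Correlated events take the average of their members' positions."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def correlate_three(A, B, C, correlate):
    """Cascade a pairwise correlator over three blob lists; returns a
    dict keyed by the I(I,J)/IJ(I,J)-style notation used in the text."""
    pairs_ab, A_r, B_r = correlate(A, B)          # AB(A,B), A(A,B), B(A,B)
    AB = [average_pos(a, b) for a, b in pairs_ab]
    pairs_ac, A_only, C_a = correlate(A_r, C)     # AC(A,B,C), A(A,B,C), C(A,C)
    pairs_bc, B_only, C_b = correlate(B_r, C)     # BC(A,B,C), B(A,B,C), C(B,C)
    pairs_abc, AB_only, C_ab = correlate(AB, C)   # ABC(A,B,C), AB(A,B,C), C(AB,C)
    # C(A,B,C): a C blob is uncorrelated only if it survived every pass
    C_only = [c for c in C if c in C_a and c in C_b and c in C_ab]
    return {
        "ABC": [average_pos(p, c) for p, c in pairs_abc],
        "AB": AB_only,
        "AC": [average_pos(a, c) for a, c in pairs_ac],
        "BC": [average_pos(b, c) for b, c in pairs_bc],
        "A": A_only, "B": B_only, "C": C_only,
    }
```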
If a fourth blob list, D, is present, as would be generated in a four-color system, blob list D could then be correlated with blob lists ABC(A,B,C), AB(A,B,C), AC(A,B,C), BC(A,B,C), A(A,B,C), B(A,B,C), and C(A,B,C) generated above, also using correlatesourceimages 2100.
Step 2205 of compensatechromaticaberration 2200 receives blob list B as input. For example, blob list B may be a correlated blob list generated by the correlatesourceimages 2100 method, as described above, wherein B contains information of matched blobs ba, bb. Step 2210 sets up a loop spanning all correlated blobs (ba, bb) in B. Step 2220 calculates registration shifts Δx, Δy for the next blob (ba, bb), adds Δx, Δy to a temporary set of Δx, Δy and adds Δx, Δy to the information associated with blob (ba, bb) in B. Step 2230 is a decision step that determines whether all blobs (ba, bb) in B have been processed; if not, compensatechromaticaberration 2200 returns to step 2220 to process another blob (ba, bb), and if so, compensatechromaticaberration 2200 advances to step 2240. Step 2240 calculates a (Q3-Q1 Δx INTERVAL) and a (Q3-Q1 Δy INTERVAL) from the set of Δx, Δy established by step 2220, and sets up a temporary blob list B′ that is initially equal to blob list B.
Step 2250 sets up a loop that is performed for each blob in B′. Steps 2252 and 2256 are decision steps that determine whether Δx or Δy for a given blob exceeds 1.5 times the respective calculated (Q3-Q1 Δx INTERVAL) and (Q3-Q1 Δy INTERVAL). These steps remove false correlations and/or outliers from B′ such that the fit performed in step 2260 (
If the answer to either of the decisions in steps 2252 and/or 2256 is yes, compensatechromaticaberration 2200 advances to step 2254, which removes blob (ba, bb) from B′. After step 2254, or if steps 2252 and 2256 are answered no, compensatechromaticaberration 2200 advances to step 2258, another decision step that determines whether all blobs (ba, bb) in B′ have been processed. If not, compensatechromaticaberration 2200 returns to step 2250 to process another blob (ba, bb) in B′, and if so, compensatechromaticaberration 2200 advances to step 2260.
Step 2260 generates linear fit functions FX(x) and FY(y) from the information associated with each blob (ba, bb) in B′ by correlating Δx to x position and Δy to y position, respectively. This enables screening of blobs (ba, bb) on the basis of a fit of the typical shift of a blob in both dimensions as a function of its position; this is useful because spatial shift effects may depend on the initial position of a blob within a measurement field.
Having set up linear functions FX(x) and FY(y) based on the blobs with the most typical registration shifts as discussed in connection with steps 2240 through 2260, compensatechromaticaberration 2200 discards temporary blob list B′ and utilizes FX(x) and FY(y) for further screening of blob list B. Step 2270 sets up a loop that spans all correlated blobs (ba, bb) in B. For a given blob (ba, bb), step 2280 calculates FX(x) and FY(y) from the x, y position of each blob, an associated dx=Δx−FX(x) and dy=Δy−FY(y), and a two-dimensional residual displacement 2D_RESIDUAL_DISPL=sqrt(dx²+dy²). Step 2282 is a decision step that determines whether 2D_RESIDUAL_DISPL is greater than 8 μm. As with similar screening values discussed above, the value of 8 μm used in step 2282 depends on the size of the particles being counted, the possibility of random movement, and the expected maximum spatial registration tolerance between the images from which blob lists BA, BB were generated; the screening value could vary in other embodiments within a range of about 6 μm to 10 μm. If step 2282 determines that 2D_RESIDUAL_DISPL is greater than 8 μm, blob (ba, bb) is removed from B in step 2284. This is another screen based on shifts between the locations at which the same particle is imaged under different illuminators. The threshold on 2D_RESIDUAL_DISPL is set to be smaller than the diameter of the particle of interest, while allowing for some degree of random particle movement; in an embodiment where the particle of interest has a diameter of approximately 10 μm, a useful threshold is 8 μm. An optimal value may, for example, be based on analysis of a tradeoff between missing true correlations due to registration errors and including false correlations in cases with a high particle density or a high likelihood of particles being clumped together; the optimal value therefore depends on the particle size.
If step 2282 determines that 2D_RESIDUAL_DISPL is less than 8 μm, or after blob (ba, bb) is removed from B, compensatechromaticaberration 2200 advances to step 2286.
Step 2286 is a decision step that determines whether all correlated blobs (ba, bb) in B have been processed. If not, compensatechromaticaberration 2200 returns to step 2280 to process another blob (ba, bb), and if so, compensatechromaticaberration 2200 returns modified blob list B in step 2290.
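The screening pipeline of compensatechromaticaberration 2200 can be sketched as follows. This is an assumption-laden sketch, not the claimed implementation: the Q3−Q1 screen of steps 2252/2256 is interpreted here as a Tukey-style outlier fence, the linear fits use ordinary least squares, and all helper names are illustrative.

```python
# Sketch of registration-shift screening (compensatechromaticaberration
# 2200): IQR outlier removal, position-dependent linear fit of the
# shift, then an 8 um residual screen.  Names and the Tukey-fence
# interpretation of the Q3-Q1 screen are assumptions.
import math

RESIDUAL_LIMIT_UM = 8.0  # step 2282 threshold from the text

def _quartiles(vals):
    s = sorted(vals)
    n = len(s)
    return s[n // 4], s[(3 * n) // 4]

def _linear_fit(xs, ys):
    """Ordinary least-squares line y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
    return m, my - m * mx

def screen_registration(pairs):
    """pairs: list of ((xa, ya), (xb, yb)) matched blob positions in um.
    Returns the subset whose residual shift is within RESIDUAL_LIMIT_UM
    of the position-dependent fit."""
    dx = [b[0] - a[0] for a, b in pairs]
    dy = [b[1] - a[1] for a, b in pairs]
    # steps 2240-2258: drop outliers with a Tukey-style Q3-Q1 fence
    q1x, q3x = _quartiles(dx)
    q1y, q3y = _quartiles(dy)
    fx_lo, fx_hi = q1x - 1.5 * (q3x - q1x), q3x + 1.5 * (q3x - q1x)
    fy_lo, fy_hi = q1y - 1.5 * (q3y - q1y), q3y + 1.5 * (q3y - q1y)
    keep = [i for i in range(len(pairs))
            if fx_lo <= dx[i] <= fx_hi and fy_lo <= dy[i] <= fy_hi]
    # step 2260: fit expected shift as a linear function of position
    mx, cx = _linear_fit([pairs[i][0][0] for i in keep], [dx[i] for i in keep])
    my, cy = _linear_fit([pairs[i][0][1] for i in keep], [dy[i] for i in keep])
    kept = []
    for i, (a, b) in enumerate(pairs):
        rx = dx[i] - (mx * a[0] + cx)
        ry = dy[i] - (my * a[1] + cy)
        if math.hypot(rx, ry) <= RESIDUAL_LIMIT_UM:  # step 2282 screen
            kept.append((a, b))
    return kept
```

A pair whose shift deviates strongly from the fitted position-dependent trend is dropped, even when its raw correlation distance was acceptable.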
Step 2305 of filterlowintensitycorrelations 2300 receives initial blob list BA and correlated blob list B as input. Step 2310 performs a Gaussian distribution fit,
to TSID values associated with blobs in BA (see, e.g., the explanation of step 2040,
Step 2320 is a decision point that determines whether
is greater than a parameter threshold, which may be set within a range of about 1.5 to 2.0, and is typically 1.7. If not, filterlowintensitycorrelations 2300 advances to step 2360, described below. If so, filterlowintensitycorrelations 2300 advances to step 2330.
Step 2330 calculates a parabolic fit of the usual form ax²+bx+c to intensity data of blob list BA in the range [maxx[yall(x)] : 3*maxx[yall(x)]]. The range over which the data is fitted thus starts at the peak of the intensity distribution from blob list BA and extends to three times that peak position. The choice of 3 as the multiplier that defines the top end of the range is set to clearly exceed the extent of a dim, false event population in the event that this is the tallest peak in the histogram; in embodiments, this multiplier might vary within the range of about 2 to 5. A decision step 2340 determines whether the parabolic coefficient a is greater than zero. If so, the range [maxx[yall(x)] : 3*maxx[yall(x)]] fits an upward-facing parabola, and the vertex of the parabola indicates a demarcation between two distinct distributions. That is, the parabolic fit serves to locate the “valley” between two, possibly overlapping, populations in the histogram, if two populations exist; the fit range is set to extend across the valley. Therefore, if a>0, filterlowintensitycorrelations 2300 advances to step 2350, which removes correlated blobs from B wherein TSID < −b/(2a) (the parabola vertex). If a≤0, or after step 2350, filterlowintensitycorrelations 2300 advances to step 2355 and returns blob list B.
If filterlowintensitycorrelations 2300 reaches step 2360 as a result of step 2320, further screening is attempted. Step 2360 calculates median[ycorr(x)] of TSIDs of each blob in correlated blob list B. A decision step 2370 determines whether x0+2*w (from the Gaussian fit determined in step 2310) is greater than median[ycorr(x)] from step 2360. If so, the correlated blob distribution does not include a significant population in addition to the population captured by the Gaussian distribution fit, and the population is considered well behaved. Therefore if x0+2*w>median[ycorr(x)], filterlowintensitycorrelations 2300 advances to step 2394 without further filtering. The factor 2 used as a multiplier for w may vary, in embodiments, between values of about 1.5 and 3.
If step 2370 determines that x0+2*w ≤ median[ycorr(x)], the correlated blob distribution includes a population with higher values than predicted by the Gaussian distribution, and a chance remains that the distribution includes a peak of false events. In this case, filterlowintensitycorrelations 2300 advances to step 2380, which again calculates a parabolic fit of the form ax²+bx+c, this time to data within the range of [x0 : 3*x0].
A decision step 2390 determines whether the parabolic coefficient a is greater than zero. If so, the range [x0 : 3*x0] fits an upward-facing parabola, and the vertex of the parabola indicates a demarcation between two distinct distributions. Therefore, if a>0, filterlowintensitycorrelations 2300 advances to step 2392, which removes correlated blobs from B wherein TSID < −b/(2a) (the parabola vertex). If a≤0, or after step 2392, filterlowintensitycorrelations 2300 advances to step 2394 and returns blob list B.
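The valley-finding step can be sketched as below. This is an illustrative sketch under stated assumptions: the histogram is passed in as precomputed (x, y) points, the quadratic fit uses ordinary least squares solved by Cramer's rule, and the function names are hypothetical.

```python
# Sketch of the dim-population cutoff above: fit a parabola to the
# intensity histogram between the peak and 3x the peak position and,
# if it opens upward (a > 0), return the vertex -b/(2a) as the TSID
# cutoff.  Histogram inputs and helper names are assumptions.

def _polyfit2(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c; returns (a, b)."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(x * x for x in xs)
    s3 = sum(x ** 3 for x in xs); s4 = sum(x ** 4 for x in xs)
    t0 = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    # normal equations: [[s4,s3,s2],[s3,s2,s1],[s2,s1,n]] @ [a,b,c] = [t2,t1,t0]
    M = [[s4, s3, s2], [s3, s2, s1], [s2, s1, n]]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    def repl(col, v):
        return [[v[r] if c == col else M[r][c] for c in range(3)] for r in range(3)]
    a = det(repl(0, [t2, t1, t0])) / D
    b = det(repl(1, [t2, t1, t0])) / D
    return a, b

def intensity_cutoff(hist_x, hist_y, peak_x):
    """Return a TSID cutoff from histogram points in [peak_x, 3*peak_x],
    or None when the fitted parabola does not open upward."""
    pts = [(x, y) for x, y in zip(hist_x, hist_y) if peak_x <= x <= 3 * peak_x]
    a, b = _polyfit2([p[0] for p in pts], [p[1] for p in pts])
    if a > 0:  # upward parabola: the vertex marks the inter-population valley
        return -b / (2 * a)
    return None
```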
The brightness of the particles of interest, e.g., CD4+ T-helper cells, may vary significantly due to both biological variation and measurement related effects. In certain embodiments, the camera sensor (e.g., sensor 160) has a wide dynamic range, for example 16 bits, to accommodate this brightness variation. Alternatively, for a system utilizing a camera sensor with a smaller dynamic range, for example 8 bits, it may not be possible to find a single exposure time for which all particles are above the detection limit without some particles reaching saturation. Saturation may affect the apparent properties of a particle of interest in such a way that it is falsely rejected by the particle identification process (e.g., the methods and subroutines called therein, as described in
Therefore, in an embodiment, the dynamic range of an 8-bit camera sensor is extended by acquiring multiple images at different exposure times, where the dimmest particles are properly recorded at the longest exposure time and the brightest particles are properly recorded at the shortest exposure time. For example, step 840, described in connection with
In another embodiment, the dynamic range of an 8-bit camera sensor is extended by, in step 840, acquiring multiple images at a constant exposure time set such that no particles of interest are saturated. Prior to performing step 860 in
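The multi-exposure approach of the first embodiment above can be sketched as follows. This is a minimal sketch, assuming a short/long exposure pair, per-pixel selection of the longest unsaturated reading, and rescaling to the longest exposure time; the 250-count saturation guard and all names are illustrative assumptions, not the claimed method.

```python
# Sketch of extending an 8-bit sensor's dynamic range with multiple
# exposures: for each pixel, take the longest unsaturated exposure and
# rescale it to the reference (longest) exposure time.

SATURATION = 250  # counts; guard just below the 8-bit maximum of 255

def merge_exposures(images):
    """images: list of (exposure_time, pixels) sorted by increasing
    exposure time.  Returns per-pixel intensities normalised to the
    longest exposure."""
    t_ref = images[-1][0]
    merged = []
    for i in range(len(images[0][1])):
        for t, pix in reversed(images):       # longest exposure first
            if pix[i] < SATURATION:
                merged.append(pix[i] * (t_ref / t))
                break
        else:
            # saturated even at the shortest exposure: rescale it anyway
            t, pix = images[0]
            merged.append(pix[i] * (t_ref / t))
    return merged
```

A bright particle saturated in the long exposure is thus recovered from the short exposure, while dim particles retain the long exposure's sensitivity.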
In this section, methods and devices for reliably performing passive continuous flow in a fluidic channel are described. One method and device: (1) utilizes gravity to provide driving pressure; (2) starts and stops liquid flow in a controlled manner; and (3) delivers known quantities of liquid into the channel. Certain embodiments described herein further provide continuous flow of a known liquid volume through a channel, with flow terminating before the channel is completely drained of liquid. As disclosed herein, this effect may be achieved by the following process, beginning with filling an inlet port with a known volume. Pressure-driven flow due to gravity and surface tension moves the liquid through a channel to an outlet port. Introduction of a wicking pad located near the outlet port absorbs the liquid and ensures that flow continues until all the liquid in the inlet port has entered the channel. Proper separation of the wicking pad from the outlet port, design of outlet port geometry, and control of solid-liquid-gas surface tension ensures that flow terminates before the channel is drained of liquid. The wicking pad further prevents backflow of liquid through the outlet port into the channel.
The term “surface tension” is used herein in relation to the surface energies of the solid-liquid, liquid-gas, and solid-gas interfaces associated with the fluidic cartridge. Surface tension or surface energy impacts the ability of a liquid to wet a solid surface, characterized by a liquid-solid-gas interface. In the present disclosure, exemplary solids include plastics and plastics with modified surface properties. Exemplary liquids include aqueous solutions, including aqueous solutions with surface tensions modified by surface active components such as surfactants or amphiphilic molecules. An exemplary gas is air.
For applications, such as in-vitro diagnostics, a liquid 2420 (such as an aqueous solution) may be introduced into channel 2412 at inlet port 2414 of fluidic cartridge 2400, as shown in
Depending on outlet port 2416 geometry (e.g., diameter and shape) and surface tension associated with the liquid, solid cartridge material, and gas (typically air), outlet port 2416 acts as a capillary valve with a characteristic burst pressure. Referring to
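For an order-of-magnitude sense of the burst pressure of such a capillary valve, the Young-Laplace relation for a circular opening, ΔP ≈ 4γ|cos θ|/d, can be used. The sketch below is illustrative only; the document does not specify this formula or these values, so the surface tension, contact angle, and port diameter are assumptions.

```python
# Order-of-magnitude sketch of capillary burst pressure at a circular
# outlet port via the Young-Laplace relation dP = 4*gamma*|cos(theta)|/d.
# All parameter values are illustrative assumptions.
import math

def burst_pressure(gamma_n_per_m, contact_angle_deg, diameter_m):
    """Burst pressure in Pa for surface tension gamma (N/m), contact
    angle (degrees), and port diameter (m)."""
    return 4 * gamma_n_per_m * abs(math.cos(math.radians(contact_angle_deg))) / diameter_m

# e.g., an aqueous sample (gamma ~ 0.072 N/m) at a 1 mm port with a
# 120 degree contact angle gives a burst pressure of ~144 Pa.
```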
Once surface tension at outlet port 2416 is overcome by the pressure exerted by liquid 2420 at outlet 2416, liquid 2420 begins to flow out of outlet port 2416, as shown in
In one embodiment, a tilt may be introduced to the fluidic cartridge so as to alter the pressure differential between the inlet port and the outlet port. As shown in
Regardless of the specific configuration used (e.g., level cartridge 2400 or tilted cartridge 2440), a fluidic column builds up at outlet port 2416 such that at some point liquid flow stops when the pressure at the outlet port balances the pressure at the inlet port. This condition does not always guarantee that all of the liquid in inlet port 2414 flows through channel 2412 to outlet port 2416. One way to maintain liquid flow through channel 2412 is to introduce a wicking pad, which essentially acts as a reservoir for absorbing liquid. As will be explained below, the wicking pad acts to reduce the column height at the outlet port such that liquid flow is maintained.
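The stop condition above is a simple hydrostatic balance, which can be sketched numerically. This toy calculation assumes an aqueous sample, neglects capillary terms, and uses hypothetical column heights; it is not part of the disclosed apparatus.

```python
# Toy hydrostatic-balance calculation for the stop condition described
# above: flow stops when the liquid column pressure at the outlet
# matches that at the inlet.  Densities, heights, and the neglect of
# capillary pressure terms are assumptions.

RHO = 1000.0  # kg/m^3, aqueous sample
G = 9.81      # m/s^2

def hydrostatic_pressure(height_m):
    """Pressure (Pa) at the base of a liquid column of the given height."""
    return RHO * G * height_m

def flow_stopped(inlet_height_m, outlet_height_m, tolerance_pa=1.0):
    """True when the inlet and outlet columns balance to within tolerance."""
    dp = hydrostatic_pressure(inlet_height_m) - hydrostatic_pressure(outlet_height_m)
    return abs(dp) <= tolerance_pa
```

For millimeter-scale columns the pressures involved are only a few pascals, which is why wicking pad placement and surface tension dominate the behavior.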
As shown in
Certain embodiments require that the liquid remain in the channel at all times during liquid flow and after the inlet has emptied. For instance, an in-vitro diagnostic may require the biological sample in the liquid to incubate in the channel for a period of time so as to allow the sample to chemically react with reagents that are immobilized to the channel surface. Capillary pressures obtained by wicking pad 2452 can be large enough to pull liquid 2460 from channel 2412 in an unrestrained or uncontrollable manner, causing channel 2412 to go dry or be filled with detrimental gas bubbles. Liquid flow from outlet port 2416 to wicking pad 2452 is affected by a number of factors: absorbance properties of wicking pad 2452 (determined by material composition), geometrical placement of wicking pad 2452 with respect to outlet 2416, the physical geometry of cartridge features like inlet and outlet ports 2414 and 2416, and the surface energies of cartridge materials and liquids (determined by material composition, surface treatments, and time-dependent surface adsorption). One or more of these properties can be optimized for desired performance. For instance, surface energies around outlet port 2416 can be modified by plasma treatment to promote wetting of the solid material by liquid 2460.
In an embodiment, a small gap 2464 is introduced between wicking pad 2452 and outlet port 2416 to prevent draining of channel 2412 (see
An embodiment also employs the use of ridge or rail features at the outlet port to directionally steer the liquid to the wicking pad. Surface tension forces associated with the sharp corners of the rail preferentially direct the liquid along the rail towards the wicking pad in a more controlled manner.
This section is divided into the following subsections: Cartridge and Lid Visual and Tactile Features; Uniform Dried Reagent Placement in Inlet Port; Exemplary Performance of Uniform Dried Reagent Placement in Inlet Port; Demonstrations of Uniform and Nonuniform Staining in Dried Reagent Cartridges; Sample Hold and Release Cartridge; Exemplary Performance of Sample Hold and Release Cartridge; and Cartridge/Instrument Control Features.
When lid 2660 is in the open position shown in
Inner region 2625 of inlet port 2620 connects with a fluidic channel that forms a detection region 2700 that extends for a distance down the length of the cartridge, providing multiple fields of view for imaging thereof. Downstream of detection region 2700, the fluidic channel connects with a vent 2690. Vent 2690 has a small channel cross section such that the expansion at the outlet of vent 2690 forms a capillary gate, thereby stopping flow of the fluid sample. Alternatively, vent 2690 may be configured with a cross section much larger than that of the fluidic channel such that the expansion at the inlet to vent 2690 will result in a capillary gate.
In its closed position, lid 2660 may function to reduce evaporation from cartridge 2600, extending the time that may pass between sample loading and readout of cartridge 2600 using, e.g., systems 100, 100′ shown in
Lid 2730 of cartridge 2600′ (shown in
A cartridge embodiment is now described that utilizes engineered dried reagent methods to deliver accurate analyte detection directly from a small volume liquid sample, e.g., a volume of about 10 microliters. Examples of analytes include particle analytes such as CD4+ T-helper cells, other cell types, bacteria, viruses, fungi, protozoa, and plant cells, and non-particle analytes such as proteins, peptides, prions, antibodies, micro RNAs, nucleic acids, and sugars.
Planar plastic substrate 2810 and plastic upper housing component 2820 are formed of cyclic olefin polymer (COP), although other plastics (e.g., polystyrene) have been successfully used in the same configuration. Planar plastic substrate 2810 has approximate dimensions of 1 mm×20 mm×75 mm. Cartridge 2830 features a “bulls-eye” inlet port 2835 that has an outer region 2840 adjoining an inner region 2845 that may be D-shaped, as shown. Inner region 2845 may also be shaped differently from the D-shape shown, in particular an O-shape has been successfully demonstrated. Inner region 2845 connects with a fluidic channel 2850, leading to a vent opening 2860. A detection region 2870 forms part of fluidic channel 2850, as shown. In the embodiment shown in
Part or all of inner surfaces 2812, 2822 of planar plastic substrate 2810 and plastic upper housing component 2820 respectively may be treated with an argon/oxygen plasma to render these surfaces hydrophilic. A hydrophilic surface promotes uniform capillary driven flow in the final assembled cartridge 2830. Experiments have shown that immediately following plasma treatment, a water contact angle of surfaces 2812, 2822 is less than 10 degrees; relaxation in dry air results in a stable contact angle of approximately 30 degrees.
In the embodiment shown in
This example provides a demonstration of using engineered dried reagent methods to deliver a useful, single-step assay cartridge that delivers an accurate CD4 T-cell count directly from a small volume of whole blood, e.g., a volume of 10 microliters. A liquid reagent formulation containing 1% sucrose, 0.2% PEG8000, 1% bovine serum albumin (mass/volume %'s), phycoerythrin-labeled anti-CD3 monoclonal antibody (0.4 μg/mL), Alexa647-labeled anti-CD4 monoclonal antibody (0.4 μg/mL), and 25 mM HEPES buffer was prepared. A robotic non-contact micro-dispenser equipped with a pressure driven solenoid valve print head (Biojet, Bio-Dot, Inc.) was used to deposit the liquid reagent formulation into an array of droplets 2847 in a pre-determined pattern on substrates 2810. Individual droplets 2847 were 25 nanoliters in volume, positioned with a center-to-center spacing of 0.5 millimeters in a 62-spot pattern that approximated the D-shaped inlet opening 2845. The micro-dispenser included a temperature and humidity-controlled enclosure. Deposition of the 62-spot pattern was performed at 21 to 24° C. and 65% relative humidity. The print pattern is conceptually shown in
Each assembled cartridge 2830′ was placed on a flat surface and approximately 10 microliters of whole blood was added via transfer pipet to inlet port 2840 containing dried reagent coating 2848. This step initiated rehydration of dried reagent 2848. When the blood contacted the entrance to fluidic channel 2850, it was drawn in by capillary forces. Blood-filled cartridges 2830′ were allowed to incubate on a bench top at ambient temperature (˜21° C.) for 20 minutes. Absolute CD4+ T cell counts were generated using a reader instrument as described above (e.g., system 100′,
In this subsection, descriptions and schematics related to the use of dried reagents integrated in a cartridge are provided. Specifically, schematic representations of uniform and non-uniform sample staining by rehydrated dried reagents are described. A dried reagent coating should be positioned to yield spatially uniform reagent-sample interactions within a detection region of a cartridge. Reagent-sample interactions include, for example, rehydration of a dried reagent and staining of the sample. Because fluid flow is generally laminar in the cartridges herein, mixing in the width direction of the fluidic channels of these cartridges is minimal (occurring primarily by diffusion). Thus, when a dried reagent layer is nonuniform, the resulting sample staining may be non-uniform across the channel width. This phenomenon may be observed by visual analysis of fluorescence images in the detection region. Uniform staining is important because particle counts (e.g., CD4+ T-helper cell counts) may be affected by staining; that is, when a particle identification system (e.g., systems 100, 100′) counts particles, and in particular counts particles in multiple measurement fields, the statistical validity of the counts and the variation among the counts per measurement field will be adversely affected if staining is nonuniform.
When dried reagent deposition is spatially uniform and rehydration rates have been properly designed into a lyophilization formulation, a liquid sample will stain uniformly throughout a detection region. Uniform staining was observed, for example, in the demonstration discussed above. Fluorescence images in the detection region were analyzed, and uniform staining was confirmed by visual analysis of sets of digital images and by count accuracy, illustrated in
In this subsection, cartridge features and methods of their use for selectively holding and releasing fluid flow in a cartridge are disclosed. Such features are useful because they facilitate control of incubation time of a sample within a cartridge, for example to control rehydration of a reagent and/or exposure of the sample to a reagent. That is, holding a liquid sample in the inlet port may be advantageous in certain applications in which a reagent dissolution step is required. The hold time can be selected for optimum dissolution/rehydration. One way to provide such control is to provide a frangible surface connected with a fluidic path such that before the surface is broken, air trapped in the fluidic path stops the advancement of fluid, but after the surface is broken, the air may escape such that capillary forces can draw the fluid towards the broken surface. Upon addition of a liquid sample, the liquid “seals” the slot-shaped entrance to the fluidic channel and there is no path for the air in the channel to escape. As a result, the sample sits in the inlet port without substantively entering the fluidic channel.
When lid 3060 is in the open position shown in
The hydrophilic surfaces of inlet port 3020 generate a capillary force that enables acquisition of a sample by simply inverting cartridge 3000 and placing inlet port 3020 onto a blood droplet on an upturned finger. Alternatively, the blood droplet can contact cartridge inlet port 3020 while the cartridge is on a surface such as a table top. Alternatively, the sample can be transferred into port 3020 using a transfer device such as a transfer pipette or other dedicated device (e.g., DIFF-SAFE® blood tube adapter). If trapped air within fluidic channel 3100 did not stop the sample, the hydrophilic surfaces and small surface geometry of fluidic channel 3100 would continue to draw the sample through fluidic channel 3100. However, in the lid position shown in
After a sample is loaded into inlet port 3020 while cartridge 3000 has lid 3060 in the first or open position shown in
Cartridges 2990 with integrated dried reagents were prepared and assembled as described in connection with
In many applications, it is desirable to incorporate system features that ensure proper operation of an assay protocol, cartridge, and instrumentation. In this subsection, various embodiments associated with system quality controls are described.
The embodiments described here are based on a cartridge (e.g., one of cartridges 130, 130′, 2500, 2600, 2600′) that incorporates one or more fluidic channels that serve as sample chambers with detection regions. By incorporating inlet and outlet ports, the fluidic channels facilitate performing sequential fluidic assay steps. In many instances, it is desirable to know if a fluidic channel has sufficient liquid volume for performing a particular assay operation.
Detection of a liquid in a certain region, such as the detection region in a cartridge, may rely on, for example, electrical or optical methods. Presence of a fluid may be detected optically by relying on properties of the fluid that differ from properties of a material replaced by the fluid, such as air or another liquid or fluid. If the fluid is more or less absorptive at least at a certain wavelength, its presence may be detected by an absorption measurement. If the fluid contains fluorescent material, its presence may be detected by performing a fluorescence measurement. Thus, sample addition, rehydration of dried reagents, and proper staining of the sample by the reagents may all be considered control features for evaluating assay validity. These features are viewable by an imager within one or more measurement fields of a cartridge.
In an embodiment, a fluorescence measurement is performed to read out the results of an assay such as a fluorescent immunoassay or a fluorescent immunostaining assay. The assay itself may involve incubating a sample with fluorescent material prior to the sample entering a detection region of a cartridge. Presence of the sample may be detected by detecting the fluorescent material utilizing the same fluorescence measurement system that is used for the assay. This method has the benefit that it will detect the presence of a required assay reagent and may be configured to determine a value indicative of the amount of fluorescent material present in the detection region. This value may be used for calibration purposes. In cases where it is possible for the fluorescent material to populate the region without actual sample addition, this method may be utilized exclusively for detecting presence, and optionally, an amount of fluorescent material present.
It is also possible to deduce sample presence from detecting properties of an assay requiring the presence of both sample and assay reagents. In an embodiment, a cartridge is used to measure certain analytes. Successful detection of at least some of these analytes may be used as a measure of sample presence, as well as presence of assay reagents and validity of the assay. The detection scheme utilized may be the same as that used to read out actual assay results. In another embodiment, the presence of sample, and optionally the presence of reagents as well as assay validity, may be deduced from data recorded to determine the assay results, with no need to perform measurements in addition to those being done to perform the assay.
Analytical methods are used, for example, to detect one or more changes in parameters that can indicate incomplete liquid fill of a fluidic channel or sample chamber. In one embodiment, a cartridge and reader system are used to identify and/or count a certain particle type within a sample. By counting the particles in discrete locations (e.g., measurement fields) in a fluidic channel, count statistics can be used to identify changes indicative of incomplete channel fill. For example, a sudden change in particle count that exceeds a predetermined amount (e.g., empirically derived) may indicate incomplete channel fill. In another embodiment, a large percent coefficient of variation (% CV, defined as the ratio of sample standard deviation to the mean, expressed as %) for a series of measurement fields across a fluidic channel may indicate incomplete liquid fill. Therefore, in an embodiment, count % CV across a fluidic channel is compared to a Poisson-limited % CV. A count % CV that exceeds the Poisson-limited % CV by a given amount can be interpreted as an incomplete channel fill. A demonstration of this embodiment is now provided.
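The % CV comparison described above can be sketched in a few lines of Python. This is an illustrative sketch, not the reader's actual software: the Poisson-limited % CV is taken as 100/sqrt(mean count), and the factor by which the observed % CV must exceed that limit is a placeholder assumption that would, in practice, be derived empirically.

```python
import statistics

def poisson_limited_cv(counts):
    """Expected % CV when only counting (Poisson) statistics contribute:
    100 / sqrt(mean count)."""
    return 100.0 / statistics.mean(counts) ** 0.5

def flag_incomplete_fill(counts, excess_factor=2.0):
    """Flag a channel as under-filled when the observed count % CV exceeds
    the Poisson-limited % CV by `excess_factor` (the factor is a placeholder
    assumption, not a value from the experiments described here)."""
    cv = 100.0 * statistics.stdev(counts) / statistics.mean(counts)
    return cv > excess_factor * poisson_limited_cv(counts)

# Twelve measurement fields with similar counts: properly filled.
filled_ok = flag_incomplete_fill([90, 85, 95, 88, 92, 91, 87, 93, 89, 90, 86, 94])
# Last three fields nearly empty: flagged as under-filled.
underfilled = flag_incomplete_fill([90, 85, 95, 88, 92, 91, 87, 93, 89, 1, 0, 2])
```

The example counts are hypothetical; only the comparison logic mirrors the criterion described in the text.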
Tests were performed on a system that included a cartridge configured for detection of T-helper cells in a whole blood sample, and an instrument for identification and counting of the T-helper cells in the cartridge through fluorescence imaging. The cartridge included a fluidic channel with a detection region, such as the cartridges described herein. A blood sample was provided; before the blood sample entered the detection region, the sample was mixed with an immunostain including anti-CD3 antibodies labeled with Phycoerythrin (PE) and anti-CD4 antibodies labeled with Alexa647 (A647). The instrument recorded PE fluorescence images and A647 fluorescence images of twelve measurement fields along the fluidic channel. The images were analyzed using parts or all of the software routine described in
These tests demonstrated the use of image analysis to determine that sufficient sample and/or detection reagent was added to a fluidic channel in an assay cartridge. In an experiment, three cartridges were imaged. Cartridge A was known to have a properly filled detection region, whereas cartridges B and C were intentionally under-filled such that some measurement fields contained no sample.
Numerous methods may be used to identify sudden value changes as well as their location. In the present experiment, the presence of a sudden change in the number of T-helper cells as a function of channel position was identified by calculating the coefficient of variation (% CV) for the T-helper cell count. Cartridges A, B, and C had % CVs of 16%, 61%, and 223%, respectively. For comparison, the Poisson-limited % CV is 11%, i.e., the expected % CV for a measurement subject only to counting-statistics errors averages 11%. Clearly, cartridges B and C had abnormally large % CVs, indicating a partially filled channel.
Further analysis was performed to determine whether the large % CVs for cartridges B and/or C were caused by a single, sudden drop in T-helper cell count or by highly variable T-helper cell counts throughout the channel. For measurement fields 2-12, a relative change compared to the preceding measurement field was calculated as [count(i)-count(i-1)]/[count averaged over all measurement fields], where i is the measurement field number. The results are shown in Table 1 below.
Since an empty measurement field is expected to lead to a near-zero count, a partially filled channel was diagnosed by relative changes smaller than −100% and/or greater than +100%. Cartridge A showed no such changes, while cartridges B and C showed a relative change smaller than −100% at measurement fields 10 and 3, respectively. It was deduced from these results that the detection region of cartridge A was properly filled while the detection regions of cartridges B and C were filled only through measurement fields 9 and 2, respectively.
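The relative-change criterion above can be sketched as follows. The counts are hypothetical (not data from the experiment); the function implements the [count(i)-count(i-1)]/[channel-average count] formula and flags fields where the change exceeds ±100%.

```python
def locate_fill_boundary(counts):
    """Return the 1-based measurement-field numbers at which the relative
    change versus the preceding field, normalized by the channel-average
    count, falls outside the ±100% window."""
    mean = sum(counts) / len(counts)
    flagged = []
    for i in range(1, len(counts)):
        rel = 100.0 * (counts[i] - counts[i - 1]) / mean
        if rel < -100.0 or rel > 100.0:
            flagged.append(i + 1)  # 1-based field number of the sudden change
    return flagged

# Channel filled only through field 9: the count collapses at field 10.
print(locate_fill_boundary([90, 88, 92, 91, 87, 93, 89, 90, 86, 0, 1, 0]))  # → [10]
```

A field number returned by this sketch corresponds to the first unfilled measurement field, consistent with the diagnosis described for cartridges B and C.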
A similar method may be applied to the average PE signal to determine a cartridge underfill condition. In the case of the PE signal, a criterion that can be used to determine underfill is a −12% relative change in PE signal from one measurement field to the next.
Another embodiment of the cartridge is provided describing a device (e.g., cartridge) for analyzing an analyte in a sample. The device may include at least a first substrate, a second substrate, a fluidic channel, an inlet port, and an outlet port. In one aspect, the first substrate and said second substrate each has an inner surface and an outer surface. The inner surface of the first substrate may form, at least in part, the lower wall of the fluidic channel, while the inner surface of the second substrate may form, at least in part, the upper wall of the fluidic channel. In another aspect, the fluidic channel is connected to both the inlet port and the outlet port. In another aspect, the fluidic channel includes at least a reagent region and a detection region, and at least a portion of the reagent region is coated with one or more dried reagents, which contain at least a detection molecule that binds the analyte in the sample. In another aspect, the device also contains a wicking pad located on the outer surface of the second substrate, and the wicking pad is positioned at a pre-determined distance from the outlet port. In another aspect, the reagent region is located between the inlet port and the detection region, such that the sample, when added to the inlet port, passes through the reagent region before entering the detection region. In another aspect, the analyte bound with the detection molecule may be detected in the detection region.
In another aspect, the one or more dried reagents may form a spatially uniform layer at the reagent region. In another aspect, the dried reagent coating may be distributed evenly along the width of the fluidic channel that is perpendicular to the sample flow path from the inlet port to the outlet port. This uniform layer may be formed by depositing liquid reagents onto the reagent region forming a plurality of single spots, and by allowing the plurality of single spots to merge before the liquid in each single spot evaporates. In another aspect, each of the single spots may receive from 1 to 50 nanoliters of liquid reagents, and the center-to-center spacing of the single spots and the volume deposited to each spot are collectively controlled to ensure that droplet-to-droplet contact occurs following deposition. In another aspect, the dried reagent coating may have a rehydration rate and physical dimension that collectively yield spatially uniform reagent-sample interaction within the detection region. The rehydration rate of the dried reagent coating may be determined by the reagent formulation and the composition of the sample. In another aspect, the dried reagent may contain an additive, such as sucrose, that slows the rehydration rate of the dried reagent coating.
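The requirement that droplet-to-droplet contact occur can be illustrated with a simple geometric sketch. This is a hypothetical simplification, not the deposition method itself: the deposited droplet is assumed to sit as a hemispherical cap, whereas the real footprint depends on the contact angle and surface chemistry of the substrate.

```python
import math

def footprint_radius_mm(volume_nl):
    """Footprint radius of a deposited droplet, assuming (as a simplification)
    a hemispherical cap: V = (2/3) * pi * r^3.  Actual spreading depends on
    contact angle and surface treatment."""
    volume_mm3 = volume_nl * 1e-3  # 1 nL = 1e-3 mm^3
    return (3.0 * volume_mm3 / (2.0 * math.pi)) ** (1.0 / 3.0)

def max_pitch_for_merge_mm(volume_nl):
    """Largest center-to-center spacing at which adjacent droplets still touch."""
    return 2.0 * footprint_radius_mm(volume_nl)

# For a 10 nL droplet, adjacent spots must be spaced closer than roughly:
print(round(max_pitch_for_merge_mm(10), 3), "mm")
```

Under this assumption, larger deposited volumes permit proportionally wider spot pitches, which is one way the volume and spacing can be "collectively controlled" as described above.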
In another aspect, the inlet port may have a volume greater than the volume of the fluidic channel, which may generate capillary action that facilitates movement of the sample from the inlet port to the fluidic channel. In another aspect, the walls of the inlet port, the walls of the fluidic channel, or both may be coated, either entirely or in part, with a hydrophilic layer.
In another aspect, the walls of the inlet port and the walls of the fluidic channel may be rendered hydrophilic by their building material, by coating with the hydrophilic layer, or by other treatment of the building material, such as plasma treatment, so that they have a water contact angle of less than 50 degrees, less than 40 degrees, less than 30 degrees, or less than 10 degrees.
In another aspect, the cartridge may have an internal tilt relative to a level orientation, which is sufficient to drive flow of the liquid sample from the inlet port to the outlet port. In another aspect, one, two, or all three of the factors, namely the tilt, the capillary action, and the wicking pad, may contribute to driving the flow of the liquid sample from the inlet port to the outlet port. In another aspect, the tilt is at an angle between 2 and 45 degrees relative to a level orientation.
In another aspect, the distance between the wicking pad and the outlet port is sufficient to prevent the wicking pad from draining the fluidic channel. In another aspect, the wicking pad is made of a material having a wicking rate of between 10 and 200 seconds per 4 centimeters (cm) of the material. In another aspect, the wicking pad is made of a material having a certain absorbance rate, wherein the surface tension of the liquid sample emerging from the outlet port breaks the fluidic connection between the wicking pad and the outlet port when the absorbance rate exceeds the rate at which the liquid sample emerges from the outlet port, thereby preventing further fluidic flow from the outlet port to the wicking pad. In another aspect, the distance between the wicking pad and the outlet port is between 1 and 5 mm.
In another aspect, the detection region includes a plurality of capture molecules bound to the inner surface of the first substrate. In another aspect, the plurality of capture molecules are arranged as an array including at least two reaction sites, each of the at least two reaction sites being formed by depositing a composition onto the inner surface of the first substrate, wherein the composition contains at least one of the capture molecules. In another aspect, binding of the dried reagent to the analyte in the sample does not prevent binding of the same analyte to the plurality of capture molecules. Details of the capture molecules and the array may be found in U.S. patent application Ser. No. 13/233,794 as filed on Sep. 15, 2011, which is incorporated herein by reference in its entirety.
As is the case for system 100 (
In a step 6840, sequentially captured first, second, and third images are processed to determine a spatial shift between the first and second images. The first and second images are associated with two different colors of excitation light, and the third image is associated with the same color of excitation light as the first image. For example, step 6840 is implemented as determinespatialshift 6720 (
In a step 6850, events in the first image are correlated with events in the second image, while accounting for the spatial shift between the first and second images, which is determined in step 6840. For example, step 6850 is implemented in correlatesourceimages 6710 and movementcompensation 6715, such that processor 460 (
In an embodiment, method 6800 further includes sequential steps 6810, 6820, and 6830 of capturing the first, second, and third images, respectively. In a step 6810, the first image is captured using excitation light of a first color. For example, controller 6750 uses illumination module 200 and sensor 160 to capture the first image. In a step 6820, the second image is captured using excitation light of a second color. For example, controller 6750 uses illumination module 300 and sensor 160 to capture the second image. In a step 6830, the third image is captured using excitation light of the first color. For example, controller 6750 uses illumination module 200 and sensor 160 to capture the third image. Steps 6810, 6820, and 6830 are an embodiment of steps 840, 845, and 850 of method 800 (
In a step 6910, the spatial shift between the first and third images is evaluated. In a step 6920, the spatial shift between the first and second images is deduced from the spatial shift between the first and third images. Since the first and third images are generated under optical conditions having the same spatial properties, a spatial shift between the first and third images may be attributed to sample movement.
In an embodiment, step 6920 includes a step 6925, wherein the spatial shift between the first and second images is interpolated from the spatial shift between the first and third images, assuming that the movement occurs at a uniform rate. Knowing the difference between capture times of the first, second, and third images, and assuming uniform movement, allows for determining the spatial shift between the first and second images by interpolating from the spatial shift between the first and third images. For example, if the first, second, and third images are evenly spaced in time, the spatial shift between the first and second images is determined as half the spatial shift between the first and third images.
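The interpolation in step 6925 can be sketched directly from the capture times. This is an illustrative sketch under the uniform-motion assumption stated above; the function and parameter names are hypothetical.

```python
def interpolate_shift(shift_1_to_3, t1, t2, t3):
    """Interpolate the (dx, dy) shift between images 1 and 2 from the measured
    shift between images 1 and 3, assuming movement at a uniform rate."""
    frac = (t2 - t1) / (t3 - t1)  # fraction of the 1->3 interval elapsed at image 2
    dx, dy = shift_1_to_3
    return (dx * frac, dy * frac)

# Evenly spaced captures: the 1->2 shift is half the measured 1->3 shift.
print(interpolate_shift((4.0, -2.0), t1=0.0, t2=0.5, t3=1.0))  # → (2.0, -1.0)
```

With uneven capture timing, the same expression weights the measured shift by the actual elapsed fraction rather than one half.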
In a step 7010, events are identified in the first image. In an embodiment, step 7010 includes a step 7015, wherein the first image is processed according to method 1300 (
In a step 7030, the spatial shift between the first and third images is determined from individual spatial shifts between events identified in the first and third images, where an individual spatial shift is the spatial shift between an event in the first image and a corresponding event in the third image. In an embodiment, step 7030 includes a step 7035. In step 7035, the nearest neighbors among events in the third image are identified, for each event in the first image. In an embodiment, the number of nearest neighbors used depends on the number of events identified in the first and/or third images.
In a step 7110, method 7100 performs step 7010 of method 7000 (
In an embodiment, step 7130 is performed using a k-dimensional (k-d) tree data structure, as discussed by Bentley in J. L. Bentley, “Multidimensional binary search trees used for associative searching”, Communications of the ACM, 18(9):509-517, 1975. A k-d tree is a space-partitioning data structure for organizing points in a k-dimensional space. k-d trees are a useful data structure for several applications, such as searches involving a multidimensional search key (e.g., range searches and nearest neighbor searches). k-d trees are a special case of binary space partitioning trees. A k-d tree is a binary tree in which every node is a k-dimensional point. Every non-leaf node can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts, known as half-spaces. Points to the left of this hyperplane are represented by the left subtree of that node and points to the right of the hyperplane are represented by the right subtree. The hyperplane direction is chosen in the following way: every node in the tree is associated with one of the k dimensions, with the hyperplane perpendicular to that dimension's axis. So, for example, if for a particular split the “x” axis is chosen, all points in the subtree with a smaller “x” value than the node will appear in the left subtree and all points with larger “x” value will be in the right subtree. In such a case, the hyperplane would be set by the x-value of the point, and its normal would be the unit x-axis. In this embodiment, step 7130 utilizes the following variation of the k-d tree structure initially disclosed by Bentley: all the points to be searched are contained in the leaf nodes, and each internal node of the tree describes one axis (x or y) and the split location, where the “points” refer to events in the second set of events. All leaf nodes on the left side of the internal node will have a value of the indicated axis less than or equal to the split value.
Likewise, all leaf nodes on the right side of the internal node will have a value of the indicated axis greater than or equal to the split value. For each internal node, the split axis is determined by identifying the axis that has the largest spread. The split value is chosen such that the leaf nodes under the internal node are distributed equally between the left and right sides.
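The nearest-neighbor query that the k-d tree accelerates can be sketched with a brute-force stand-in. This is not the tree structure itself: the sketch scans all candidates in O(n) per query, whereas the leaf-node k-d tree described above reduces each query to roughly O(log n). Event coordinates and names are hypothetical.

```python
def nearest_neighbors(point, candidates, k=3):
    """Return the k events in `candidates` nearest to `point` (squared
    Euclidean distance).  A brute-force stand-in for the k-d tree search
    described in the text."""
    by_distance = sorted(
        candidates,
        key=lambda c: (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2,
    )
    return by_distance[:k]

# Events detected in the third image (hypothetical pixel coordinates).
events_third_image = [(0.0, 0.0), (5.0, 5.0), (1.0, 1.0), (9.0, 2.0)]
# Two nearest neighbors of an event from the first image:
print(nearest_neighbors((0.2, 0.2), events_third_image, k=2))  # → [(0.0, 0.0), (1.0, 1.0)]
```

Repeating this query for every event in the first image yields the per-event correspondences used in step 7140.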
In a step 7140, N vectors are calculated, where each vector represents the spatial shift between an event in the first set of events and a respective event in the second set of events. In a step 7150, a collection of the vectors calculated in step 7140 is generated. The collection may be, for example, plotted or tabulated.
In a step 7160, the center value of the collection of vectors is determined. In an embodiment, the center value is determined as a weighted average of the vector collection. For example, first nearest neighbors may be assigned greater weight than more distant nearest neighbors. Referring to plots 7200 and 7300 of
In an embodiment, the center value is determined using a k-d tree data structure. First, a k-d tree of the movement point cloud is constructed. The implementation choice of confining the data points to the leaf nodes of the k-d tree allows for an estimate of the density of points to be easily calculated. An estimate of the cluster location is calculated by computing the center of mass of the movement point cloud where each point is weighted by its density (as determined by the k-d tree). The location of the cluster is then refined by doing a range search on the k-d tree of the points near the estimated cluster location. The center of mass of the points returned by the range search is then computed. The range search is limited to a distance from the initial estimate of the cluster location. In an example, this distance is chosen large enough to fully encompass the cluster but sufficiently small that the center-of-mass calculation is not skewed by too many extraneous points that are outside the cluster. In an embodiment, a check is performed to confirm that a cluster is identified. It is required that a certain fraction of the events in the first image have a nearest neighbor within the range-search distance of the identified cluster center. In an example, this fraction is set at 50%, which allows for method 7000 (
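The two-pass cluster estimate described above can be sketched as follows. As a stand-in assumption, local density is approximated by the inverse of a point's mean distance to the other points, rather than by the k-d tree density estimate; the search range and test data are likewise hypothetical.

```python
def estimate_cluster_center(shifts, search_range=1.5):
    """Estimate the dominant (dx, dy) cluster of a movement point cloud.
    Pass 1: density-weighted center of mass, with density approximated by
    inverse mean distance to the other points (a stand-in for the k-d tree
    density estimate).  Pass 2: plain center of mass of the points within
    `search_range` of the first-pass estimate."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # Pass 1: density-weighted first estimate of the cluster location.
    weights = []
    for p in shifts:
        mean_d = sum(dist(p, q) for q in shifts if q is not p) / (len(shifts) - 1)
        weights.append(1.0 / (mean_d + 1e-9))
    total = sum(weights)
    cx = sum(w * p[0] for w, p in zip(weights, shifts)) / total
    cy = sum(w * p[1] for w, p in zip(weights, shifts)) / total

    # Pass 2: range search around the estimate, then a plain center of mass.
    near = [p for p in shifts if dist(p, (cx, cy)) <= search_range]
    return (sum(p[0] for p in near) / len(near),
            sum(p[1] for p in near) / len(near))

# Tight cluster of shift vectors near (2, 1) plus two outliers.
shifts = [(2.0, 1.0), (2.1, 0.9), (1.9, 1.1), (2.0, 1.05), (8.0, 8.0), (-5.0, 6.0)]
cx, cy = estimate_cluster_center(shifts)
```

The refinement pass pulls the estimate off the outliers and onto the cluster, mirroring the range-search refinement described in the text.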
The deposition and drying of assay reagents into a fluidic cartridge enables simple-to-use products that require minimal user steps. For example, the deposition and drying of biological reagents such as antibody stains into fluidic cartridges may eliminate the need for subsequent reagent additions, providing a major advantage over multi-step assays in terms of ease of use, error rates, reproducibility, etc. Stabilization of biological reagents such as proteins, antibodies, enzymes, etc. is well established in the art, but typically requires optimization of the drying or lyophilization formulation, which often includes very specific salt conditions, stabilizers, excipients, etc. In some biological assays, it may be desirable to immobilize biological reagents that have different optimal drying formulations. Disclosed herein is a concept of spatially or physically separating different components of the dried formulation, allowing different formulations for different components. By using a micro-dispensing tool, liquid reagents may be deposited in a grid or pattern in which different formulation components are physically separated on the micro-scale, but remain in close proximity on the macro-scale. In an embodiment, “micro-scale” is a distance scale less than approximately one millimeter, while “macro-scale” is a distance scale greater than one millimeter, characteristic of the size of a liquid droplet volume greater than five microliters in contact with a solid surface. For example, deposition of nanoliter-scale droplets, such as droplets with volumes in the range between 300 picoliters and 50 nanoliters, enables multiple different formulations to be arrayed in an area of a few square millimeters.
In many applications, it is desirable to deposit assay reagents that deliver different functionalities. In whole blood immunoassays, for example, a first reagent may comprise dye-labeled antibodies (antibody stain), while a second reagent may comprise an anti-coagulant. In some cases, it may be possible to incorporate all assay reagents (e.g., anticoagulants, stabilizers, excipients, antibodies, etc.) into a single formulation for deposition. Some components of the formulation, however, may be incompatible with other components when dried, but may still be required for assay performance after rehydration. For example, the addition of a large fraction of EDTA in a dried antibody formulation has been shown to degrade antibody activity during storage. Elimination of the EDTA may improve shelf life of dried antibody formulation, but the EDTA may be a required anticoagulant for proper blood behavior in the fluidic device.
In a step 7410, a plurality of mutually incompatible liquid reagents are deposited in a respective plurality of mutually separated areas. The reagents are deposited in the fluidic assay cartridge, or on a component thereof before final assembly of the cartridge. In a step 7420, the plurality of mutually incompatible reagents deposited in step 7410 are dried to form the dried reagents.
Droplet arrays 7710, 7721, 7722, 7723, and 7724 may include fewer or more spots arranged in other configurations than illustrated in
In an example, antibody stain and EDTA are physically deposited in close proximity, but are not in physical contact, allowing optimal performance of each component. In this specific example, the sample is whole blood collected directly from a finger stick, and the biological stain is a dye-labeled anti-CD3 antibody. Deposition of the antibody stain corresponds to droplet array 7710 in fluidic assay cartridge 7700 (
Features described above as well as those claimed below may be combined in various ways without departing from the scope hereof. The following examples illustrate possible, non-limiting combinations of embodiments described above. It should be clear that many other changes and modifications may be made to the methods and apparatus herein without departing from the spirit and scope of this invention:
The changes described above, and others, may be made in the particle identification systems, cartridges and methods described herein without departing from the scope hereof. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.
This application is a continuation-in-part of, and claims priority to, U.S. patent application Ser. No. 13/831,757, filed 15 Mar. 2013, and which claims priority to U.S. Provisional Patent Application Ser. No. 61/719,812, filed 29 Oct. 2012, and U.S. Provisional Patent Application Ser. No. 61/732,858, filed 3 Dec. 2012. The above-identified patent applications are incorporated herein by reference in their entireties.
This invention was made with Government support under NIH Grant Nos. AI070052 and AI068543, both awarded by the National Institutes of Health. The Government has certain rights in this invention.
Number | Date | Country
---|---|---
61732858 | Dec 2012 | US
61719812 | Oct 2012 | US

 | Number | Date | Country
---|---|---|---
Parent | 13831757 | Mar 2013 | US
Child | 14208251 | | US