This patent document relates to systems, devices and techniques for particle sorting, and in particular low-latency image-activated particle sorting based on AI gating.
Flow cytometry is a technique to detect and analyze particles, such as living cells, as they flow through a fluid. For example, a flow cytometer device can be used to characterize physical and biochemical properties of cells and/or biochemical molecules or molecule clusters based on their optical, electrical, acoustic, and/or magnetic responses as they are interrogated in a serial manner. Typically, flow cytometry uses an external light source to interrogate the particles, and detects optical signals caused by one or more interactions between the input light and the particles, such as forward scattering, side scattering, and fluorescence. Properties measured by flow cytometry include a particle's relative size, granularity, and/or fluorescence intensity.
Particle sorting, including cell sorting at the single-cell level, has become an important feature in the field of flow cytometry as researchers and clinicians become more interested in studying and purifying certain cells, e.g., such as stem cells, circulating tumor cells, and rare bacteria species.
The technology disclosed in this document can be implemented to provide methods, devices and systems for producing images of particles in a flow system, and in specific configurations, the disclosed technology can be used for imaging particles in real time and subsequently sorting particles, including cells, based on a trained gating model and image data of individual particles. The disclosed techniques can be applied for producing cell images and sorting cells in flow cytometers in real time. In applications, the disclosed technology can be used to detect and sort cells based on the bright field signals, fluorescent signals and/or scattering intensity.
In implementations, for example, the disclosed systems possess the high throughput of flow cytometers and the high spatial resolution of imaging cytometers, in which the particle images are produced at a fast enough rate to accommodate real-time particle sorting in a flow system based on machine-ascertainable physical and/or physiological properties of the particle represented in the image data and analyzed using an AI based gating model.
In some embodiments in accordance with the present technology, an image-activated particle sorting system includes a particle flow device including a substrate, a channel formed on the substrate operable to allow individual particles to flow along a flow direction to a first region of the channel, and two or more output paths branching from the channel at a second region proximate to the first region in the channel, an imaging system interfaced with the particle flow device and operable to obtain image data associated with a particle when the particle is flowing in the first region through the channel, a control command unit including a processor configured to produce a control command indicative of a particle class determined based on a gating model and the image data; and an actuator operatively coupled to the particle flow device and in communication with the control command unit, the actuator operable to direct the particle into an output path of the two or more output paths based on the control command, wherein the image-activated particle sorting system is operable to sort the individual particles during flow in the channel.
In some embodiments in accordance with the present technology, a method for image-based sorting of a particle includes obtaining, by an imaging system interfaced with a particle flow device, image data of a particle flowing through a channel of the particle flow device; producing, by a control command unit, a control command indicative of a particle class of the particle determined based on a gating model and the image data; and directing the particle into one of a plurality of output paths of the particle flow device based on the control command.
The above and other aspects of the disclosed technology and their implementations and applications are described in greater detail in the drawings, the description and the claims.
Image-based detection, classification, and sorting of target cells among a heterogeneous cell population can bring phenomenal insight to biomedical research and application. Existing fluorescence-activated cell sorting (FACS) technology optically interrogates individual cells in a single-cell flow stream and isolates cells based on scattering and fluorescence intensity features of the interrogated cells. In comparison, image-activated cell sorting (IACS) systems can classify and sort cells based on spatial features obtained from cell images, which offer much greater information content than existing FACS, which is limited to a single value per parameter. By extracting the spatial and morphological features carried by the light transmission, scattering, and fluorescent properties of cells, IACS can classify and isolate the targeted cell types from a heterogeneous cell population using image-feature based gating (e.g., cellular size and shape, nuclear size and shape, nucleus-to-cytoplasm ratio, DNA and RNA localization, cellular organelle localization, cellular aggregation, as well as non-intuitive features). The emergence of the image-activated cell sorting (IACS) technique provides a powerful biomedical research tool for studies of the cell cycle, cell-cell interaction, protein localization, DNA and RNA localization, and the relationship between cellular phenotype and genotype. As used herein, IACS may also refer to image-activated particle sorting.
An existing IACS system may be configured to perform real-time data processing and sorting actuation to process high-content image data at a high data transfer rate and extract many image-related features on which sorting decisions are based. The computing power of the processor of such an IACS system may limit the number of cell image features that can be extracted in real time, because extracting many image-related features is computationally intensive. For example, cell phenotypical and morphological features can be complex and convoluted, and not resolvable or correctly identifiable by human vision or subjective criteria, partly because humans can only process a very small set of images out of a very large sample size. As a result, mathematical representations of image features driven by human-vision-based gating can have deficiencies and miss important biological insight. Additionally or alternatively, although the latency of an IACS system may be improved by improving the hardware including, e.g., increasing the number and/or computing power of processors used in image data processing, improving camera-based optics design and hardware, etc., such solutions may suffer from limitations including, e.g., limited scalability due to cost and complexity, sensitivity and motion blur issues in the imaging process, or the like.
To address these and other technical problems, disclosed in this document are systems and methods for image-activated particle sorting employing an artificial intelligence (AI) based gating model with real-time AI inferencing capacity. Based on deep learning algorithms and AI computing hardware, convolutional neural networks (CNNs) can solve complex image-driven pattern recognition problems that are not interpretable by human vision. Some embodiments of the present document provide measures for improving the AI-based gating model including, e.g., employing a suitable CNN model (e.g., a UNet CNN autoencoder model), optimizing a model parameter (e.g., identifying a kernel count of the initial convolutional kernels of the CNN model so as to comprehensively optimize training and performance, including reducing the training time and/or sorting decision time while maintaining sorting accuracy), and improving the training process by identifying image features for labelling images to be used as training data. According to some embodiments, real-time sorting by AI inference with millisecond latency may be achieved using an example image-activated particle sorting system that includes one field-programmable gate array (FPGA) processor for image processing and a Personal Computer (PC) with a dedicated Graphics Processing Unit (GPU) for conducting real-time AI model inference based on an optimized UNet CNN autoencoder model.
In applications, the disclosed technology can be implemented in specific ways in the form of methods, systems, and devices for image-activated cell sorting in flow cytometry using (a) real-time image acquisition of fast travelling particles by efficient data processing techniques utilizing mathematical algorithms implemented with, e.g., FPGA and/or GPU, and concurrently (b) AI based “gating” techniques to make particle classification or sorting decisions based on such real-time acquired particle images as input. Unlike existing flow cytometers that use fluorescent intensities of chosen biomarkers as criteria for cell sorting, the methods, systems, and devices in accordance with the disclosed technology allow for various user-defined gating criteria for sorting labelled particles and also label-free particles in real time.
In some embodiments, an image-activated particle sorting system includes a particle flow device, such as a flow cell or a microfluidic device, integrated with a particle sorting actuator; a high-speed and high-sensitivity optical imaging system; and a real-time particle image processing and sorting control electronic system. For example, an objective of the disclosed methods, systems and devices is to perform the entire process of (i) image capture of a particle (e.g., cell), (ii) image reconstruction from a time-domain signal, and (iii) making a particle sorting decision and sorting operation by the actuator within a latency of less than 15 milliseconds to fulfill the needs for real-time particle sorting. In some implementations described herein, the total latency is less than 5 milliseconds (e.g., 3 milliseconds).
The system 100 implements image-based sorting of the particles in real-time, in which a particle is imaged by the imaging system 120 in the interrogation area and sorted by the actuator 140 in the sorting area 116 in real time and based on a determined property analyzed by the data processing and control unit 130. More descriptions regarding the imaging system 120 may be found elsewhere in the present document. See, e.g.,
In some embodiments, the data processing unit 125 may be implemented in a manner similar to the control command unit 130. In some embodiments, the data processing unit may be implemented on a processor different from the control command unit 130. For example, the data processing unit 125 may be implemented on an FPGA configured to generate particle images for individual particles based on image data acquired by the imaging system 120, while the control command unit 130 may be implemented on a GPU (e.g., a dedicated GPU) configured to determine particle classes for the individual particles by analyzing the particle images using the gating model, and/or control commands for the individual particles based on their respective particle classes. This example configuration may allow parallel processing of the image data and particle classification, thereby improving processing efficiency and reducing latency between the image data acquisition and the sorting decision or actuation, and allowing real time particle sorting while the particles flow through the particle flow device 110.
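In software terms, this split behaves like a two-stage producer/consumer pipeline in which image reconstruction and classification overlap in time. The sketch below illustrates only the concurrency pattern; the queue-based hand-off and the callable names (reconstruct, classify, actuate) are illustrative stand-ins for the FPGA-to-GPU transfer and are not the system's actual interfaces.

```python
import queue
import threading
from typing import Callable, Iterable

def run_pipeline(waveforms: Iterable,
                 reconstruct: Callable,  # stand-in for the FPGA image stage
                 classify: Callable,     # stand-in for the GPU gating stage
                 actuate: Callable) -> None:
    """Run image reconstruction and classification concurrently so that the
    next particle's image is built while the previous one is classified."""
    q: queue.Queue = queue.Queue(maxsize=64)

    def producer() -> None:
        for waveform in waveforms:
            q.put(reconstruct(waveform))
        q.put(None)  # sentinel marks the end of the particle stream

    threading.Thread(target=producer, daemon=True).start()
    while (image := q.get()) is not None:
        actuate(classify(image))  # classification result drives the sorter
```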
Implementations of the process 100A can be performed by the various embodiments of the image-activated particle sorting system including, e.g., systems 100, 200, 300 as illustrated in
The process 100A may include an operation 155 to obtain, by an imaging system (e.g., imaging system 120) interfaced with a particle flow device (e.g., the particle flow device 110), image data of a particle flowing through a channel (e.g., the channel 112) of the particle flow device. The particle may be labeled (e.g., fluorescently labeled) or label free. Particles may be hydrodynamically focused to the center of a microfluidic channel by a sheath flow in a microfluidic chip. The imaging system may optically interrogate individual particles in the single-particle core flow stream. In some embodiments, the imaging system may emit laser beams to scan individual particles when they individually traverse the interrogation area 114 of the channel 112 on the particle flow device. The imaging system may adjust at least one of the scanning range or the scanning speed to accommodate samples of different particle sizes for a suitable image field of view. For label-free particles, the image data may include bright field signals of the particles. For fluorescently labeled particles, the image data may include fluorescent signals. The signals detected by, e.g., PMTs and the temporal signals are reconstructed to form particle images via real-time processing by, e.g., a data processing unit (e.g., the data processing unit 125, a digitizer 260). The particle images may be two-dimensional images or three-dimensional images. More descriptions regarding the acquisition of the image data and the generation of particle images may be found elsewhere in the present disclosure. See, e.g., the description regarding the digitizer 260 and the digital signal processing (DSP) module 270 in
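As an illustration of how such a time-domain signal can be folded into a two-dimensional particle image, the following sketch reshapes a digitized PMT waveform using an assumed scan rate and sampling rate; all parameter values here are hypothetical, not the system's specifications.

```python
import numpy as np

# Assumed acquisition parameters for illustration only.
SCAN_RATE_HZ = 200_000        # laser sweeps per second (hypothetical)
SAMPLE_RATE_HZ = 24_000_000   # digitizer samples per second (hypothetical)
SAMPLES_PER_LINE = SAMPLE_RATE_HZ // SCAN_RATE_HZ  # samples per sweep

def reconstruct_image(pmt_waveform: np.ndarray) -> np.ndarray:
    """Fold a 1-D PMT time-domain signal into a 2-D particle image.

    Each laser sweep across the channel yields one image column; the
    particle's own motion along the flow direction supplies the second
    axis, so consecutive sweeps become consecutive columns.
    """
    n_lines = pmt_waveform.size // SAMPLES_PER_LINE
    usable = pmt_waveform[: n_lines * SAMPLES_PER_LINE]
    # Rows of the reshaped array are individual sweeps; transpose so the
    # scan direction runs vertically and the flow direction horizontally.
    image = usable.reshape(n_lines, SAMPLES_PER_LINE).T.astype(np.float32)
    image -= image.min()  # normalize to [0, 1] for display or CNN input
    peak = image.max()
    return image / peak if peak > 0 else image
```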
The process 100A includes an operation 165 to produce, by a control command unit (e.g., the control command unit 130), a control command indicative of a particle class of the particle determined based on a gating model and the image data of the particle while the particle is flowing in the channel. The gating model may be a convolutional neural network (CNN) trained to conduct real-time AI inference regarding the particle class. The process 100A may include making a sorting decision or a control command based on the particle class. More descriptions regarding the gating model may be found elsewhere in the present document. See, e.g.,
The process 100A may include an operation to direct the particle into one of a plurality of output paths (e.g., output channels 118) of the particle flow device based on the control command. More descriptions regarding the particle direction may be found elsewhere in the present document. See, e.g., the actuator 140, the sorting module 290 illustrated in
Further precision in the optical interrogation process may be achieved by employing two 10× objective lenses (0.28 NA Plan Apo, Mitutoyo) situated on opposite sides of the microfluidic chip 250. The illumination laser may form a Gaussian beam with a focal depth of about 25 micrometers, and a full-width-half-maximum (FWHM) circular spot size of 1.6 micrometers at the object plane. The cells may be examined at this plane.
The system may make use of a number of dichroic mirrors (e.g., 210-1, 210-2), focusing lenses (e.g., 207-1 through 207-3) and band-pass optical filters (e.g., 211-1 through 211-3 as illustrated in
The laser scanning range and speed may be adjustable parameters in the system, allowing for the accommodation of samples with varying cell sizes. The maximum field of view that the system 200 may offer is 60×60 micrometers, and it may reach a maximum laser scanning speed of 350 kHz. This adjustability may provide considerable flexibility in the types of cells and samples the system 200 can process. More descriptions regarding the optic portion of the IACS system (system 100, 200, 300) may be found elsewhere in the present disclosure. See, e.g., FIGS. 4-6 and relevant description thereof.
The system 200 may further include a digitizer 260, a DSP module 270, an AI module 280, and a sorting module 290. The digitizer 260 may be configured to capture imaging data of individual particles as illustrated in panel (I) of
The actuator 140 of the system 200 may be configured in communication with the AI module 280 to gate the particle flowing in the sorting area 116 of the sample channel 112 into two or more outlet channels 118 of the microfluidic chip 250. In some embodiments, for example, the distance between the laser interrogation zone 114 and the sorting area 116 can be in a range of 50 micrometers to 1 mm. In implementations, the actuator 140 receives the control command from the AI module 280 in real time, such that the imaging system 120 (including the optical components and the digitizer 260 as illustrated in, e.g.,
Other examples of features of a particle flow device and/or an actuator that can be used in example embodiments of the devices, systems, and methods in accordance with the disclosed technology are provided in U.S. Pat. No. 9,134,221 B2 entitled “FLUIDIC FLOW CYTOMETRY DEVICES AND PARTICLE SENSING BASED ON SIGNAL ENCODING,” and U.S. Pat. No. 11,016,017 entitled “IMAGE-BASED CELL SORTING SYSTEMS AND METHODS,” the entire content of each of which is incorporated by reference as part of this disclosure for all purposes. Other examples of features of an optical imaging system that can be used in example embodiments of the devices, systems, and methods in accordance with the disclosed technology are provided in U.S. Pat. No. 10,267,736 entitled “IMAGING FLOW CYTOMETRY USING SPATIAL-TEMPORAL TRANSFORMATION,” the entire content of which is incorporated by reference as part of this disclosure for all purposes.
For example, the system 300 may include an optics module 265. More descriptions regarding the optics module 265 may be found elsewhere in the present document. See, e.g.,
The AI module 280 may be part of a standalone multi-core PC workstation equipped with a dedicated Nvidia GPU module. In addition, the AI module 280 may provide a Graphical User Interface (GUI) configured to display reconstructed images at 280-1. In some embodiments, the AI module 280 may provide two operating modes for user convenience and a mode selection at 280-2. If a sorting mode is not selected at 280-2, the AI module 280 proceeds with an analysis mode, under which the AI module 280 may save at 280-3 the image data to internal or external solid-state storage disks for offline image processing and AI model training. In contrast, if the sorting mode is selected at 280-2, the AI module 280 proceeds with the sorting mode. In some embodiments, a user may define sorting criteria (e.g., cell class, confidence level), and the AI module 280 may use a pre-trained gating model to conduct real-time inference to automatically classify the particles, along with a prediction confidence level. Finally, a sorting module 290 may be triggered by the AI module 280 at 290-1 based on the generated sorting decision, which then may trigger an on-chip Piezoelectric Transducer (PZT) actuator to deflect the particle to user-defined downstream channels. An optical sorting verification detector may monitor the sorting outcome at 290 and may send a feedback signal, e.g., an optical sorting verification (OSV) signal, to the AI module 280 to display the sorting yield on the GUI. Merely by way of example, the real-time data processing software for this system may be developed using LabVIEW, featuring a customized Python Node that may call Python code for real-time AI inference in sorting mode.
In some embodiments, the system 300 may operate by sampling temporal waveforms using the digitizer 260 including an analog-to-digital converter (e.g., NI-5783, National Instruments). These waveforms are subsequently transferred to a field-programmable gate array (FPGA, PXIe-7975R, National Instruments) of the DSP module 270 for real-time particle image reconstruction utilizing, e.g., temporal-spatial transformation. Reconstructed particle images are then channeled to a standalone PC workstation, hereafter referred to as the AI module 280, via a wide-band PCIe bus. This dedicated AI module 280, equipped with a GPU (Quadro RTX A6000, Nvidia), executes real-time AI inference using a gating model. The AI module 280 is designed to predict particle classes (e.g., cell types) and may assign each AI inference prediction a corresponding confidence level. Sorting decisions are made by comparing the AI inference prediction with user-specified particle classes and the assigned classification confidence level. To account for the process's latency, the system 300 includes a clock mechanism that records the duration of the process. If the cumulative processing time is within a preset value, the sorting action is activated, and the sorting decision is transferred via the PCIe bus to the FPGA. The FPGA then controls the on-chip piezoelectric actuator's function, executing the sorting action. In example implementations, the entire data processing operation, which includes AI model inference and PZT actuation, is concluded in less than 3 milliseconds for 99% of the cells in samples, achieving swift and efficient cell sorting.
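A minimal sketch of this decision logic, written in PyTorch under the assumption that the gating model returns classification logits; the class indices, confidence threshold, and latency budget are hypothetical user settings, and in the actual system the inference runs on the dedicated GPU while actuation is carried out by the FPGA.

```python
import time
import torch

LATENCY_BUDGET_S = 0.003  # assumed 3 ms end-to-end budget (preset value)
TARGET_CLASSES = {1}      # hypothetical user-selected class indices
MIN_CONFIDENCE = 0.99     # hypothetical user-defined confidence gate

def sort_decision(model: torch.nn.Module, image: torch.Tensor,
                  t_capture: float) -> bool:
    """Return True if the actuator should be triggered for this particle."""
    with torch.no_grad():
        logits = model(image.unsqueeze(0))  # batch of one particle image
        probs = torch.softmax(logits, dim=1).squeeze(0)
    confidence, predicted = torch.max(probs, dim=0)
    elapsed = time.monotonic() - t_capture
    # Sort only if the predicted class is a target, its confidence clears
    # the user-defined gate, and the cumulative time is within budget.
    return (predicted.item() in TARGET_CLASSES
            and confidence.item() >= MIN_CONFIDENCE
            and elapsed <= LATENCY_BUDGET_S)
```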
In some embodiments, the IACS system described herein may include an imaging apparatus capable of generating both transmission and fluorescent images of particles moving through a microfluidic channel at an approximate velocity of 20 cm/s.
The efficacy and versatility of the system disclosed herein have been demonstrated in various applications. For instance, it can accurately capture images of both single and multiple fluorescent beads measuring 7 micrometers and 15 micrometers (see
In some embodiments, the optics module 265 may include one or more filters. An example of a two-dimensional spatially-varying spatial filter is provided in U.S. Pat. No. 9,074,978 B2 entitled “OPTICAL SPACE-TIME CODING TECHNIQUE IN MICROFLUIDIC DEVICES”, the entire content of which is incorporated by reference as part of this disclosure for all purposes. Additional descriptions of filters suitable to be used in the IACS system disclosed herein may be found in, e.g., U.S. Pat. No. 11,016,017 entitled “IMAGE-BASED CELL SORTING SYSTEMS AND METHODS,” the entire content of which is incorporated by reference as part of this disclosure for all purposes.
The system's detection optics resolution limit was measured using a high-resolution optical test target, model HIGHRES-1 from Newport. The measured spot size, illumination beam depth of focus, and detection optics resolution limit are presented in
The AI based gating model for real-time data processing may be trained using a Convolutional Neural Network (CNN) model training process. For example, the training system for training a CNN model may utilize custom MATLAB image preprocessing code to extract conventional image features, leading to the generation of human-interpretable image features and a preprocessed image dataset. FCS Express software may be employed to import the list of these extracted features, enabling the user to define gating to select targeted image data for the CNN model training. The selected image indices may be exported by FCS Express and then prepared for CNN model training via MATLAB code. Table 1 lists the image features extracted by the MATLAB program. The average processing time for a dataset comprising 20,000 images may be between 5 and 10 minutes with this exemplary approach.
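The preprocessing above is described as custom MATLAB code; as a rough Python analogue, the sketch below extracts a few generic, human-interpretable morphology and intensity features with scikit-image. The feature set is illustrative and is not the feature list of Table 1.

```python
import numpy as np
from skimage import filters, measure

def extract_features(image: np.ndarray) -> dict:
    """Extract a few human-interpretable features from one particle image."""
    # Separate the particle from background with a default Otsu threshold.
    mask = image > filters.threshold_otsu(image)
    regions = measure.regionprops(measure.label(mask), intensity_image=image)
    if not regions:
        return {}
    r = max(regions, key=lambda p: p.area)  # largest object = the particle
    return {
        "area": r.area,
        "perimeter": r.perimeter,
        "eccentricity": r.eccentricity,
        "equivalent_diameter": r.equivalent_diameter,
        "mean_intensity": r.mean_intensity,
        "aspect_ratio": r.major_axis_length / max(r.minor_axis_length, 1e-9),
    }
```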
To expedite the CNN model training and achieve low-latency model inference for sorting, a customized 2D UNet may be developed.
In this example as illustrated in
where $x$ is the input vector and $C$ is the number of particle classes.
Additionally, the system may comprise a fully connected layer and a Softmax layer that may collaboratively function as a classifier. The fully connected layer may capture high-level features from the output of the upsampling path, condensing them into a feature vector. The Softmax layer may then process this vector, producing probabilities for each class in the classification task. This combination may ensure accurate and probabilistically nuanced class assignments for the input images. In the model training process, a weighted loss may be used, incorporating mini-batch averaged cross-entropy loss and mean-square error loss between input and generated output image pixel values. The total loss may be balanced through a weight coefficient. This process may provide an effective method to manage classification error. The averaged cross-entropy loss $L_{CE}$ can be expressed as
$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N} y_i \cdot \log(\hat{y}_i),$$

where $y_i$ is the ground truth class vector, $\hat{y}_i$ is the predicted class vector, and $N$ is the data size in the mini-batch. The mini-batch averaged mean-square error loss $L_{MSE}$ can be expressed as
$$L_{MSE} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{M}\left\| x_i - \hat{x}_i \right\|^2,$$

where $x_i$ and $\hat{x}_i$ are the input image and generated image vectors, respectively, $M$ is the flattened image vector dimension, and $N$ is the data size in the mini-batch.
The weighted total loss $L$ is defined as

$$L = w\, L_{CE} + (1 - w)\, L_{MSE},$$

where $w$ is the weight coefficient to balance the loss function.
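In PyTorch terms, this weighted loss can be sketched as follows; the convex combination below is one plausible reading of how the weight coefficient balances the two terms.

```python
import torch.nn.functional as F

def weighted_loss(class_logits, labels, generated, images, w: float = 0.5):
    """Mini-batch loss combining classification cross-entropy with the
    image reconstruction mean-square error, balanced by coefficient w."""
    l_ce = F.cross_entropy(class_logits, labels)  # mini-batch averaged CE
    l_mse = F.mse_loss(generated, images)         # averaged over all pixels
    return w * l_ce + (1.0 - w) * l_mse
```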
The UNet model architecture may be optimized by conducting a CNN model architecture search, aiming to reduce the initial convolutional kernel number in the UNet model. As part of the model optimization, a stratified 5-fold cross-validation (CV) approach may be employed to train and assess the performance of the UNet models. In this process, for each fold, the training data may be augmented by conducting random horizontal and vertical flips on the image data. Following the augmentation, the model may then be validated using instances from the validation set. The performance of the model may be evaluated by calculating the balanced classification accuracy, an accuracy metric that does not favor classifiers that exploit class imbalance by biasing toward the majority class. The balanced accuracy $\bar{\sigma}$ can be expressed as

$$\bar{\sigma} = \frac{1}{C}\sum_{i=1}^{C} \sigma_i,$$
where $\sigma_i$ is the class-specific accuracy, and $C$ is the number of particle classes. In an example training, a dataset including 15,000 images obtained from a white blood cell imaging experiment was employed to carry out this model architecture search. To maintain a balanced data occurrence, 5,000 cell images were used for each cell type. Additionally, to examine the impact on inference time of different GPU acceleration frameworks, a comparative analysis between the Pytorch and TensorRT frameworks was performed during the UNet architecture search. In some embodiments, deep learning model training and performance tests may be conducted on the same computer system or different computing systems, situated within the AI module of the low-latency IACS system. The deep learning development may be performed under specific frameworks including, e.g., Python 3.6.8, Pytorch 1.10.2, and TensorRT 8.2.2.1. The use of these frameworks may contribute to the efficiency and functionality of the model training and optimization processes, offering robustness and reliability to the overall system.
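A sketch of the stratified 5-fold cross-validation with flip augmentation and balanced accuracy scoring, using scikit-learn utilities; `train_fn` and the model's `predict` interface are hypothetical, the image array layout (N, H, W) is assumed, and deterministic flips stand in for the random flips described above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import balanced_accuracy_score

def cross_validate(images: np.ndarray, labels: np.ndarray, train_fn,
                   n_splits: int = 5):
    """Stratified k-fold cross-validation with flip augmentation, scoring
    each fold by balanced accuracy (mean of class-specific accuracies)."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for train_idx, val_idx in skf.split(images, labels):
        x_train, y_train = images[train_idx], labels[train_idx]
        # Augment the training fold with horizontal and vertical flips.
        x_aug = np.concatenate(
            [x_train, x_train[..., ::-1], x_train[..., ::-1, :]])
        y_aug = np.concatenate([y_train, y_train, y_train])
        model = train_fn(x_aug, y_aug)          # hypothetical training routine
        preds = model.predict(images[val_idx])  # hypothetical predict method
        scores.append(balanced_accuracy_score(labels[val_idx], preds))
    return scores
```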
In example implementations, through an extensive CNN model architecture search, a UNet CNN model may be optimized for 2-part or 3-part particle classification. The results indicate that the initial convolutional kernel number (or kernel count) may significantly affect the model size, parameter number, training time, and inference time, while having a relatively low impact on the model prediction accuracy within our system. Merely by way of example, by reducing the initial convolutional kernel count from 64 to 4, the model parameter count and model size may be reduced by approximately 100-fold as illustrated in
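To make the kernel-count scaling concrete, the toy UNet-style model below parameterizes the width of every stage by an initial kernel count k; it is a much-simplified stand-in for the actual architecture, but counting its parameters at k = 64 versus k = 4 shows the same order-of-magnitude shrinkage.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy UNet-style autoencoder with a classifier head; the initial
    kernel count k scales the width of every stage."""

    def __init__(self, k: int = 4, n_classes: int = 3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, k, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(k, 2 * k, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * k, k, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(2 * k, k, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(k, 1, 1)  # reconstructed image head
        # Fully connected classifier; softmax is applied by the loss or at
        # inference time, so this head emits raw class logits.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * k, n_classes))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d1), self.classifier(e2)

for k in (64, 4):
    n_params = sum(p.numel() for p in TinyUNet(k).parameters())
    print(f"k={k:2d}: {n_params:,} parameters")
```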
The described system represents a novel approach for optimizing a CNN model architecture, e.g., the UNet CNN model, to conduct efficient particle sorting based on imaging data of individual particles. By reducing the initial convolutional kernel number, the system achieves substantial improvements in model parameter reduction, model size reduction, training time reduction, and inference time reduction. These advancements result in enhanced system efficiency and real-time performance, while maintaining high model prediction accuracy. The optimizations made using the TensorRT framework further contribute to the reduction of model inference time, ensuring low sorting latency for real-time CNN inference. The detailed information provided in this description, along with the supplementary figures, demonstrates the novelty and effectiveness of the system in particle classification applications.
Two types of sorting experiments, bead sorting experiments and WBC sorting experiments, were conducted to demonstrate the disclosed system's significant capabilities. The bead sorting experiments showcase the sorting of beads of a targeted size from a mixture of 7-micrometer and 15-micrometer polystyrene (PS) fluorescent microsphere beads. To illustrate the system's competence in cell sorting, a second experiment, sorting 3-part human white blood cells (WBCs), was performed to segregate the targeted WBC type from leukocyte samples.
In the bead sorting experiment, separate 7-micrometer and 15-micrometer PS bead samples were prepared and processed through the system in an analysis mode to accumulate a training dataset. This dataset included a total of 4,000 images, evenly divided between the two bead sizes. A two-part image classification training was conducted to train the UNet model, utilizing an 80/20 train/validation split of the dataset. Performance evaluation of the pre-trained model was carried out via generation of a confusion matrix of the classification results from the validation dataset. Upon achieving a prediction confidence level of over 99% from the AI based gating model, the pre-trained CNN model was deployed for bead sorting. Sorted 15-micrometer PS beads from the mixture were analyzed using a commercial flow cytometer (Accuri C6 plus, BD Biosciences), and the outputs from the sorting and waste channels were examined under a fluorescence microscope after enrichment via centrifugation. In the bead sorting experiment, the training progress and classification performance of the pre-trained CNN model were evaluated. The model training process completed within 540 seconds using a dataset of 4,000 images for 2-part classification. The pre-trained CNN model achieved a balanced prediction accuracy of 100%. The t-SNE visualization demonstrates distinct separation of the 7-micrometer and 15-micrometer bead clusters (panels a and b of
In the WBC sorting experiments, WBC samples were immunostained with an antibody panel to provide ground truth labels for each cell type. The immunostained WBC samples were processed to derive the training dataset, encompassing a total of 17,876 cell images. The UNet model was trained via a three-part image classification process using an 80/20 train/validation split of the stratified dataset. With an AI model prediction confidence level exceeding 99%, the pre-trained CNN model was deployed for cell sorting. For each sorting experiment, the target cell type was fluorescently labeled with one color and the other cell types with another color using the antibody panel for the sole purpose of post-sorting performance evaluation (Table 4), while the AI inference and the sorting decision were entirely based on the label-free transmission images.
To confirm the system's performance, the sorted and waste samples were analyzed using a commercial flow cytometer (Accuri C6 plus, BD Biosciences), and fluorescent signals of each WBC cell type were utilized to assess the sorting purity. The sorting purity was evaluated as the ratio of the sorted target particle number to the total sorted particle number, as described by Eqn. (6):
$$\text{Purity} = \frac{N_{\text{target}}}{N_{\text{target}} + N_{\text{non-target}}}, \tag{6}$$

where $N_{\text{target}}$ is the sorted target particle number (or referred to as particle count) and $N_{\text{non-target}}$ is the sorted non-target particle number (or referred to as particle count). During the sorting experiment, the event processing time was observed and recorded to evaluate the sorting latency. In some embodiments, label-free white blood cell (WBC) classification and sorting have significant advantages over biomarker labeling approaches in terms of avoiding cell degradation and minimizing morphological changes. Existing CNN-based label-free WBC classification systems lack real-time AI inferencing capabilities for sorting. However, the customized UNet CNN model as disclosed herein demonstrates high-accuracy and feasible 3-part WBC type classification. The CNN model training completed within 40 minutes using an approximate training set of 18,000 images. The pre-trained CNN model yielded a balanced classification accuracy of 99.5% for 3-part WBC type classification. The t-SNE visualization demonstrates well-separated clusters of the cell groups (
In the AI-inferred WBC sorting experiments, a relatively small number of cells were sorted to separate lymphocyte, monocyte, and granulocyte groups. More than 99% of the sorting events were processed within 2.312 milliseconds, with average data processing times ranging from 1.687 to 1.834 milliseconds (panels d-f of
In some embodiments, the system incorporates various sample preparation techniques to assess the system's capabilities in imaging and sorting fluorescent polystyrene particles, CHO-ES cells, MCF7 cells, human iPSC (induced pluripotent stem cells), and human white blood cells.
Fluorescent Polystyrene Particles Preparation

The system evaluates the imaging and sorting performance of the low-latency IACS, utilizing fluorescent PS beads. A 1:6 mixture of 15 μm PS particles (Fluorescent microspheres, Dragon Green, Cat. No. FSDG009, Bangs Laboratories, Inc.) and 7 μm PS particles (Fluorescent microspheres, Dragon Green, Cat. No. FSDG007, Bangs Laboratories, Inc.) is introduced from the sample inlet of a microfluidic chip. The system adjusts the concentration of these particles to 500 particles μL⁻¹.
CHO-ES Cells and DNA Staining Preparation

The system uses CHO-K1 cells (ATCC CCL-61) for DNA staining. The cells are harvested at a confluency of approximately 80%. The harvested cells undergo centrifugation at 350×g for 5 minutes, the supernatant is removed, and the cells are washed with PBS (Genesee Scientific, CA, USA). This washing process is repeated, after which 100 μL of 4% formaldehyde, methanol-free (Cell Signaling Technology, Massachusetts, USA) is added per million cells. Following incubation for 15 minutes at 37° C., the fixed cells are washed and resuspended in PBS containing 0.5% BSA (Thermo Scientific) at a concentration of 1.0×10⁶ cells/mL. Lastly, the cells are stained with 0.5 μM Vybrant DyeCycle Green Stain (Invitrogen) for 30 minutes and filtered using a 35 μm strainer cap (Genesee Scientific, CA, USA).
MCF7 Cells and Mitochondrial Staining Preparation

MCF7 cells (ATCC HTB-22) are prepared for imaging their mitochondria. The cells, harvested at a confluency of 70%, are diluted to a concentration of 1.0×10⁶ cells/mL using a buffer composed of PBS, 0.5% BSA, 12.5 mM HEPES (Gibco), and 5 mM EDTA (Invitrogen). The diluted cells are stained with 100 mM of Mito View Green (Biotium, San Francisco, USA) and incubated for 15 minutes at 37° C. Post incubation, the cells are filtered with a 35 μm strainer cap and analyzed.
Human iPSC Cells and Viability Staining Preparation

Human iPSCs reprogrammed from fibroblasts are cultured in DMEM/F-12 50/50 1× (Corning™, #10-092-CM) supplemented with HiDef B8 500X (Defined Bioscience, #LSS-201). Non-TC treated 6-well plates (CELLTREAT, #229506) are treated with vitronectin (Gibco™, #A14700), a recombinant human protein that provides a defined surface for feeder-free culture. Samples are maintained with a visual assessment of less than 30% differentiation per well. Cells are passaged in aggregates ranging from 50-100 μm, using the enzyme-free Gentle Cell Dissociation Reagent (STEMCELL Technologies, #100-0485). Healthy iPSC colonies are identified by morphology under phase microscopy for colony compactness, defined borders, well-outlined edges, and a large nucleus-to-cytoplasm ratio. A single-cell suspension is obtained using Accutase® (Innovative Cell Technologies, Inc. #AT104), centrifuged at 200×g for 3 minutes, and resuspended in sheath buffer (basal media+10% Accutase) at a concentration of 3.0×10⁵ cells/mL. Live calcein AM (Invitrogen™, #C3099) stained iPSCs are imaged by capturing conversion of the green, fluorescent calcein (Ex/Em: 494/517 nm).
Human White Blood Cells and Immune Staining Preparation

The system employs the Veri-Cells™ Leukocyte Kit, prepared from lyophilized human peripheral blood leukocytes (BioLegend Cat. 426003). These cells work with commonly tested cell surface markers such as CD3, CD14, CD19, and CD66b. CD66b is a glycosylphosphatidylinositol (GPI) linked protein expressed on granulocytes, CD3 and CD19 are expressed on T cells and B cells, respectively, and CD14 is expressed at high levels on monocytes. The system uses various combinations of specific antibodies listed in Supplementary Table 2 for leukocyte phenotyping. The concentration of the particles is adjusted to be between 500 and 1000 particles μL⁻¹ to achieve an event rate of approximately 100-200 events per second (eps).
The described system for image acquisition and sorting provides a comprehensive approach for sample preparation in various experiments. The system demonstrates its capability to handle and analyze different types of samples, including fluorescent particles, cells stained with specific dyes, and immune-stained human blood cells. This enables the evaluation of the system's performance in imaging and sorting diverse biological samples, showcasing its versatility and potential applications in the field.
The following examples are illustrative of several embodiments in accordance with the present technology. Other exemplary embodiments of the present technology may be presented prior to the following listed examples, or after the following listed examples.
In some embodiments in accordance with the present technology (example A1), an image-activated particle sorting system includes a particle flow device including a substrate, a channel formed on the substrate operable to allow individual particles to flow along a flow direction to a first region of the channel, and two or more output paths branching from the channel at a second region proximate to the first region in the channel, an imaging system interfaced with the particle flow device and operable to obtain image data associated with a particle when the particle is in the first region during flow through the channel, a control command unit including a processor configured to produce a control command indicative of a particle class determined based on a gating model and the image data; and an actuator operatively coupled to the particle flow device and in communication with the control command unit, the actuator being operable to direct the particle into an output path of the two or more output paths based on the control command, wherein the image-activated particle sorting system is operable to sort the individual particles during flow in the channel.
Example A2 includes the system of any one of examples herein, in which the control command is produced when the particle is flowing through the channel.
Example A3 includes the system of any one of examples herein, in which a latency from a first time of image capture of the particle to a second time of the particle being directed by the actuator is within a time frame of 15 milliseconds or less. For example, the latency is less than 10 milliseconds, 8 milliseconds, 6 milliseconds, 5 milliseconds, or 3 milliseconds.
Example A4 includes the system of any one of examples herein, in which the gating model is a machine learning model trained to predict the particle's class based on the image data.
Example A5 includes the system of any one of examples herein, in which the gating model includes a convolutional neural network (CNN) based Artificial Intelligence (AI) model.
Example A6 includes the system of any one of examples herein, in which a kernel count of initial convolutional kernels of the AI model is lower than 10 such that a training time to train the gating model using the processor of the control command unit is no more than 2 hours and a classification accuracy of the gating model for determining particle classes of the individual particles is at least 90%.
Example A7 includes the system of any one of examples herein, in which the individual particles are label-free, the imaging system is configured to obtain transmission images of the individual particles, and the control command unit is configured to generate control commands for the individual particles based on the gating model and the corresponding transmission images.
Example A8 includes the system of any one of examples herein, in which the imaging system includes one or more light sources to provide an input light to the first region of the particle flow device, and an optical imager to capture the image data from the particles illuminated by the input light in the first region.
Example A9 includes the system of any one of examples herein, in which the one or more light sources include at least one of a laser or a light emitting diode (LED).
Example A10 includes the system of any one of examples herein, in which the optical imager includes an objective lens optically coupled to a spatial filter, an emission filter, and a photomultiplier tube.
Example A11 includes the system of any one of examples herein, in which the optical imager further includes one or more light guide elements to direct the input light to the first region, to direct light emitted or scattered by the particle to an optical element of the optical imager, or both.
Example A12 includes the system of any one of examples herein, in which the light guide element includes a dichroic mirror.
Example A13 includes the system of any one of examples herein, in which the optical imager includes two or more photomultiplier tubes to generate two or more corresponding signals based on two or more bands or types of light emitted or scattered by the particle.
Example A14 includes the system of any one of examples herein, in which the imaging system includes a digitizer configured to obtain the image data that includes time domain signal data associated with the particle imaged in the first region on the particle flow device.
Example A15 includes the system of any one of examples herein, in which a data processing unit is in communication with the imaging system and the control command unit, the data processing unit being configured to process the image data obtained by the imaging system and output a particle image for the particle to be used as input to the gating model.
Example A16 includes the system of any one of examples herein, in which the control command unit comprises a first processor, and the data processing unit comprises a second processor that is different from the first processor.
Example A17 includes the system of any one of examples herein, in which the first processor comprises a graphics processing unit (GPU); and the second processor comprises a field-programmable gate-array (FPGA).
Example A18 includes the system of any one of examples herein, in which the particle flow device includes a microfluidic device or a flow cell integrated with the actuator on the substrate of the microfluidic device or the flow cell.
Example A19 includes the system of any one of examples herein, in which the actuator includes a piezoelectric actuator coupled to the substrate and operable to produce a deflection to cause the particle to move in a direction that directs the particle along a trajectory to the output path of the two or more output paths.
Example A20 includes the system of any one of examples herein, in which the particles include cells, and the one or more properties associated with a cell includes an amount or a size of a feature of or on the cell, one or more sub-particles attached to the cell, or a particular morphology of the cell or portion of the cell.
Example A21 includes the system of any one of examples herein, in which the particles include cells, and the sorting criteria includes a cell contour, a cell size, a cell shape, a nucleus size, a nucleus shape, a fluorescent pattern, or a fluorescent color distribution.
Example A22 includes the system of any one of examples herein, in which the particles include cells, and the one or more properties associated with the cell includes a physiological property of the cell including a cell life cycle phase, an expression or localization of a protein by the cell, a damage to the cell, or an engulfment of a substance or sub-particle by the cell.
In some embodiments in accordance with the present technology (example A23), a method for image-based sorting of a particle includes obtaining, by an imaging system interfaced with a particle flow device, image data of a particle flowing through a channel of the particle flow device; producing, by a control command unit, a control command indicative of a particle class of the particle determined based on a gating model and the image data; and directing the particle into one of a plurality of output paths of the particle flow device based on the control command.
Example A24 includes the method of any one of examples herein, in which the control command is produced when the particle flows through the channel.
Example A25 includes the method of any one of examples herein, in which the gating model is a machine learning model trained to predict the particle class based on the image data.
Example A26 includes the method of any one of examples herein, in which the method includes allowing individual particles to flow through the channel; obtaining, by the imaging system, imaging data of the individual particles during flow through the channel; producing, by the control command unit, control commands indicative of particle classes of the individual particles that are determined based on the gating model and the image data of the individual particles while the individual particles flow through the channel; and directing the individual particles into the plurality of output paths of the particle flow device according to the control commands.
Example A27 includes the method of any one of examples herein, in which a latency between image capture of the particle and actuation of an actuator to direct the particle is within a time frame of 15 milliseconds or less. For example, the latency is less than 10 milliseconds, 8 milliseconds, 6 milliseconds, 5 milliseconds, or 3 milliseconds.
Example A28 includes the method of any one of examples herein, in which the gating model comprises a convolutional neural network (CNN) based Artificial Intelligence (AI) model.
Example A29 includes the method of any one of examples herein, in which the method further includes obtaining transmission images of the individual particles; and generating control commands for the individual particles based on the gating model and the corresponding transmission images.
In some embodiments in accordance with the present technology (example B1), a system includes a particle flow device structured to include a substrate, a channel formed on the substrate operable to flow individual cells along a flow direction to a first region of the channel, and two or more output paths branching from the channel at a second region proximate to the first region in the channel, an imaging system interfaced with the particle flow device and operable to obtain image data associated with a cell when the cell is in the first region during flow through the channel, a control command unit including a processor configured to produce a control command indicative of a cell class determined based on a gating model and the image data; and an actuator operatively coupled to the particle flow device and in communication with the control command unit, the actuator being operable to direct the cell into an output path of the two or more output paths based on the control command, wherein the system is operable to sort the individual cells during flow in the channel.
In some embodiments in accordance with the present technology (example B2), a method for image-based sorting of a cell includes obtaining, by an imaging system interfaced with a particle flow device, image data of a cell flowing through a channel of the particle flow device; producing, by a control command unit, a control command indicative of a cell class of the cell determined based on a gating model and the image data; and directing the cell into one of a plurality of output paths of the particle flow device based on the control command.
In some embodiments in accordance with the present technology (example C1), a real-time image-activated particle sorting microfluidic system includes a cell sorting system including a microfluidic channel configured to allow one or more particles to flow therein in a first direction; an imaging unit including one or more lenses and an imaging detector operable to obtain image data as the one or more particles are flowing in the microfluidic channel; a processor including, or coupled to, an artificial intelligence system coupled to the imaging unit to receive the image data and to determine a class of the one or more particles; and a transducer coupled to the processor and to the cell sorting system, wherein upon determination that a first of the one or more particles is classified as having a particular particle class, the processor is configured to provide a signal to actuate the transducer to direct the first of the one or more particles to a first output of the microfluidic channel.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Additionally, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Various embodiments described herein are described in the general context of methods or processes, which may be implemented in one embodiment by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), Blu-ray Discs, etc. Therefore, the computer-readable media described in the present application include non-transitory storage media. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
For example, one aspect of the disclosed embodiments relates to a computer program product that is embodied on a non-transitory computer readable medium. The computer program product includes program code for carrying out any one and/or all of the operations of the disclosed embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
This patent document is a continuation of International Application No. PCT/US2023/067943, filed Jun. 5, 2023, which claims priority to and benefits of U.S. Provisional Patent Application No. 63/365,836, entitled “IMAGE-ACTIVATED CELL SORTING USING DEEP LEARNING AND AI INFERENCING” filed on Jun. 3, 2022. The entire contents of the aforementioned patent applications are incorporated by reference as part of the disclosure of this patent document.
This invention was made with government support under grant no. 2R44DA045460-02 awarded by the National Institutes of Health (NIH). The government has certain rights in the invention.
| Number | Date | Country |
|---|---|---|
| 63/365,836 | Jun. 2022 | US |
| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/US2023/067943 | Jun. 2023 | WO |
| Child | 18/960,318 | | US |