The present disclosure relates to microcapsule extraction and, more particularly, to a system and method for selective microcapsule extraction.
Microencapsulation of cells and tissues has become a very popular area of research because of its ability to encase cells in a 3D environment, which can be utilized in cell-based medicine including tissue engineering, regenerative medicine, and cell-based therapies. For example, microcapsules can be used to create a biomimetic environment for ovarian follicles and a biocompatible immunoisolating barrier for stem cell and islet transplantation. Microfluidics has been widely explored for cell and tissue encapsulation because of its good controllability in terms of the size, morphology, and composition of the microcapsules. Microfluidic cell encapsulation is usually accomplished by shearing cell-laden aqueous fluids with a water-immiscible fluid (e.g., oil). One major challenge is that the percentage of cell-laden microcapsules out of the total microcapsules generated is very small (<5%) due to the low cell density required for encapsulating only one cell/tissue or cell aggregate per microcapsule and, in the case of pancreatic islets and ovarian follicles, the limited amount of these precious tissues that can be isolated. The resulting large number of empty microcapsules makes downstream processing difficult and is particularly undesirable for cell/tissue transplantation, for which the space available in vivo for housing the transplants is very limited. To remove empty microcapsules, sorting is needed. This can be done manually or by moving all the microcapsules to another device for sorting, which is tedious and may cause sample loss, contamination, and/or cell damage. Another challenge is that oil is not favorable for cell viability. Therefore, an effective method for timely on-chip selective extraction of cell-laden microcapsules from oil into an isotonic aqueous phase can be very important for maintaining cell quality for use in clinical settings.
Several methods have been reported for extraction of microcapsules from oil into an aqueous phase based on the size and/or natural hydrophilicity of the microcapsule surface. However, these studies have not explored selective extraction of cell-laden microcapsules. Several studies reported optical detection of cell-laden microcapsules to selectively extract them. However, these methods require either fluorescently labeling the cells or squeezing the cell-laden constructs. The former may negate the utility of the cell-laden microcapsules for cell-based medicine while the latter may damage the cell-laden hydrogel microcapsules permanently. More recently, a label-free method was demonstrated for on-chip selective extraction of cell aggregate-laden microcapsules from oil into an aqueous solution using an optical sensor and dielectrophoresis (DEP). However, the purity is low: only ˜30% of the extracted microcapsules are cell-laden. In addition, cell aggregates less than ˜80 μm in diameter are not detectable with the optical sensor-based method.
With the development of high-capacity graphics processing units (GPUs) and the availability of big data, deep learning has risen to prominence as an accessible tool for many fields in recent years, especially in the field of image processing. Deep learning is a type of machine learning that uses neural networks (nodes connected by varying weights) to make predictions based on input data. Supervised learning is an approach in which a program can be used to classify different objects that are input into the system by first “learning” from pre-labeled objects that are used to train the program. Object classification can be very useful for biomedical applications, and deep learning can be used to detect subtle features in images, which may surpass human performance. More recently, deep learning has been used to detect live and dead cells, as well as different types of cells. However, nearly all prior studies have focused on analyzing static images offline. Two studies have used machine learning to identify and separate droplets containing cells from droplets without cells on microfluidic devices in real time, but these studies require complex imaging setups with special photodetectors, do not result in high purity and efficiency in sorting, and/or do not create hydrogel microcapsules for further processing. One study has examined sorting hydrogel microcapsules (containing no cells/tissues) of different sizes by deep learning. The capability of deep learning to detect cell/tissue-laden hydrogel microcapsules in real time for their label-free on-chip selective extraction has not been explored so far.
Provided in accordance with aspects of the present disclosure is a system for selective microcapsule extraction including a non-planar core-shell microfluidic device. The non-planar core-shell microfluidic device generates microcapsules defining a core-shell configuration. A non-planar microfluidic device contains channels of varying heights, which are employed to create a core-shell microcapsule. A subset of the microcapsules contains aggregates, tissues, or at least one cell. A camera captures images of the microcapsules. A detection module includes a processor and a memory. The memory includes instructions that, when executed by the processor, cause the detection module to provide the images of the microcapsules as an input to a machine learning model. The machine learning model identifies microcapsules containing aggregates, tissues, or at least one cell. A force generator generates a force to extract the microcapsules. A microcontroller selectively activates the force generator to generate the force when the detection module identifies a microcapsule containing aggregates, tissues, or at least one cell, to extract the microcapsule.
In an aspect of the present disclosure, a microcapsule generated by the non-planar core-shell microfluidic device defines a biomimetic environment containing a plurality of single cells, a plurality of cell tissues, and a plurality of cell aggregates.
In an aspect of the present disclosure, the biomimetic environment includes at least one hydrogel surrounding the aggregates, tissues, or cell(s).
In an aspect of the present disclosure, the microfluidic device includes at least two immiscible phases. The force generator is selectively activated by the microcontroller to extract microcapsules from a first phase to a second phase.
In an aspect of the present disclosure, a core of a microcapsule generated by the non-planar core-shell microfluidic device is surrounded by an inner wall of a corresponding microcapsule.
In an aspect of the present disclosure, the core is centered in an inner space of the corresponding microcapsule.
In an aspect of the present disclosure, the machine learning model includes a machine learning classifier, or a convolutional neural network.
In an aspect of the present disclosure, the extracted microcapsule includes ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells. The ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells are arranged as a single cell, multiple cells, or cell aggregates.
In an aspect of the present disclosure, the non-invasive force generated by the force generator includes an electrical force or an acoustic force.
In an aspect of the present disclosure, the camera is a digital camera, a high speed digital camera, a camera embodied in a smartphone, or a camera embodied in a tablet computer.
Provided in accordance with aspects of the present disclosure is a computer-implemented method for selective microcapsule extraction including generating microcapsules. A subset of the microcapsules contains aggregates, tissues, or at least one cell. Images of the generated microcapsules are captured. The images are provided as an input to a machine learning model. The machine learning model identifies a microcapsule containing aggregates, tissues, or at least one cell based on the provided input. A non-invasive force is selectively applied to extract the microcapsule containing the aggregates, the tissues, or the at least one cell.
In an aspect of the present disclosure, the extracted microcapsule is provided as an in-vivo treatment, for an in-vitro study, or for cell analysis.
Provided in accordance with aspects of the present disclosure is a system for selective microcapsule or droplet extraction including a non-planar core-shell microfluidic device configured to generate microcapsules or droplets defining a core-shell configuration. A subset of the microcapsules or a subset of the droplets contains aggregates, tissues, or at least one cell. A camera captures images of the generated microcapsules or droplets. A detection module provides images of the generated microcapsules or droplets as an input to a machine learning model. The machine learning model identifies a microcapsule or droplet containing aggregates, tissues, or at least one cell. A force generator generates a force to be applied to the microcapsules or droplets. A microcontroller selectively activates the force generator to generate the force when the detection module identifies the microcapsule or the droplet containing the aggregates, the tissues, or the at least one cell, to extract the microcapsule or the droplet.
Various aspects and features of the present disclosure are described hereinbelow with reference to the drawings.
Descriptions of technical features or aspects of an exemplary configuration of the disclosure should typically be considered as available and applicable to other similar features or aspects in another exemplary configuration of the disclosure. Accordingly, technical features described herein according to one exemplary configuration of the disclosure may be applicable to other exemplary configurations of the disclosure, and thus duplicative descriptions may be omitted herein.
Exemplary configurations of the disclosure will be described more fully below (e.g., with reference to the accompanying drawings). Like reference numerals may refer to like elements throughout the specification and drawings.
The presently disclosed subject matter relates generally to a system and method for detection and selective extraction of microcapsules. In certain embodiments, the system and method include machine learning-based detection. In certain embodiments, the system operates on a microfluidic chip. In an example embodiment, the method provides highly efficient, machine learning-based, label-free on-chip detection and selective extraction for obtaining highly pure samples of cell/tissue-laden hydrogel microcapsules.
In the present disclosure, devices, systems, and methods for highly efficient deep learning-enabled label-free on-chip detection and selective extraction of cell aggregate-laden hydrogel microcapsules are described. This is achieved by using categorically labeled images to train a deep learning-based detection model, which is then used to dynamically analyze real-time images for label-free detection of the cell aggregate-laden microcapsules with ˜100% efficiency. Once a cell aggregate-laden microcapsule is detected, a DEP force is activated to extract it from oil into an aqueous phase with high efficiency (˜97%), high purity (˜90%), and high cell viability (>95%). DEP is a simple and fast method of moving particles that allows for a quick transfer of microcapsules from oil into an aqueous phase. An exemplary system includes a microfluidic device for microcapsule generation, a cell phone camera for imaging the on-chip detection area, a deep learning model for detection via analyzing the video frames from the camera in real time, and a microcontroller that receives the output from the deep learning model and controls the switch to activate/deactivate the DEP-based extraction. In the microfluidic device (see, e.g., microfluidic device 300 in the accompanying drawings), the cell-laden aqueous phase is sheared by the oil phase to generate the microcapsules in a microcapsule generation region (i).
The microcapsules further flow into the detection region (ii), where images are taken by a cell phone camera via the objective of a low-cost Zeiss (Oberkochen, Germany) Primovert brightfield light microscope. This can be done by attaching an iPhone® cell phone (Apple Inc., CA, USA) to the microscope with the phone camera overlaid on the microscope eyepiece. The phone also relays the images to a computer. The deep learning model on the computer analyzes the input images in real time to determine whether the microcapsule currently in the detection region contains a cell aggregate or is empty. This information is then sent to a microcontroller, which controls a switch that turns on when the model determines there is a microcapsule containing a cell aggregate in the detection region. Based on the flow speed of the oil phase, the distance between two adjacent microcapsules, the inference time of the detection system, and the time needed for DEP activation, a distance of 10 μm can be employed between the detection region and the electrode location to ensure timely extraction of a detected cell aggregate-laden microcapsule with minimal interference from neighboring microcapsules. When the switch is turned on, an electric field is applied across the microchannel via the two electrodes (E1 and E2, located 10 μm downstream of the detection region) to generate a DEP force for selectively extracting the cell aggregate-laden microcapsule from the oil phase into the isotonic aqueous extraction solution (iii). The extracted microcapsules then flow to the outlet O1 (iv), while non-extracted microcapsules stay in the oil phase and flow to the outlet O2. The microcapsules have a diameter of 219.4±8.2 μm.
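The real-time coupling between the camera feed, the deep learning model, and the microcontroller-driven DEP switch described above can be illustrated with a minimal control-loop sketch in Python. This is a hypothetical illustration rather than the exact program of the present disclosure: the model path, serial port, class label, score threshold, and the use of OpenCV and pySerial for input/output are all assumptions.

```python
# Hypothetical sketch of the real-time detect-and-extract loop described above.
# Assumed: a trained TensorFlow SavedModel, the phone-camera stream exposed as an
# OpenCV capture device, and a microcontroller listening for '1'/'0' on a serial port.
import cv2                # frame capture from the relayed phone-camera video
import numpy as np
import serial             # pySerial: sends the switch on/off command to the microcontroller
import tensorflow as tf

detector = tf.saved_model.load("capsule_ssd_savedmodel")   # assumed model path
mcu = serial.Serial("/dev/ttyACM0", baudrate=115200)        # assumed serial port

CELL_LADEN_CLASS = 1      # assumed label id for "cell aggregate-laden"
SCORE_THRESHOLD = 0.7     # assumed confidence threshold

cap = cv2.VideoCapture(0)  # assumed index of the stream showing the detection region
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # SSD models exported from the TF Object Detection API accept a uint8 batch [1, H, W, 3]
    batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    detections = detector(batch)
    classes = detections["detection_classes"][0].numpy().astype(int)
    scores = detections["detection_scores"][0].numpy()
    laden_present = bool(np.any((classes == CELL_LADEN_CLASS) & (scores >= SCORE_THRESHOLD)))
    # '1' asks the microcontroller to close the switch and apply the DEP field; '0' releases it.
    mcu.write(b"1" if laden_present else b"0")
```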
A deep learning model can be utilized to enable label-free detection of cell aggregate-laden microcapsules in real-time. This is achieved through training the deep learning neural network model using pre-labeled (i.e., with or without a cell-laden microcapsule) images of the detection region. Once the model is trained, images from the cell phone camera showing the detection region of the microfluidic chip are read into the detection program and the model determines whether or not there is an aggregate-laden microcapsule in the detection region in real-time.
An exemplary detection model is based on the single shot multibox detector (SSD), a current state-of-the-art model for object detection. The detection model includes two components: a backend feature extractor followed by several convolutional layers for bounding box prediction. The predicted bounding boxes are refined through non-maximum suppression. A comparison between three different backend structures is described (see, e.g., the accompanying drawings).
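Because non-maximum suppression is the step that prunes the SSD's overlapping candidate boxes down to final detections, a minimal, self-contained sketch of it is given below. The box format ([x1, y1, x2, y2]) and the IoU threshold are illustrative assumptions, not values specified in the present disclosure.

```python
# Minimal non-maximum suppression sketch: keep the highest-scoring boxes and drop
# any box that overlaps an already-kept box by more than `iou_threshold`.
import numpy as np

def non_max_suppression(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5):
    order = scores.argsort()[::-1]          # candidate indices, sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current top box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]   # keep only boxes with small overlap
    return keep
```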
When real-time extraction of cell aggregate-laden microcapsules is performed, enabled by the deep learning-based label-free on-chip detection using the detection model trained with the MobileNet backend structure, the efficiency and purity of both detection and selective extraction are increased, as assessed through microcapsule collection and counting as well as quantification of cell viability. A video frame breakdown of the deep learning-based detection and selective extraction is shown in the accompanying drawings.
The deep learning-based label-free methods described herein can detect cell aggregates (50-250 μm in diameter) with an ˜100% detection efficiency. To determine the selective extraction efficiency and purity as well as cell viability, microcapsules are collected from both the aqueous outlet O1 and the oil outlet O2. The extraction efficiency is defined as the percentage of extracted cell aggregate-laden microcapsules out of all cell aggregate-laden microcapsules, while the extraction purity is defined as the percentage of extracted cell aggregate-laden microcapsules out of the total extracted microcapsules. The purity of the selectively extracted microcapsules is significantly higher than that (˜2%) obtained without selective extraction.
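These efficiency and purity definitions translate directly into counts of microcapsules collected at the two outlets. The following short sketch computes both metrics; the example counts are hypothetical and merely chosen to be consistent with the approximate values quoted herein.

```python
# Extraction efficiency and purity from microcapsule counts at the two outlets,
# following the definitions above. The example counts below are hypothetical.
def extraction_metrics(laden_in_O1: int, empty_in_O1: int, laden_in_O2: int):
    total_laden = laden_in_O1 + laden_in_O2        # all cell aggregate-laden microcapsules
    total_extracted = laden_in_O1 + empty_in_O1    # everything collected at the aqueous outlet O1
    efficiency = laden_in_O1 / total_laden         # fraction of laden capsules that were extracted
    purity = laden_in_O1 / total_extracted         # fraction of extracted capsules that are laden
    return efficiency, purity

# Hypothetical counts: 97 laden capsules extracted, 11 empty capsules carried over,
# 3 laden capsules missed -> efficiency = 0.97, purity ~ 0.90
print(extraction_metrics(97, 11, 3))
```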
The deep learning-based label-free method can detect cell aggregates of 50-250 μm in diameter with an ˜100% detection efficiency, which enables selective extraction of the cell aggregate-laden microcapsules via DEP force with an ˜97% extraction efficiency. This high detection efficiency can also be attributed to the size of the microcapsules and the design of the devices and systems described herein, which ensure that the cell aggregates are not far from the plane of focus (i.e., within the depth of focus) so that they show up in the images used by the deep learning algorithm for detection. This detection method is much better than a previously reported optical sensor-based approach that is unable to detect or extract any cell aggregates less than 82 μm in diameter. This is important for biomedical applications, for instance islet microencapsulation, because islets can be as small as 50 μm. The purity (˜90%) of the deep learning-based extraction is also much higher than that (˜30%) achieved with an optical sensor-based detection method for DEP-based extraction.
The throughput of the devices, systems, and methods described herein is ˜1.5 microcapsules per second. This is due to the flow rates of the aqueous and oil phases used, which are optimized based on the time needed for detection and extraction, as well as the time needed for the oil/aqueous interface to become stable after extracting a cell-laden microcapsule. If the rate of microcapsule generation is too high and the microcapsules are too close to each other, the purity of the extracted cell-laden microcapsules may decrease. This is because neighboring microcapsules (cell-laden or not) may be extracted either before the DEP is deactivated or before the interface becomes stable after extracting a target cell-laden microcapsule. This may contribute partially to the low purity (˜30%) of conventional optical sensor-based systems with a throughput of ˜3.75 microcapsules per second. Nonetheless, for the applications of microencapsulating islets and follicles, usually ˜100 follicles or ˜1,000 islets are needed at a time, making this throughput of ˜1.5 microcapsules per second sufficient. For applications that require higher throughput, smaller microcapsules that cause less interface destabilization could be used, along with a more viscous oil phase and aqueous extraction solution to further stabilize the interface, to increase the throughput while keeping a high purity. The electrical conductivity of the oil phase can also be adjusted for faster extraction. Advances in deep learning and improved backend structures, along with a high-speed camera for imaging and a faster computer processor and microcontroller, can also increase throughput. Throughput may also be increased by running multiple microfluidic devices in parallel.
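To make the throughput limit discussed above concrete, a back-of-the-envelope timing budget is sketched below. Apart from the ˜15 ms detection-to-extraction latency quoted later in this disclosure, the DEP on-time and interface-recovery time are purely illustrative assumptions.

```python
# Back-of-the-envelope timing budget for the ~1.5 microcapsules/s throughput discussed above.
# All values except the ~15 ms detection-to-extraction latency are illustrative assumptions,
# not measured values from this disclosure.
detection_latency_s = 0.015    # inference + DEP activation (quoted as ~15 ms)
dep_on_time_s = 0.15           # assumed time the field stays on to pull one capsule across
interface_recovery_s = 0.5     # assumed time for the oil/aqueous interface to restabilize
min_capsule_spacing_s = detection_latency_s + dep_on_time_s + interface_recovery_s
max_throughput = 1.0 / min_capsule_spacing_s
print(f"max ~{max_throughput:.1f} capsules/s")  # ~1.5 capsules/s under these assumptions
```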
The deep learning system can also be trained with images of smaller aggregates (e.g., smaller than the 50-250 μm aggregates used here for the application of microencapsulating single pancreatic islets and ovarian follicles, which have diameters over the same range) or single cells. The microfluidic device and detection system can also be adjusted to ensure that smaller aggregates and single cells can be identified and are within the depth of focus for imaging so that a high detection and extraction efficiency can be achieved. Reducing the microcapsule size may also help to increase detection efficiency, especially for detecting smaller aggregates or a single cell in the microcapsules. The position of cells, the plane of focus, and microcapsule edge opacity may vary with microcapsule size and can affect the detection efficiency. This type of model can also be applied to sort cell aggregates based on their size by training the model with images of aggregates of varying sizes in the microcapsules.
The elimination of cell labeling with the label-free devices, systems, and methods described herein is of great significance to the use of the cells for downstream biomedical applications where labeled cells cannot be used, including treating type 1 diabetes with microencapsulated islets, as well as microencapsulation of an ovarian follicle for biomimetic 3D culture to treat infertility. The devices, systems, and methods described herein allow for a quick transfer of microcapsules from detection to extraction (˜15 ms), and the microcapsules are moved from the oil phase, which is not favorable for the survival of living cells, into an aqueous solution in less than 10 seconds from the time they are generated to their extraction (determined by dividing the gelling microchannel length by the flow speed in the channel). This contributes to the high cell viability of the extracted sample. This approach eliminates the need for tedious manual sorting of non-labeled aggregates and the associated possibility of sample contamination.
Referring generally to
Referring particularly to
Referring particularly to
Referring particularly to
The microfluidic device (see, e.g., microfluidic device 300 in the accompanying drawings) generates the microcapsules, which are carried by the oil phase toward the detection region.
A deep learning-based detection program processes images of the microcapsules received from a camera (e.g., camera 102) to determine whether the microcapsules contain a cell aggregate or are empty. Once a cell aggregate-laden microcapsule is detected, the microcontroller (e.g., microcontroller 105) is informed to turn the switch (e.g., switch 209) on, activating a force (e.g., a DEP force) to extract the cell aggregate-laden microcapsule into the aqueous extraction channel (iii).
Referring particularly to
A camera 102 captures images of the microcapsules. The images may be digital images and may include video images. The camera 102 may be a digital camera, a high speed digital camera, a camera embodied in a smartphone, or a camera embodied in a tablet computer.
A detection module 103 includes a processor and a memory. The memory includes instructions that, when executed by the processor, cause the detection module 103 to provide the images of the microcapsules as an input to a machine learning model. The detection module may run on any computer, as described herein (see, e.g., the computer 1400 described in more detail below).
The machine learning model identifies microcapsules containing aggregates, tissues, or at least one cell. The machine learning model may include a machine learning classifier or a convolutional neural network (see, e.g., the accompanying drawings).
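As one illustration of the convolutional neural network option mentioned above, a minimal binary classifier over cropped detection-region images could be built as sketched below. This is a hypothetical Keras model with an assumed 128×128 grayscale input; it is not the SSD-based detection model described elsewhere herein.

```python
# Minimal convolutional classifier sketch for "empty" vs. "cell aggregate-laden" images.
# Hypothetical architecture and input size, shown only to illustrate the CNN option.
import tensorflow as tf

def build_capsule_classifier(input_shape=(128, 128, 1)) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # predicted P(cell aggregate-laden)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```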
A force generator 104 generates a force to extract the microcapsules. A microcontroller 105 selectively activates the force generator 104 to generate the force when the detection module 103 identifies a microcapsule containing aggregates, tissues, or at least one cell to extract the microcapsule. The force may be a non-invasive force.
In an aspect of the present disclosure, the force generated by the force generator 104 includes an electrical force, an acoustic force, or a mechanical force.
Referring particularly to
Referring particularly to
In an aspect of the present disclosure, the microcapsule 510 generated by the non-planar core-shell microfluidic device 500 defines a biomimetic environment containing a plurality of single cells, a plurality of cell tissues, and a plurality of cell aggregates. The biomimetic environment may also contain a single cell. The biomimetic environment may include at least one hydrogel surrounding the aggregates, tissues, or cell(s).
The microfluidic device 500 includes at least two immiscible phases (e.g., an oil phase 501 and an aqueous phase 502). The force generator is selectively activated by the microcontroller to extract microcapsules from a first phase to a second phase (e.g., by activating electrodes 503, 504).
Referring particularly to
In an aspect of the present disclosure, the extracted microcapsule 510 includes ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells. The ovarian follicle cells, pancreatic islet cells, stem cells, or somatic cells are arranged as a single cell, multiple cells, or cell aggregates.
Referring to
Referring to
Referring to
The system 1000 described below with reference to
Referring to
Referring particularly to
Referring to
With ongoing reference to
Referring to
In an aspect of the present disclosure, the extracted microcapsule is provided as an in-vivo treatment, for an in-vitro study, or for cell analysis.
Referring to
In some aspects of the disclosure, the memory 1402 can be random access memory, read-only memory, magnetic disk memory, solid-state memory, optical disc memory, and/or another type of memory. The memory 1402 can communicate with the processor 1401 through communication buses 1403 of a circuit board and/or through communication cables such as serial ATA cables or other types of cables. The memory 1402 includes computer-readable instructions that are executable by the processor 1401 to operate the computer 1400 to execute the algorithms described herein. The computer 1400 may include a network interface 1404 to communicate (e.g., through a wired or wireless connection) with other computers or a server. A storage device 1405 may be used for storing data. The computer 1400 may include one or more FPGAs 1406. The FPGA 1406 may be used for executing various machine learning algorithms. A display 1407 may be employed to display data processed by the computer 1400.
The machine learning models described herein (e.g., including a neural network) can be trained using a dataset including known matching and non-matching entries (e.g., data of previously verified microcapsules including aggregates, tissues, or at least one cell and verified microcapsules not including any cells, aggregates or tissues). The machine learning models may be trained on datasets for multiple cells or aggregates of cells contained in a core-shell microcapsule, as described herein.
An exemplary training process for the deep learning model is described in more detail below. The deep learning model can be trained using labeled images of empty and cell aggregate-laden microcapsules. Images of air bubbles and noise in the oil phase are included to help the model distinguish between noise, microcapsules, and cell aggregates. First, iVCam is used to record videos of microcapsules in the detection region using an iPhone® attached to the eyepiece of a Zeiss Primovert microscope. The videos are then split into frames, and frames with empty microcapsules and cell aggregate-laden microcapsules are collected (400 empty and 400 aggregate-laden). Images are cropped to include only one microcapsule and labeled as “empty” or “cell aggregate-laden” using the LabelImg program from the Python® Package Index (PyPI). The labeled image data are then divided randomly into training and testing data (80% training, 20% testing). The images are used to train the deep learning models using TensorFlow® (Google) for 100,000 steps. The testing data are then used to confirm the model's detection precision.
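The frame-splitting and 80/20 data-splitting steps described above could be scripted as in the following sketch. The file paths, the frame-sampling stride, and the random seed are assumptions for illustration; the labeling itself is done interactively in LabelImg, and the actual model training uses the TensorFlow object-detection workflow rather than this script.

```python
# Sketch of the data-preparation steps described above: split recorded videos into frames
# and divide the labeled images 80/20 into training and testing sets.
# Paths, the frame stride, and the random seed are illustrative assumptions.
import random
from pathlib import Path

import cv2

def video_to_frames(video_path: str, out_dir: str, stride: int = 5) -> None:
    """Save every `stride`-th frame of a recorded detection-region video as a PNG."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.png", frame)
        index += 1
    cap.release()

def split_dataset(image_paths, train_fraction: float = 0.8, seed: int = 0):
    """Randomly split labeled image paths into training (80%) and testing (20%) sets."""
    rng = random.Random(seed)
    shuffled = list(image_paths)
    rng.shuffle(shuffled)
    cut = int(train_fraction * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```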
The documents listed below and referenced herein are incorporated herein by reference in their entireties, except for any statements contradictory to the express disclosure herein, subject matter disclaimers or disavowals, and except to the extent that the incorporated material is inconsistent with the express disclosure herein, in which case the language in this disclosure controls. Incorporation by reference of the following shall not be considered an admission by the applicant that the incorporated materials are prior art to the present disclosure, nor shall any document be considered material to patentability of the present disclosure.
J. Zhang, R. J. Coulston, S. T. Jones, J. Geng, O. A. Scherman, C. Abell, Science 2012, 335, 690.
A. S. Mao, B. Ozkale, N. J. Shah, K. H. Vining, T. Descombes, L. Zhang, C. M. Tringides, S. W. Wong, J. W. Shin, D. T. Scadden, D. A. Weitz, D. J. Mooney, Proc. Natl. Acad. Sci. U.S.A. 2019, 116, 15392.
A. J. Vegas, O. Veiseh, M. Gürtler, J. R. Millman, F. W. Pagliuca, A. R. Bader, J. C. Doloff, J. Li, M. Chen, K. Olejnik, H. H. Tam, S. Jhunjhunwala, E. Langan, S. Aresta- Dasilva, S. Gandham, J. J. McGarrigle, M. A. Bochenek, J. Hollister-Lock, J. Oberholzer, D. L. Greiner, G. C. Weir, D. A. Melton, R. Langer, D. G. Anderson, Nat. Med. 2016, 22, 306.
X. He, ACS Biomater. Sci. Eng. 2017, 3, 2692.
W. Zhang, X. He, J. Healthc. Eng. 2011, 2, 427.
P. Agarwal, J. K. Choi, H. Huang, S. Zhao, J. Dumbleton, J. Li, X. He, Part. Part. Syst. Charact. 2015, 32, 809.
J. K. Choi, P. Agarwal, H. Huang, S. Zhao, X. He, Biomaterials 2014, 35, 5122.
M. Ma, A. Chiu, G. Sahay, J. C. Doloff, N. Dholakia, R. Thakrar, J. Cohen, A. Vegas, D. Chen, K. M. Bratlie, T. Dang, R. L. York, J. Hollister-Lock, G. C. Weir, D. G. Anderson, Adv. Healthc. Mater. 2013, 2, 667.
A. M. White, J. G. Shamul, J. Xu, S. Stewart, J. S. Bromberg, X. He, ACS Biomater. Sci. Eng. 2020, 6, 2543.
S. Zhao, Z. Xu, H. Wang, B. E. Reese, L. V Gushchina, M. Jiang, P. Agarwal, J. Xu, M. Zhang, R. Shen, Z. Liu, N. Weisleder, X. He, Nat. Commun. 2016, 7, 1.
R. Seemann, M. Brinkmann, T. Pfohl, S. Herminghaus, Reports Prog. Phys. 2012, 75, 016601.
D. M. Headen, G. Aubry, H. Lu, A. J. Garcia, Adv. Mater. 2014, 26, 3003.
L. Shang, Y. Cheng, Y. Zhao, Chem. Rev. 2017, 117, 7964.
H. Huang, X. He, Lab Chip 2015, 15, 4197.
K. Y. Lee, D. J. Mooney, Prog. Polym. Sci. 2012, 37, 106.
D. J. Collins, A. Neild, A. DeMello, A. Q. Liu, Y. Ai, Lab Chip 2015, 15, 3439.
M. De Groot, B. J. De Haan, P. P. M. Keizer, T. A. Schuurs, R. Van Schilfgaarde, H. G. D. Leuvenink, Lab. Anim. 2004, 38, 200.
D. Dufrane, W. D'hoore, R. M. Goebbels, A. Saliez, Y. Guiot, P. Gianello, Xenotransplantation 2006, 13, 204.
X. He, T. L. Toth, Semin. Cell Dev. Biol. 2017, 61, 140.
M. Sun, P. Durkin, J. Li, T. L. Toth, X. He, ACS Sensors 2018, 3, 410.
H. Wang, P. Agarwal, B. Jiang, S. Stewart, X. Liu, Y. Liang, B. Hancioglu, A. Webb, J. P. Fisher, Z. Liu, X. Lu, K. H. R. Tkaczuk, X. He, Adv. Sci. 2020, 7, 2000259.
H. Huang, M. Sun, T. Heisler-Taylor, A. Kiourti, J. Volakis, G. Lafyatis, X. He, Small 2015, 11, 5369.
J. Nam, H. Lim, C. Kim, J. Yoon Kang, S. Shin, Biomicrofluidics 2012, 6, 024120.
A. Sciambi, A. R. Abate, Lab Chip 2015, 15, 47.
X. He, Ann. Biomed. Eng. 2017, 45, 1676.
P. R. O'Neill, W. K. A. Karunarathne, V. Kalyanaraman, J. R. Silvius, N. Gautama, Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 20784.
D. R. Gossett, H. T. K. Tse, J. S. Dudani, K. Goda, T. A. Woods, S. W. Graves, D. Di Carlo, Small 2012, 8, 2757.
E. Pariset, C. Pudda, F. Boizot, N. Verplanck, J. Berthier, A. Thuaire, V. Agache, Small 2017, 13, DOI 10.1002/smll.201770201.
T. S. H. Tran, B. D. Ho, J. P. Beech, J. O. Tegenfeldt, Lab Chip 2017, 17, 3592.
E. H. M. Wong, E. Rondeau, P. Schuetz, J. Cooper-White, Lab Chip 2009, 9, 2582.
J. J. Agresti, E. Antipov, A. R. Abate, K. Ahn, A. C. Rowat, J. C. Baret, M. Marquez, A. M. Klibanov, A. D. Griffiths, D. A. Weitz, Proc. Natl. Acad. Sci. U.S.A. 2010, 107, 4004.
Z. Cao, F. Chen, N. Bao, H. He, P. Xu, S. Jana, S. Jung, H. Lian, C. Lu, Lab Chip 2013, 13, 171.
S. Webb, Nature 2018, 554, 555.
X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, A. Ozcan, Science 2018, 361, 1004.
E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O'Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, S. Finkbeiner, Cell 2018, 173, 792.
Y. Wu, H. Shroff, Nat. Methods 2018, 15, 1011.
P. Zhang, S. Liu, A. Chaurasia, D. Ma, M. J. Mlodzianoski, E. Culurciello, F. Huang, Nat. Methods 2018, 15, 913.
A. S. Adamson, A. Smith, JAMA Dermatology 2018, 154, 1247.
T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, C. L. Zitnick, in Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 2014, pp. 740-755.
J. Schmidhuber, Neural Networks 2015, 61, 85.
Y. LeCun, Y. Bengio, G. Hinton, Nature 2015, 521, 436.
Y. J. Heo, D. Lee, J. Kang, K. Lee, W. K. Chung, Sci. Rep. 2017, 7, 1.
Z. Zhang, J. Ge, Z. Gong, J. Chen, C. Wang, Y. Sun, Int. J. Lab. Hematol. 2020, DOI 10.1111/ijlh.13380.
V. Anagnostidis, B. Sherlock, J. Metz, P. Mair, F. Hollfelder, F. Gielen, Lab Chip 2020, 20, 889.
M. Girault, H. Kim, H. Arakawa, K. Matsuura, M. Odaka, A. Hattori, H. Terazono, K. Yasuda, Sci. Rep. 2017, 7, DOI 10.1038/srep40072.
A. Chu, D. Nguyen, S. S. Talathi, A. C. Wilson, C. Ye, W. L. Smith, A. D. Kaplan, E. B. Duoss, J. K. Stolaroff, B. Giera, Lab Chip 2019, 19, 1808.
J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, K. Murphy, in Proc.—30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, 2017, pp. 3296-3305.
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, A. C. Berg, in Lect. Notes Comput. Sci. (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 2016, pp. 21-37.
A. Mousavian, D. Anguelov, J. Košecká, J. Flynn, in Proc.—30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, 2017, pp. 5632-5640.
S. Ren, K. He, R. Girshick, J. Sun, in Adv. Neural Inf. Process. Syst., 2015, pp. 91-99.
A. Neubeck, L. Van Gool, in Proc. - Int. Conf. Pattern Recognit., 2006, pp. 850-855.
A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, arXiv 2017, 1704.04861.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2015, pp. 1-9.
K. He, X. Zhang, S. Ren, J. Sun, in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770-778.
A. C. Schapiro, T. T. Rogers, N. I. Cordova, N. B. Turk-Browne, M. M. Botvinick, in ArXiv Prepr., 2016.
J. Jo, Y. C. Moo, D. S. Koh, Biophys. J. 2007, 93, 2655.
H. Huang, X. He, Appl. Phys. Lett. 2014, 105, 143704.
MicroChem, “SU-8 2000 (2000.5-2015) Permanent Epoxy Negative Photoresist PROCESSING GUIDELINES FOR: SU-8 2100 and SU-8 2150,” can be found under www.atgc.cajp, 2010.
It will be understood that various modifications may be made to the aspects and features disclosed herein. Therefore, the above description should not be construed as limiting, but merely as exemplifications of various aspects and features. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended thereto.
The present application claims priority to U.S. Provisional Patent Application No. 63/177,297 filed on Apr. 20, 2021, the entire contents of which are incorporated by reference herein.
This invention was made with government support under R01EB023632B awarded by the National Institutes of Health. The government has certain rights in the invention.