The technical field generally relates to devices and methods for label-free aerosol particle sensing/screening using a portable or mobile microscope, in which trained deep neural networks reconstruct images of the captured aerosols and classify or label the aerosol particles. The devices and methods have particular applicability to bio-aerosols.
A human adult inhales about seven liters of air every minute, which on average contains 10²-10³ micro-biological cells (bio-aerosols). In some contaminated environments, this number can easily exceed 10⁶ bio-aerosols. These bio-aerosols include micro-scale airborne living organisms that originate from plants or animals, and include pollens, mold/fungi spores, bacteria, and viruses. Bio-aerosols are generated both naturally and anthropogenically, from, e.g., animal houses, composting facilities, construction sites, and other human and animal activities. These bio-aerosols can stay suspended in the air for prolonged periods of time, remain at significant concentrations even far away from the generating site (up to one kilometer), and can even travel over continental distances. Basic environmental conditions, such as temperature and moisture level, can also considerably influence bio-aerosol formation and dispersion. When inhaled by a human, they can stay in the respiratory tract and cause irritation, allergies, and various diseases, including cancer and even premature death. In fact, bio-aerosols account for 5-34% of the total amount of indoor particulate matter (PM). In recent years, there has been increased interest in monitoring environmental bio-aerosols and understanding their composition, to avoid and/or mitigate their negative impacts on human health, both in peacetime and under the threat of biological attacks.
Currently, most bio-aerosol monitoring activities still rely on a technology that was developed more than fifty years ago. In this method, an aerosol sample is taken at the inspection site using a sampling device such as an impactor, a cyclone, a filter, or a spore trap. This sample is then transferred to a laboratory, where the aerosols are transferred to certain liquid media or solid substrates and inspected manually under a microscope or through culture experiments. The microscopic inspection of the sample usually involves labeling with colorimetric or fluorescent stain(s) to increase the contrast of the captured bio-aerosols under a microscope. Regardless of the specific method that is employed, the use of manual inspection in a laboratory following a field collection significantly increases the costs and delays the reporting of the results. Partially due to these limitations, out of ˜10,000 air-sampling stations worldwide, only a very small portion has bio-aerosol sensing/measurement capability. Even in developed countries, bio-aerosol levels are only reported on a daily basis at city scales. As a result, human exposure to bio-aerosols is hard to quantify with the existing set of technologies.
Driven by this need, different techniques have been emerging towards potentially label-free, on-site and/or real-time bio-aerosol monitoring. In one of these techniques, the air is driven through a small channel, and an ultraviolet (UV) source is focused on a nozzle of this channel, exciting the auto-fluorescence of each individual bio-aerosol flowing through the nozzle. This auto-fluorescence signal is then captured by one or more photodetectors and used to differentiate bio-aerosols from non-fluorescent background aerosols. Recently, other machine learning algorithms have also been applied to classify bio-aerosols from their auto-fluorescence signals using a UV-LIF spectrophotometer. However, measuring auto-fluorescence by itself may not provide sufficient specificity for classification. To detect weak auto-fluorescence signals, this design also requires strong UV sources, sensitive photodetectors, and high-performance optical components, making the system relatively costly and bulky. Furthermore, the sequential read-out scheme in these flow-based designs also limits their sampling rate and throughput to <5 L/min. Alternative bio-aerosol detection methods rely on antibodies to specifically capture bio-aerosols of interest on, e.g., a vibrational cantilever or a surface plasmon resonance (SPR) substrate, which can then detect these captured bio-aerosols through a change in the cantilever vibrational frequency or a shift in the SPR spectrum, respectively. While these approaches provide very sensitive detection of a specific type of bio-aerosol, their performance can be compromised by non-specific binding and/or changes in the environmental conditions (e.g., temperature, moisture level, etc.), which impact the effectiveness of the surface chemistry. Moreover, the reliance on specific antibodies makes it harder for these approaches to scale up the number of target bio-aerosols and to cover unknown targets. Bio-aerosol detection and composition analysis using Raman spectroscopy has also been demonstrated. However, due to weak signal levels and contamination from background spectra, the sensitivities of these methods have been relatively low despite their expensive and bulky hardware; it is challenging to analyze or detect, e.g., a single bio-aerosol within a mixture of other aerosols. It is also possible to detect bio-aerosols by detecting their genetic material (e.g., DNA), using polymerase chain reaction (PCR), enzyme-linked immunosorbent assays (ELISA) or metagenomics, all of which can provide high sensitivity and specificity. However, these detection methods are usually based on post-processing of bio-aerosols in laboratory environments (i.e., they involve field sampling, followed by the transportation of the sample to a central laboratory for advanced processing), and are therefore low-throughput, also requiring an expert and the consumption of costly reagents. Therefore, there is still an urgent unmet need for accurate, label-free and automated bio-aerosol sensing covering a wide range of bio-aerosols, ideally within a field-portable, compact and cost-effective platform.
The device, in one embodiment, uses a combination of an impactor and a lens-less digital holographic on-chip microscope: bio-aerosol particles in air are captured on the impactor substrate at a sampling rate of 13 L/min. These collected bio-aerosol particles generate diffraction holograms that are recorded directly by an image sensor positioned right below the substrate (which is optically transparent). Each hologram contains information on the complex optical field, and therefore both the amplitude and phase information of each individual bio-aerosol are captured. After digital holograms of the bio-aerosol particles are acquired and transmitted to a computing device such as a remote server (or a local PC, tablet, or portable electronic device such as a smartphone), these holograms are rapidly processed through an image-processing pipeline (with image processing software), within a minute, reconstructing the entire field-of-view (FOV) of the device, i.e., 4.04 mm², over which the captured bio-aerosol particles are analyzed. Enabled by trained deep neural networks (implemented as convolutional neural networks (CNNs) in one preferred embodiment), the reconstruction algorithm first reconstructs both the amplitude and phase image of each individual bio-aerosol particle with sub-micron resolution, then performs automatic classification of the imaged bio-aerosol particles into pre-trained classes, and counts the density of each class in air (additional information or parameters of the bio-aerosol particles may also be output). To demonstrate the effectiveness of the device and method, the reconstruction and label-free sensing of five different types of bio-aerosols was performed: Bermuda grass pollen, oak tree pollen, ragweed pollen, Aspergillus spores, and Alternaria spores, as well as non-biological aerosols as part of the default background pollution. The Bermuda grass, oak tree, and ragweed pollens have long been recognized as some of the most common grass-, tree- and weed-based allergens that can cause severe allergic reactions. Similarly, the Aspergillus and Alternaria spores are two of the most common mold spores found in air and can cause allergic reactions and various diseases. Furthermore, Aspergillus spores have been proven to be a culprit of asthma in children. Some of these mold species/sub-species can also generate mycotoxins that weaken the human immune system. The trained deep neural network (i.e., a trained CNN) is trained to differentiate these six different types of aerosol particles, achieving an accuracy of 94% using the mobile instrument. This label-free bio-sensing platform can be further scaled up to specifically detect other types of bio-aerosols by training it with purified populations of new target object types, as long as these bio-aerosol particles exhibit unique spatial and/or spectral features that can be detected through the holographic imaging system.
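For illustration only, the following is a minimal Python/TensorFlow sketch of this two-stage pipeline; it is not the authors' implementation. The class list, weight file names, and the helper functions back_propagate and detect_rois (corresponding to the angular spectrum propagation and spot detection steps detailed in the Methods sections below) are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

# Hypothetical class list matching the six aerosol types described herein.
CLASSES = ["Bermuda grass pollen", "oak tree pollen", "ragweed pollen",
           "Aspergillus spore", "Alternaria spore", "background dust"]

# Hypothetical weight files for the two trained CNNs (see Methods below).
recon_net = tf.keras.models.load_model("reconstruction_cnn.h5")
classifier_net = tf.keras.models.load_model("classification_cnn.h5")

def sense_aerosols(holo_before, holo_after, back_propagate, detect_rois):
    """Differential hologram -> CNN reconstruction -> CNN classification."""
    diff = holo_after - holo_before                    # differential hologram
    field = back_propagate(diff)                       # coarse focus at ~750 um
    x = np.stack([field.real, field.imag], -1)[None]   # 2-channel network input
    amp_phase = recon_net.predict(x)[0]                # autofocused amplitude/phase
    counts = {}
    for roi in detect_rois(amp_phase):                 # 256x256 crops, one per particle
        scores = classifier_net.predict(roi[None])[0]  # six class scores
        label = CLASSES[int(np.argmax(scores))]
        counts[label] = counts.get(label, 0) + 1       # per-class particle counts
    return counts
```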
This platform enables the automated, label-free sensing and classification of bio-aerosols using a portable and cost-effective device, enabled by computational microscopy and deep-learning, which are used for both image reconstruction and particle classification. The mobile bio-aerosol detection device is hand-held, weighs less than 600 g, and its parts cost less than $200 under low-volume manufacturing. Compared to earlier results on PM measurements using mobile microscopy without any classification capability, this platform enables label-free and automated bio-aerosol sensing using deep learning, providing a unique capability for specific and sensitive detection and counting of, e.g., pollen and mold particles in air. The platform can find a wide range of applications in label-free aerosol sensing and environmental monitoring, including both bio-aerosols and non-biological aerosols.
In one embodiment, a method of classifying aerosol particles using a portable microscope device includes capturing aerosol particles on a substrate. The substrate is optically transparent and tacky or sticky so that aerosol particles adhere thereto. One or more illumination sources in the portable microscope device then illuminate the substrate containing the captured aerosol particles. An image sensor disposed in the portable microscope device adjacent to the substrate then captures holographic images or diffraction patterns of the captured aerosol particles. The image files generated by the image sensor are then processed with image processing software contained on a local or remote computing device, wherein the image processing comprises inputting the holographic images or diffraction patterns through a first trained deep neural network to output reconstructed amplitude and phase images of each aerosol particle at the one or more illumination wavelengths, and wherein a second trained deep neural network receives as an input the outputted reconstructed amplitude and phase images of each aerosol particle at the one or more illumination wavelengths and outputs one or more of the following for each aerosol particle: a classification or label of the type of aerosol particle, a classification or label of the species of the aerosol particle, a size of the aerosol particle, a shape of the aerosol particle, a thickness of the aerosol particle, and a spatial feature of the particle.
In another embodiment, a system for classifying aerosol particles includes a portable, lens-free microscopy device for monitoring air quality. The device includes a housing; a vacuum pump configured to draw air into an impaction nozzle disposed in the housing, the impaction nozzle having an output located adjacent to an optically transparent substrate for collecting particles contained in the air; one or more illumination sources disposed in the housing and configured to illuminate the collected particles on the optically transparent substrate; and an image sensor disposed in the housing and located adjacent to the optically transparent substrate, wherein the image sensor collects diffraction patterns or holographic images cast upon the image sensor by the collected particles. The system includes a computing device having one or more processors executing image processing software thereon and configured to receive the holographic images or diffraction patterns obtained from the portable, lens-free microscopy device, wherein the image processing software inputs the holographic images or diffraction patterns obtained at the one or more illumination wavelengths through a first trained deep neural network to output reconstructed amplitude and phase images of each aerosol particle, and inputs the reconstructed amplitude and phase images of each aerosol particle into a second trained deep neural network that outputs one or more of the following for each aerosol particle: a classification or label of the type of aerosol particle, a classification or label of the species of the aerosol particle, a size of the aerosol particle, a shape of the aerosol particle, a thickness of the aerosol particle, and a spatial feature of the particle.
The air sampler assembly 22 contains an image sensor 24 (seen in
The air sampler assembly 22 further includes an impaction nozzle 30 (seen in
The optically transparent substrate 34 is located immediately adjacent to the image sensor 24. That is to say, in some embodiments the airstream-facing surface of the optically transparent substrate 34 is located less than about 10 mm, and in other embodiments less than about 5 mm, from the active surface of the image sensor 24. In other embodiments, the airstream-facing surface of the optically transparent substrate 34 is located less than 4 mm, 3 mm, or 2 mm away, and in a preferred embodiment, less than 1 mm. In one embodiment, the optically transparent substrate 34 is placed directly on the surface of the image sensor 24 to create a distance of around 400 μm between the particle-containing surface of the optically transparent substrate 34 and the active surface of the image sensor 24. The particle-containing surface of the optically transparent substrate 34 is also located close to the impaction nozzle 30, for example, around 800 μm away in one embodiment. Of course, other distances could be used provided that holographic images and/or diffraction patterns of captured particles 100 can still be obtained with the image sensor 24.
Referring to
The lens-free microscope device 10 includes one or more processors 50 (
The one or more processors 50, the one or more illumination sources 40, and the vacuum pump 14 are powered by an on-board battery 54 as seen in
The image processing software 66 can be implemented in any number of software packages and platforms (e.g., Python, TensorFlow, MATLAB, C++, and the like). A first trained deep neural network 70 is executed by the image processing software 66 and is used to output or generate reconstructed amplitude and phase images of each aerosol particle 100 that were illuminated by the one or more illumination sources 40. As seen in
The classification or label output 110 that is generated for each aerosol particle 100 may include the type of particle 100. Examples of different "types" that may be classified using the second trained deep neural network 72 may, in some embodiments, include higher-level classification types such as whether the particle 100 is organic or inorganic. Additional types contemplated by the "type" that is output by the second trained deep neural network 72 may include whether the particle 100 is of plant or animal origin. Additional examples of "types" that can be classified include a generic type for the particle 100. Exemplary types that can be output for the particles 100 include pollen, mold/fungi, bacteria, viruses, dust, or dirt. In other embodiments, the second trained deep neural network 72 outputs even more specific type information for the particles 100. For example, rather than merely identifying a particle 100 as pollen, the second trained deep neural network 72 may output the exact source or species of the pollen (e.g., Bermuda grass pollen, oak tree pollen, ragweed pollen). The same is true for other particle types (e.g., Aspergillus spores, Alternaria spores).
The second trained deep neural network 72 may also output other information or parameter(s) for each of the particles 100. This information may include a label or other indicia that is associated with each particle 100 (e.g., appended to each identified particle 100). This other information or parameter(s) beyond particle classification data (type or species) may include a size of the aerosol particles (e.g., mean or average diameter or other dimension), a shape of the aerosol particle (e.g., circular, oblong, irregular, or the like), a thickness of the aerosol particle, and a spatial feature of the particle (e.g., maximum intensity, minimum intensity, average intensity, area, maximum phase).
The image processing software 66 may be broken into one or more components or modules with, for example, reconstruction being performed by one module (that runs the first trained deep neural network 70) and the deep learning classification being performed by another module (that runs the second trained deep neural network 72). The computing device 52 may include a local computing device 52 that is co-located with the lens-free microscope device 10. An example of a local computing device 52 may include a personal computer, laptop, tablet PC, or the like. Alternatively, the computing device 52 may include a remote computing device 52 such as a server or the like. In the latter instance, image files obtained from the image sensor 24 may be transmitted to the remote computing device 52 using a Wi-Fi or Ethernet connection. Alternatively, image files may first be transferred to a portable electronic device 62, which then relays or re-transmits them to the remote computing device 52 using the wireless functionality of the portable electronic device 62 (e.g., Wi-Fi or a proprietary mobile phone network). The portable electronic device 62 may include, for example, a mobile phone (e.g., Smartphone), a tablet PC, or an iPad®. In one embodiment, the portable electronic device 62 may include an application or "app" 64 thereon that is used to interface with the lens-free microscope device 10 and to display and interact with data obtained during testing. For example, the application 64 of the portable electronic device 62 may be used to control various operations of the lens-free microscope device 10. This may include controlling the vacuum pump 14, capturing image sequences, and displaying the results (e.g., display of images of the particles 100 and classification results 110 for the particles 100).
Results and Discussion
Quantification of Spatial Resolution and Field-of-View
A USAF-1951 resolution test target is used to quantify the spatial resolution of the device 10.
In the design of the tested device 10, the image sensor 24 (i.e., image sensor chip) has an active area of 3.674 mm × 2.760 mm = 10.14 mm², which would normally be the sample FOV for a lens-less on-chip microscope. However, the imaging FOV is smaller than this because the sampled aerosol particles 100 deposit directly below the impaction nozzle 30; thus, the active FOV of the mobile device 10 is defined by the overlapping area of the image sensor 24 and the impaction nozzle 30, which results in an effective FOV of 3.674 mm × 1.1 mm = 4.04 mm². This FOV can be further increased up to the active area of the image sensor 24 by customizing the impactor design with a larger nozzle 30 width.
Label-Free Bio-Aerosol Image Reconstruction
For each bio-aerosol measurement, two holograms are taken (before and after sampling the air) by the mobile device 10, and their per-pixel difference is calculated, forming a differential hologram as described below. This differential hologram is numerically back-propagated in free space by an axial distance of ˜750 μm to roughly reach the object plane at the sampling surface of the transparent substrate 34. This axial propagation distance does not need to be precisely known; in fact, all the aerosol particles 100 within this back-propagated image are automatically autofocused and phase recovered at the same time using the first deep neural network 70, which was trained with out-of-focus holograms of particles (within ±100 μm of their corresponding axial position) to extend the depth-of-field (DOF) of the reconstructions (see e.g.,
To illustrate the reconstruction performance of this method,
The neural network outputs (
Bio-Aerosol Image Classification
A separate trained deep neural network 72 (e.g., convolutional neural network (CNN)) is used that takes a cropped ROI (after the image reconstruction and auto-focusing step detailed earlier) and automatically assigns one of the six class labels for each detected aerosol particle 100 (see
[Table 1: per-class classification precision and recall for the six aerosol classes, including Alternaria and Aspergillus; full table not reproduced.]
As shown in Table 1, an average precision of ˜94.0% and an average recall of ˜93.5% are achieved for the six labels using this trained classification deep neural network 72, for a total of 1,391 test particles 100 that were imaged by the device 10. In Table 1, the classification performance of the mobile device 10 is relatively lower for Aspergillus spores compared to other classes. This is due to the fact that (1) Aspergillus spores are smaller in size (˜4 μm), so their fine features may not be well-revealed at the current imaging system resolution, and (2) the Aspergillus spores sometimes cluster and may exhibit a different shape compared to an isolated spore (for which the network 72 was trained). In addition, the background dust images used in this testing were captured along major roads with traffic. Although these should contain mostly non-biological aerosol particles 100, there is a finite chance that a few bio-aerosol particles 100 may also be present in the data set, leading to mislabeling.
Table 1 also compares the performance of two other classification methods on the same data set, namely AlexNet and a support vector machine (SVM). AlexNet, although it has more trainable parameters in its network design (because of the larger fully connected layers), performs ˜1.8% worse in precision and ˜1.2% worse in recall compared to the CNN 72 described herein. The SVM, although very fast to compute, performs significantly worse than the CNN models, reaching only 78.1% precision and 73.2% recall on average for the testing set.
Bio-Aerosol Mixture Experiments
To further quantify the label-free sensing performance of the device 10, two additional sets of experiments were undertaken: one with a mixture of the three pollens, and another with a mixture of the two mold spores. In addition, in each experiment, dust particles (background PM) other than the pollens and mold spores were unavoidably introduced into the device 10 and were sampled and imaged on the detection substrate 34.
To quantify the performance of the device 10, the sampled sticky substrate 34 in each experiment was also examined (after lens-less imaging) under a scanning microscope with 40× magnification, where the corresponding FOV that was analyzed by the mobile device 10 was scanned, and the captured bio-aerosol particles 100 inside each FOV were manually labeled and counted by a microbiologist (for comparison purposes). The results of this comparison are shown in
To further quantify detection accuracy,
Field Sensing of Oak Tree Pollens
Oak pollens were also detected in the field using the mobile device 10. In the Spring of 2018, the device 10 was used to measure bio-aerosol particles 100 in air close to a line of four oak trees (Quercus virginiana) at the University of California, Los Angeles campus. A three-minute air sample was taken close to these trees at a pumping rate of 13 L/min, as illustrated in
The entire FOV was also evaluated to screen for false negative detections of oak tree pollen particles 100. Of all the detected bio-aerosol particles 100, the CNN 72 missed only one cluster of oak tree pollens 100 within the FOV, as marked by a rectangle R in
The mobile bio-aerosol sensing device 10 is hand-held, cost-effective and accurate. It can be used to build a wide-coverage, automated bio-aerosol monitoring network in a cost-effective and scalable manner, which can rapidly provide accurate spatio-temporal mapping of bio-aerosol particle 100 concentrations. The device 10 may be controlled wirelessly and can potentially be carried by unmanned vehicles such as drones to access bio-aerosol monitoring sites that may be dangerous for human inspectors.
Methods
Computational-Imaging-Based Bio-Aerosol Monitoring
To perform label-free sensing of bio-aerosol particles 100, a computational air quality monitor based on lens-less microscopy was developed.
A driver chip (TLC5941NT, Texas Instruments) controls the current of the illumination VCSEL 40 at its threshold (3 mA), which provides adequate coherence without introducing speckle noise. An illumination wavelength of 850 nm is specifically chosen so that all four Bayer channels of the color CMOS image sensor 24 can be used: all four Bayer channels have equal transmission at this wavelength, making the sensor function like a monochrome sensor for holographic imaging purposes (see
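As an illustration of this monochrome treatment, the following is a minimal sketch assuming a standard RGGB Bayer mosaic delivered as a NumPy array; the balance-and-average strategy for combining the four channels is an assumption, since the text does not specify how the equalized channels are merged.

```python
import numpy as np

def bayer_to_mono(raw):
    """Treat a color (RGGB) sensor frame as monochrome at 850 nm (sketch).

    Because all four Bayer channels transmit equally at 850 nm, balancing
    and averaging them yields a hologram free of mosaic artifacts.
    """
    channels = [raw[0::2, 0::2], raw[0::2, 1::2],   # split the 2x2 mosaic
                raw[1::2, 0::2], raw[1::2, 1::2]]   # into its four channels
    channels = [c.astype(np.float64) / c.mean() for c in channels]  # balance
    return np.mean(channels, axis=0)                # average to one channel
```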
Simultaneous Autofocusing and Phase Recovery of Bio-Aerosols Using Deep Learning
To simultaneously perform digital autofocusing and phase recovery for each individual aerosol particle 100, a CNN-based trained deep neural network 70, built using TensorFlow, was used. This CNN 70 is trained with pairs of defocused back-propagated holograms and their corresponding in-focus, phase-recovered images (ground truth, GT images). These phase-recovered GT images are generated using a multi-height phase recovery algorithm with eight hologram measurements at different sample-to-sensor distances. After its training, the CNN-based trained deep neural network 70 can perform autofocusing and phase recovery for each individual aerosol particle 100 in the imaging FOV, all in parallel (up to a defocus distance of ±100 μm), and rapidly generates a phase-recovered, extended-DOF reconstruction of the image FOV (
Aerosol Detection Algorithm
A multi-scale spot detection algorithm similar to that disclosed in Olivo-Marin, et al., Extraction of Spots in Biological Images Using Multiscale Products, Pattern Recognition 2002, 35, 1989-1996 (incorporated by reference herein) was used to detect and extract each aerosol ROI. This algorithm computes six levels of high-pass filtered versions of the complex amplitude image, obtained as the differences between the original image and blurred images filtered by six different kernels. These high-passed images are per-pixel multiplied with each other to obtain a correlation image. A binary mask is then obtained by thresholding this correlation image at the mean of the correlation image plus three times its standard deviation. This binary mask is dilated by 11 pixels, and the connected components are used to estimate a circle with the centroid and radius of each one of the detected spots, which marks the location and rough size of each detected bio-aerosol. To avoid multiple detections of the same aerosol, a non-maximum suppression criterion is applied: if an estimated circle has more than 10% of overlapping area with another circle, only the bigger one is considered/counted. 256×256 pixel ROIs centered on the resulting centroids (71acropped, 71bcropped) are then cropped and fed into the bio-aerosol classification CNN 72. This detection algorithm takes <5 s for the whole FOV, and achieves better performance compared to conventional circle detection algorithms such as the circular Hough transform, achieving 98.4% detection precision and 92.5% detection recall, as detailed in
Deep Learning-Based Classification of Bio-Aerosols
The classification CNN architecture of the second trained deep neural network 72 is shown in the zoomed-in part of
x′k = xk + ReLU[CONVk1(xk)]

xk+1 = MAX(x′k + ReLU[CONVk2(x′k)])
where ReLU stands for the rectified linear unit operation, CONV stands for the convolution operator (including the bias terms), and MAX stands for the max-pooling operator. The subscripts k1 and k2 denote the number of channels in the corresponding convolution layer, where k1 equals the number of input channels and k2 doubles the number of channels, i.e., k1 = 16, 32, 64, 128, 256 and k2 = 32, 64, 128, 256, 512 for each residual block (k = 1, 2, 3, 4, 5). Zero padding is used on the tensor x′k to compensate for the mismatch between the number of input and output channels. All the convolutional layers use a convolutional kernel of 3×3 pixels, a stride of one pixel, and a replicate-padding of one pixel. All the max-pooling layers use a kernel of two pixels and a stride of two pixels, which reduces the width and height of the image by half.
Following the residual blocks, an average pooling layer reduces the width and height of the tensor to one, which is followed by a fully-connected (FC) layer of size 512×512. Dropout with 0.5 probability is used on this FC layer to increase performance and prevent overfitting. Another fully connected layer of size 512×6 maps the 512 channels to 6 class scores (output labels) for final determination of the class of each bio-aerosol particle 100 that is imaged by the device 10. Of course, additional classes beyond these six (6) may be used.
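A minimal sketch of this architecture in TensorFlow/Keras follows. The two-channel complex-valued input and the initial stem convolution that maps the input to 16 channels are assumptions (the text does not specify them), and Keras' zero "same" padding is used in place of the replicate-padding described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, k1, k2):
    # x'_k = x_k + ReLU[CONV_k1(x_k)]
    xp = layers.Add()([x, layers.Conv2D(k1, 3, padding="same", activation="relu")(x)])
    # x_{k+1} = MAX(x'_k + ReLU[CONV_k2(x'_k)]); zero-pad x'_k along the
    # channel axis (k1 -> k2) to match the doubled channel count
    y = layers.Conv2D(k2, 3, padding="same", activation="relu")(xp)
    pad = layers.Lambda(
        lambda t: tf.pad(t, [[0, 0], [0, 0], [0, 0], [0, k2 - k1]]))(xp)
    return layers.MaxPool2D(2)(layers.Add()([pad, y]))

inp = layers.Input((256, 256, 2))  # assumed: real/imaginary input channels
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)  # assumed stem
for k1, k2 in [(16, 32), (32, 64), (64, 128), (128, 256), (256, 512)]:
    x = residual_block(x, k1, k2)        # 256 -> 128 -> 64 -> 32 -> 16 -> 8
x = layers.GlobalAveragePooling2D()(x)   # average pool width/height to one
x = layers.Dense(512, activation="relu")(x)  # 512x512 fully connected layer
x = layers.Dropout(0.5)(x)                   # dropout with 0.5 probability
out = layers.Dense(6)(x)                     # 512x6 layer -> six class scores
model = tf.keras.Model(inp, out)
```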
During training, the network minimizes the soft-max cross-entropy loss between the true label and the output scores:

Loss = −Σi log[exp(fyi(xi)) / Σj exp(fj(xi))]
where fj(xi) is the class score for class j given input data xi, and yi is the corresponding true class for xi. The dataset contains ˜1,500 individual 256×256 pixel ROIs for each of the six classes, totaling ˜10,000 images. 70% of the data for each class is randomly selected for training, and the remaining images are equally divided into validation and testing sets. The training takes ˜2 h for 200 epochs. The best trained model is selected as the one that gives the lowest soft-max loss for the validation set within the 200 training epochs. The testing takes <0.02 s for each 256×256 pixel ROI. For a typical FOV with, e.g., ˜500 particles, this step is completed in ˜10 s.
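Continuing the sketch above, the training configuration might look as follows; the optimizer choice is an assumption (the text does not specify one), and x_train/y_train, x_val/y_val are assumed to hold the ROIs and integer labels after the 70/15/15 split described above.

```python
# Soft-max cross-entropy on raw class scores (logits), as in the loss above.
model.compile(
    optimizer="adam",  # assumed optimizer; not specified in the text
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# Keep the epoch with the lowest validation loss within 200 epochs.
best = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_loss", save_best_only=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=200, callbacks=[best])
```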
Shade Correction and Differential Holographic Imaging
A shade correction algorithm is used to correct the non-uniform illumination background and related shades observed in the acquired holograms. For each of the four Bayer channels, the custom-designed algorithm performs a wavelet transform (using an order-eight symlet) on each holographic image, extracts the sixth-level approximation as the background shade, and divides the holographic image by this background shade, correcting the non-uniform background-induced shade, balancing the four Bayer channels, and centering the background at one. For each air sample, two holograms are taken (before and after sampling the captured aerosols) to perform differential imaging, where the difference hologram only reveals the newly captured aerosols on the sticky coverslip. Running in MATLAB 2018a using GPU-based processing, this part is completed in <1 s for the entire image FOV.
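A minimal sketch of this shade-correction step for a single Bayer channel follows, using the PyWavelets package as a stand-in for the original MATLAB implementation.

```python
import numpy as np
import pywt  # PyWavelets; the original implementation was in MATLAB

def shade_correct(channel):
    """Divide out the sixth-level sym8 approximation as background shade."""
    coeffs = pywt.wavedec2(channel.astype(np.float64), "sym8", level=6)
    # Zero all detail bands, keeping only the level-6 approximation.
    shade_coeffs = [coeffs[0]] + [
        tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]]
    shade = pywt.waverec2(shade_coeffs, "sym8")
    shade = shade[: channel.shape[0], : channel.shape[1]]  # waverec2 may pad
    return channel / shade   # background is now centered at one
```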
Digital Holographic Reconstruction of Differential Holograms
The complex-valued bio-aerosol images o(x, y) (containing both amplitude and phase information) are reconstructed from their differential holograms I(x, y) using free-space digital backpropagation, i.e.,
ASP[I(x, y); λ, n, −z2] = 1 + o(x, y) + t(x, y) + s(x, y)    (7)
where λ = 850 nm is the illumination wavelength, n = 1.5 is the refractive index of the medium between the sample and the image sensor planes, and z2 = 750 μm is the approximate distance between the sample and image sensor. The ASP[·] operator is the angular spectrum based free-space propagation, which can be calculated by taking the spatial Fourier transform of the input signal using a fast Fourier transform (FFT) and then multiplying it by the angular spectrum filter H(vx, vy) (defined over the spatial frequency variables vx, vy), i.e.,

H(vx, vy) = exp[j(2πnz/λ)(1 − (λvx/n)² − (λvy/n)²)^(1/2)] for (λvx/n)² + (λvy/n)² ≤ 1, and H(vx, vy) = 0 otherwise,    (8)
which is then followed by an inverse Fourier transform. In equation (7), direct back-propagation of the hologram intensity yields two additional noise terms: the twin image t(x, y) and the self-interference noise s(x, y). To obtain a clean reconstruction, free from such artifacts, these terms can be removed using phase recovery methods. In the reconstruction process, the exact axial distance between the sample and the sensor planes may differ from 750 μm due to, e.g., the unevenness of the sampling substrate or simply due to mechanical repeatability problems in the cost-effective mobile device 10. Therefore, some particles 100 might appear out-of-focus after this propagation step. Such potential problems are solved simultaneously using a CNN-based reconstruction that is trained using out-of-focus holograms spanning a defocus range of ±100 μm; as a result, each bio-aerosol particle 100 is locally autofocused and phase-recovered in parallel.
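A minimal NumPy sketch of the ASP[·] operator described above follows; the pixel pitch dx is an assumed value (the text does not specify it), and evanescent spatial frequencies are masked to zero per the filter definition.

```python
import numpy as np

def asp(field, wavelength=0.85e-6, n=1.5, z=-750e-6, dx=1.12e-6):
    """Angular spectrum propagation ASP[field; lambda, n, z] (sketch)."""
    ny, nx = field.shape
    vx = np.fft.fftfreq(nx, d=dx)            # spatial frequencies (cycles/m)
    vy = np.fft.fftfreq(ny, d=dx)
    VX, VY = np.meshgrid(vx, vy)
    arg = (n / wavelength) ** 2 - VX ** 2 - VY ** 2   # propagating if > 0
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0))),
                 0)                           # zero out evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Usage: back-propagate a differential hologram by ~750 um (z negative).
# recon = asp(differential_hologram)
```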
Comparison of Deep Learning Classification Results Against SVM and AlexNet
The classification precision and recall of the convolutional neural network (CNN) based bio-aerosol sensing method are compared against two other existing classification algorithms, i.e., support vector machine (SVM) and AlexNet. The results are shown in Table 1. The SVM algorithm takes the (vectorized) raw complex pixels directly as input features, using a classifier with a Gaussian kernel. AlexNet uses only two input channels, i.e., the real and imaginary parts of the holographic image (instead of RGB channels). Both the SVM and AlexNet are trained and tested on the same training, validation, and testing sets as the CNN 72 described herein, also using a similar number of epochs (˜200).
Spot Detection Algorithm for Bio-Aerosol Localization
To crop individual aerosol regions for CNN classification, a spot detection algorithm is used to detect the location of each aerosol in the reconstructed image. As summarized in
Ki = ↑2[Ki−1], i = 1, 2, . . . , N    (9)
where the initial filter K0 = Gσ,l is the original Gaussian kernel. The augmented kernels Ki are used to filter the input image N times at different levels, i = 1, 2, . . . , N, giving a sequence of smoothed images Ai. The difference of Ai−1 and Ai is computed as Wi = Ai−1 − Ai. A shrinkage operation is subsequently applied on each Wi to alleviate noise, i.e.:

W′i = Wi if |Wi| ≥ 3σ(Wi), and W′i = 0 otherwise    (10)
where σ(Wi) is the standard deviation of Wi. Then, a correlation image P is computed as the element-wise product of the W′i over i = 1, 2, . . . , N, i.e., P = W′1 · W′2 · · · W′N. A threshold operation, followed by a morphological dilation of 11 pixels, is applied on this correlation image, which results in a binary mask. The centroid and area of each connected component in this binary mask give the centroid and (area-derived) radius of each detected aerosol. If two detected regions have more than 10% of overlapping area (calculated from their centroids and radii), a non-maximum suppression strategy is used to eliminate the one with the smaller radius, to avoid multiple detections of the same aerosol.
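A minimal NumPy/SciPy sketch of this spot detector follows. Gaussian filters of doubling width are used as a stand-in for the upsampled kernel cascade of equation (9), the base kernel width sigma is an assumed parameter, and the final non-maximum suppression step is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def detect_spots(recon, N=6, sigma=1.0):
    """Multiscale-product spot detection on a reconstructed field (sketch)."""
    A = np.abs(recon).astype(np.float64)     # work on the amplitude image
    A_prev = A
    P = np.ones_like(A)
    for i in range(1, N + 1):
        A_i = ndimage.gaussian_filter(A, sigma * 2 ** i)  # smoothed image Ai
        W = A_prev - A_i                                  # Wi = A(i-1) - Ai
        W = np.where(np.abs(W) >= 3 * W.std(), W, 0.0)    # shrink at 3*sigma(Wi)
        P *= W                                            # multiscale product
        A_prev = A_i
    mask = P > (P.mean() + 3 * P.std())       # threshold the correlation image
    mask = ndimage.binary_dilation(mask, iterations=11)   # dilate by 11 pixels
    labels, num = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, num + 1))
    radii = [np.sqrt((labels == k).sum() / np.pi) for k in range(1, num + 1)]
    return centroids, radii   # NMS of circles overlapping >10% would follow
```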
Bio-Aerosol Sampling Experiments in the Lab
The pollen and mold spore aerosolization and sampling experimental setup is shown in
For the sampling of mold spores, the cultured mold agar substrate is placed on a petri dish inside the aerosolization chamber and the inlet clean air blows the spores from the agar plate through the sampling system. For pollen experiments, the dried pollens are poured into a clean petri dish, which are also placed inside the aerosolization chamber.
Bio-Aerosol Sample Preparation
Natural dried pollens: ragweed pollen (Artemisia artemisiifolia), Bermuda grass pollen (Cynodon dactylon), oak tree pollen (Quercus agrifolia) used in the experiments described herein were purchased from Stallergenes Greer, (NC, USA) (cat no: 56, 2 and 195, respectively). Mold species Aspergillus niger, Aspergillus sydoneii and Alternaria sp. were provided and identified by Aerobiology Laboratory Associates, Inc. (CA, USA). Mold species were cultivated on Potato Dextrose Agar (cat no: 213300, BD™—Difco, NJ, USA) and Difco Yeast Mold Agar (cat no: 271210 BD™—Difco, NJ, USA). Agar plates were prepared according to the manufacturer's instructions. Molds were inoculated and incubated at 35° C. for up to 4 weeks for growth. Sporulation was initiated by UV light spanning from 310-400 nm (Spectronics Corporation, Spectroline™, NY, USA) with a cycle of 12 hours dark and 12 hours light. Background dust samples were acquired by the mobile device 10 in outdoor environment along major roads in Los Angeles, Calif.
Using Deep Learning in Label-Free Bio-Aerosol Sensing
Previously, a similar holographic microscopy hardware setup was used to detect particulate matter (PM), using a linear regression model to infer the particle size, without any classification capability. U.S. Patent Application Publication No. 2020/0103328 discloses such a system, which is incorporated herein by reference. Different from that previous work, here a rapid, automated and label-free sensing of bio-aerosol particles 100 is disclosed, which is a much more challenging task. Label-free sensing of bio-aerosol particles 100, especially using a portable and low-cost device, has various applications, but remains an unmet challenge. Current technologies either rely on some manual post-processing of bio-aerosols captured in the field or do not have sufficient specificity towards classification labels. Moreover, all of them require complicated and costly equipment.
To perform highly-accurate label-free detection of bio-aerosol particles 100, two deep convolutional neural networks (CNNs) 70, 72 have been developed and successfully implemented. The first CNN 70 reconstructs the microscopic images of bio-aerosol particles 100 from in-line holograms with simultaneous auto-focusing and phase recovery capability. The second CNN 72 performs classification of the captured bio-aerosol particles 100 and achieved a >94% classification accuracy in experiments. In comparison, a support vector machine-based classification achieved only 78.1% precision and 73.2% recall on the same image dataset (see Table 1), which clearly illustrates the importance of using a deep CNN.
Comparison of Current System to Earlier Learning-Based Bio-Aerosol Detection Methods
Some of the earlier bio-aerosol detection systems used the auto-fluorescence signal of bio-aerosols flowing through a tapered air channel. Several machine-learning algorithms, including clustering, decision trees, support vector machines, boosting, and fully connected neural networks have been investigated for classification of bio-aerosols using auto-fluorescence information (and scattering information in some cases). However, compared to these earlier methods, the current approach has several major advantages.
First, measuring auto-fluorescence (and/or scattering) of bio-aerosols 100 gives only indirect and limited information on the morphology of bio-aerosols 100. In comparison, the method described herein uses lens-less digital holographic microscopy and deep-learning to reconstruct detailed microscopic images of bio-aerosol particles 100 with sub-micron spatial resolution. These reconstructed images 71a, 71p include detailed morphological information (in phase and amplitude channels), which provides a direct measure of the captured bio-aerosols 100 and is very useful for highly-accurate and automated classification of bio-aerosols 100. It also provides microscopic images of all the captured particles 100 for experts to manually analyze the samples if, for example, an unknown bio-aerosol is encountered.
Second, compared to conventional machine learning tools employed in these previous publications, the current method uses, in a preferred embodiment, two trained deep neural networks 70, 72 (CNNs): a first trained deep neural network 70 for reconstructing phase and amplitude images 71p, 71a of the captured bio-aerosol particles 100, and a second trained deep neural network 72 for automatic classification of the particles 100 in the reconstructed images. Deep CNNs typically perform much better than conventional machine learning algorithms in image classification; due to parameter sharing, a CNN uses fewer trainable parameters than, e.g., a fully connected network of the same size, and thus is less likely to overfit to the training data. Also, as the network gets deeper, the CNN performance improves significantly. Moreover, due to the convolutional nature of a CNN, it detects objects of interest more robustly, regardless of their relative displacements within the reconstructed image.
Third, some of these earlier devices use a tapered air channel, where the particles flow through a tapered nozzle and are analyzed individually (i.e., one by one). This serial readout design limits the sampling rate to ˜1.5 L/min. Accurate measurements of either too high or too low concentrations of aerosols are challenging for such designs. In comparison, the device 10 captures a single wide field-of-view hologram, in which hundreds of bio-aerosol particles 100 can be reconstructed and rapidly analyzed in parallel. Therefore, the current device 10 reaches a high sampling rate of 13 L/min and can accommodate a larger dynamic range of aerosol concentrations.
Lastly, earlier designs that are based on auto-fluorescence require strong UV or pulsed laser sources, sensitive photo-detectors, and high-performance optical components, which make the system relatively costly and bulky. In contrast, the device 10 described herein only uses a partially coherent light source 40 (e.g., a laser diode) and an image sensor 24, which requires minimal alignment. Thus, the device 10 is quite inexpensive (<$200 in its current low volume production), and light-weight (<600 g). The portability of the device 10 is very favorable in field testing applications.
Image Acquisition and Data Processing Time
The mobile bio-aerosol detection device 10 samples air at 13 L/min and screens bio-aerosols 100 captured on a transparent impactor substrate 34. Typically, 1-3 min of sampling is used to aggregate a statistically significant number of bio-aerosols 100 on the substrate 34, and holographic images are recorded immediately before and after this sampling. Currently, the image data are saved to and transferred from a USB drive that is attached to the device 10. However, the device 10 can also be programmed to connect directly to a remote server or other computing device 52 (e.g., a local PC) to transfer data wirelessly. It was found that, during the impaction-based sampling, a large pollen particle 100 occasionally deforms the sticky substrate when it impacts; this deformation acts as a lens and distorts the reconstructed image of the pollen 100. The deformation of the polymer capture surface automatically heals itself within 8-10 min after impaction. To keep the results consistent, the holographic images in pollen-related experiments and field tests were captured 15 min after sampling. By using a customized, stiffer sampling substrate and/or a sampling strategy other than impaction, this passive wait time can be eliminated or reduced significantly.
The image processing workflow, as shown in
Automated label-free sensing and classification of bio-aerosols 100 was demonstrated using a portable and cost-effective device 10, which is enabled by computational microscopy and deep-learning. Greater than 94% accuracy in automated classification of six different types of aerosols was achieved, which were selected since they are some of the most common bio-aerosol types and allergens, having a significant impact on human health.
In the experiments conducted herein, the locations of individual bio-aerosols 100 that are captured by the device 10 are extracted in a local image with a fixed window size for deep learning-based classification. This approach, while powerful in general, can also cause some classification problems when there is a bio-aerosol 100 larger than the selected window size, or when more than one type of bio-aerosol particle 100 falls into the same window (coming physically close to each other), as illustrated in
The mobile bio-aerosol sensing device 10 is based on a quantitative phase imaging approach that uses digital holography at its core. Compared to incoherent light microscopy, digital holography also records the phase information of the sample in addition to its amplitude; this phase information is extremely useful, especially for weakly-scattering objects, providing better contrast through the phase channel, and it is reconstructed for each bio-aerosol particle 100. To make better use of this additional information, increasing the spatial resolution of the mobile device 10 using, e.g., an array of illumination light sources 40 to achieve pixel super-resolution could be one option; alternatively, one can also introduce additional illumination wavelengths in the device 10, which can improve resolution and also provide additional spatial features at different parts of the optical spectrum, which might be especially useful for the classification network 72 to recognize different bio-aerosol types based on their absorption and refractive properties. Lastly, one embodiment of the device 10 relies on a disposable cartridge, which requires periodic replacement. Although this cartridge can be quickly replaced within a few seconds, the device 10 design can be further improved by using a particle sampling strategy other than impaction.
While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. The invention, therefore, should not be limited except to the following claims and their equivalents.
This Application claims priority to U.S. Provisional Patent Application No. 62/838,149 filed on Apr. 24, 2019, which is hereby incorporated by reference in its entirety. Priority is claimed pursuant to 35 U.S.C. § 119 and any other applicable statute.
This invention was made with government support under Grant Number 1533983, awarded by the National Science Foundation. The government has certain rights in the invention.