The disclosed subject matter relates to a portable multimodal optical sensing system—preferably for automated and intelligent food safety inspections.
Food safety incidents are frequently reported in the news media and often result in food recalls and a rising number of food scares for consumers, causing both short- and long-term economic losses for the food industry. Foodborne outbreaks are a major source of illness and death. The U.S. Centers for Disease Control and Prevention (CDC) estimates that each year 48 million people (roughly one in six Americans) get sick, 128,000 are hospitalized, and 3,000 die from foodborne diseases in the United States.
Foodborne pathogenic bacteria are linked to many outbreaks, and their opportunities to reach consumers are increasing as the farm-to-fork distance tends to grow in the global food supply chain. Three major bacteria, Salmonella, E. coli, and Listeria, have caused more than 90% of U.S. multistate foodborne outbreaks, as highlighted by the E. coli O157:H7 outbreak linked to romaine lettuce that affected 27 states in late 2019.
Food safety research explores methods to evaluate, control, and reduce potentially harmful substances in food, including both natural and introduced biological, chemical, and physical contaminants. In most food safety incidents, identification of the contaminant is usually the first step of the investigation, which generally relies on using suitable sensing technologies (e.g., optical, electrical, acoustical, and biological).
Optical sensing techniques are being investigated and developed for nondestructive food safety inspection based on different spectroscopy and imaging modalities across a wide range of the electromagnetic spectrum, including x-ray, ultraviolet, visible, near infrared, infrared, terahertz, fluorescence, and Raman. Despite significant progress, the high demand for safe and healthy food, strict regulations on food supply chains, and the great variety of existing and new sources of food contaminants require improved sensing technologies that achieve new levels of accuracy, speed, and intelligence.
Artificial intelligence (AI) is currently driving a new wave of innovation in many industries—including agriculture. Driven by growth in data generation (e.g., images, video, audio, and text), advances in algorithms (e.g., machine learning (ML) and deep learning (DL)), easy-to-use commercial and open-source software libraries (e.g., MATLAB, TensorFlow, and PyTorch), and increased computing power (e.g., personal computers, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), mobile devices, and cloud computing), AI technologies are evolving at a remarkable speed and are poised to revolutionize the agriculture industry.
Sensing technologies in the food and agriculture area are no exception. The food and agriculture industries are becoming a prime frontier for AI-driven development and transformation. For example, AI-based techniques are well suited to intelligent computer vision for grain crop production and to smart drying techniques for fresh foods. Deep neural networks may be used for plant phenotyping and foodborne pathogen classification.
AI-driven data analysis methods generally learn information directly from data—rather than relying on existing physical models or mathematical equations—which opens new avenues for analyzing the data collected from sensing systems. Models developed using ML and DL techniques can be used to classify and predict unknown samples, especially when the sensing signals are weak or difficult to explain using existing knowledge. For example, when classical linear chemometric methods (e.g., partial least squares (PLS)) do not suffice to extract useful information from weak spectral data of Raman or surface-enhanced Raman spectroscopy (SERS) measurements, implementing ML and DL techniques instead may lead to better results.
Recently, combinations of Raman and DL techniques have been used to rapidly identify pathogenic bacteria. In addition to using AI capabilities for offline data analysis, incorporating AI capabilities directly into sensing systems (e.g., smart sensors and instruments) is attracting growing attention and is becoming established as a new developmental axis for novel sensing technologies. Pre-established ML or DL models can be deployed in the companion software of a sensing system or directly on AI-specific hardware (e.g., AI accelerators using GPUs or FPGAs) for real-life detection applications with a level of accuracy that was not previously possible.
During the past decade, the inventors developed macro-scale Raman chemical imaging (RCI) technologies for food safety and quality research to remedy the lack of related commercial integrated systems. Two point-scan RCI systems were developed using 785 nm and 1064 nm point lasers, which are mainly used for measuring low- and high-fluorescence food samples, respectively. Each point-scan system uses a whiskbroom method for hyperspectral Raman image acquisition from samples carried by a two-axis positioning stage. A more efficient line-scan RCI system using a 5 W 785 nm line laser based on a scanning mirror was developed and patented to realize high-throughput Raman imaging for large food sample inspection.
A one-axis positioning stage is used to move the samples to accumulate hyperspectral data using a pushbroom method. The line-scan system was upgraded using a 30 W 785 nm line laser based on a cylindrical lens to enhance Raman scattering signals from the samples. Dispersive Raman spectrographs are used in both point- and line-scan systems, which can all be configured to backscattering RCI mode for surface inspection and spatially offset Raman spectroscopy (SORS) mode for subsurface inspection.
The inventors' macro-scale Raman technologies have found many practical food safety and quality applications. Examples of the RCI applications include detecting chemical adulterants mixed in food powders, lycopene changes from tomatoes during ripening, bacteria-infected watermelon seeds, veterinary drugs in pork, and bones in fish fillets. Examples of the SORS applications include nondestructive evaluation of internal maturity for tomatoes, detection of gelatin-encapsulated powders, and through-package inspection of butter for adulteration.
The 785 nm point-scan system was also configured to implement gradient temperature Raman spectroscopy (GTRS), which is a patented technique that applies the precise temperature gradients used in differential scanning calorimetry (DSC) to Raman spectroscopy. Commercial integrated Raman systems, such as FT-Raman spectrometers, Raman microscopes, and Raman microplate readers, are used in many research laboratories to provide solutions for well-defined applications. However, these systems are usually bulky and expensive, and they are neither flexible nor versatile enough to conduct spectroscopy and imaging experiments on food and agricultural products.
The need exists for compact automated sensing devices and methods for quick and routine measurement and analysis of chemical and biological content of sample materials. The system described herein comprises a new portable multimodal optical sensing system with embedded AI capabilities based on dual-band laser dispersive Raman techniques for automated and intelligent food safety inspection.
The system described herein was designed using modular hardware components (e.g., lasers, spectrometers, lights, cameras, and a sample handling unit) that can be customized and optimized for a broad range of food safety applications. The current instrument configuration offers more flexibility and versatility than existing commercial Raman systems on the market.
This disclosure is directed to a portable multimodal optical sensing system. The system includes a sample holder connected to an XY moving base. The sample holder holds at least one analyte sample. The system further comprises at least two separate cameras, at least two separate Raman laser excitation sources, and multiple illumination sources, including at least a UV light and a sample backlight. A portable base and housing at least partially encloses the XY moving base, the cameras, the Raman excitation sources, and the illumination sources. A computer/processor (which is preferably external to the portable base and housing) controls and communicates with the XY moving base, the cameras, the Raman excitation sources, and the illumination sources.
The system is structured so that the computer/processor directs the XY moving stage to enable at least one of the cameras to scan an analyte sample so that the computer/processor determines an analysis protocol for analyzing the analyte sample. The XY moving stage is configured to move in accordance with the analysis protocol so that the at least one analyte sample is analyzed to determine whether a bacterial or a chemical contaminant is present in each analyzed analyte sample. If a bacterial or chemical contaminant is present, the computer/processor identifies the bacterial or chemical contaminant.
The patent or application file associated with this disclosure contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Note that assemblies/systems in some of the FIGs may contain multiple examples of essentially the same component. For simplicity and clarity, in some FIGs, only one (or a few) of the example components may be identified with a reference number. Unless otherwise specified, other non-referenced components with essentially the same structure as the exemplary component should be considered to be identified by the same reference number as the exemplary component. Also note that the images shown in the FIGs are not intended to be precisely to scale.
The current portable multimodal optical sensing system 30 is schematically illustrated in
The system 30 is designed to analyze macro-scale samples. For the purposes of this disclosure, a “macro-scale analyte sample” means a sample with a surface area of about 5 mm² or greater.
The system 30 has multiple optical sensing capabilities, including backlight, fluorescence, color, and Raman sensing for rapid and flexible chemical and biological sensing. The system 30 includes at least two separate color (red-green-blue, or “RGB”) image acquisition systems (i.e., “cameras”). The cameras may use active-pixel sensors such as complementary metal oxide semiconductor (CMOS) sensors. The cameras do not share a common aperture, and the system does not depend on software-based image alignment. For the purposes of this disclosure, the term “non-common aperture” means that the system 30 includes at least two separate cameras that can be individually directed, manually or automatically, to an analyte sample through two separate apertures (one aperture corresponding to each individual camera).
System data can be captured for any one or a combination of the following sensing methods: visible reflectance, backlight-based visible transmittance, excitation fluorescence, 785 nm Raman spectral imaging, 1064 nm Raman spectral imaging, and visible RGB color reflectance (photography). The system 30 includes a colony-counting function. In the automated analysis mode, the system selects the appropriate Raman laser wavelength for the analysis of each selected sample to optimize the signal-to-noise ratio (i.e., the 785 nm and 1064 nm lasers for low- and high-fluorescence samples, respectively).
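By way of illustration, the following is a minimal Python sketch of the kind of automated wavelength selection described above. The fluorescence metric, threshold value, and function names are assumptions for illustration only, not the system's actual decision rule.

```python
import numpy as np

FLUORESCENCE_THRESHOLD = 0.35  # hypothetical baseline-to-peak ratio

def estimate_fluorescence_ratio(spectrum: np.ndarray) -> float:
    """Rough fluorescence indicator: broad background level relative to peak."""
    peak = float(spectrum.max())
    if peak <= 0:
        return 0.0
    baseline = float(np.percentile(spectrum, 10))  # slowly varying background
    return baseline / peak

def select_laser_nm(probe_spectrum: np.ndarray) -> int:
    """Return the preferred Raman excitation wavelength (nm) for a sample."""
    if estimate_fluorescence_ratio(probe_spectrum) < FLUORESCENCE_THRESHOLD:
        return 785   # low-fluorescence sample: stronger Raman scattering
    return 1064      # high-fluorescence sample: reduced background
```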
The system 30 is designed for comprehensive automated operation. Essentially, once an operator initiates the sample analysis process, the system processor/computer 90 directs at least one of the cameras to scan each of the analyte samples present in the sample holder. Additionally or alternatively, at least one of the cameras may scan one or more of the analyte samples to count and locate targets (regions of interest) within each sample. Based on the types of samples present, the system computer/processor 90 determines (optionally using embedded AI) the components and an analysis protocol to examine the respective samples, which may include mechanically syncing camera imaging and/or Raman laser probe excitation and analysis. The samples are then analyzed using the general process described in the flow chart shown in
The system described infra is a preferred embodiment. Alternative embodiments may include variations in components, limitations, and capabilities consistent with the purpose and function of the affected system/component.
As noted supra, the system comprises at least two separate laser units 32, 34, with output wavelengths of 785 nm (laser unit 32) and 1064 nm (laser unit 34) (Innovative Photonic Solutions, Monmouth Junction, N.J.), which are used for Raman measurement of low- and high-fluorescence food samples, respectively. For both laser units 32, 34, the maximum power is 450 mW, with a bandwidth of 0.1 nm and a working distance of 7.2 mm.
Each laser unit 32, 34 is equipped with an adjustable laser component 36, 38, which includes a wavelength-stabilized laser source with Raman filter packs and beam shaping optics. Each of the adjustable laser components 36, 38 has an associated laser probe 37, 39. The Raman probes 37, 39 are used to focus the respective laser beams on the sample surface 40 and to collect scattering signals. Two bifurcated optical fiber bundles connect the two Raman laser components 36, 38 to two dispersive Raman spectrometers (QEPro spectrometer 42 for the 785 nm laser and NIRQuest spectrometer 44 for the 1064 nm laser, Ocean Insight, Orlando, Fla.), respectively, for transferring the laser light and the Raman signals. Both Raman spectrometers 42, 44 use a reflection grating to disperse the light into different wavelengths. The 785 nm spectrometer 42 uses an 18-bit back-illuminated charge-coupled device (CCD) detector with 1024 pixels to collect the dispersed light in a spectral region of 790-1020 nm. The 1064 nm spectrometer 44 uses a 16-bit InGaAs detector with 512 pixels for spectral acquisition in a wavelength range of 1070-1450 nm. An adjustable mounting clamp 35 is used to fix the two Raman laser components 36, 38, which can be vertically adjusted to accommodate different sample 40 heights.
Three LED lights, including a white backlight 46, a UV (365 nm) ring light 48, and a white ring light 50 (Advanced Illumination, Rochester, Vt.), are used to provide different illuminations for automated sampling based on machine vision measurement of the samples 40.
The underlying white backlight 46 is mounted on an XY moving stage 52, and it primarily illuminates transparent or semi-transparent samples 40 (e.g., bacterial colonies grown on agar in a Petri dish 54) for transmission image measurement. The active lighting area of the backlight 46 is 100×100 mm². The overhead UV 48 and white 50 ring lights are mainly used for fluorescence and color image measurement of non-glare samples 40, respectively. The internal and external diameters of the two identically sized ring lights 48, 50 are 35 mm and 75 mm, respectively. An LED light controller 56 with three independent outputs is used for on-off control and intensity adjustment for the three lights 46, 48, 50. Machine vision images of the samples 40 are collected by two miniature color cameras 58 and 60 with 2448×2048 pixels (Blackfly S USB3 Color, Teledyne FLIR, Wilsonville, Oreg.), each equipped with a 12 mm fixed focal length imaging lens (Edmund Optics, Barrington, N.J.). Two adapter rings are used to mount the UV 48 and white 50 ring lights on the filter threads of the lenses attached to the color cameras 58 and 60. For the color camera 58 with the UV ring light 48, a multi-band bandpass filter 49 (e.g., 538/685 nm dual-band bandpass filter or 475/543/702 nm triple-band bandpass filter, Semrock, Rochester, N.Y.) is inserted in the lens for multispectral fluorescence imaging.
A pair of direct-drive linear translation stages (DDSM100, Thorlabs, Newton, N.J.) are combined to form a two-axis moving stage 52 for positioning the samples 40 under the two Raman probes 37, 39. The movement of the stage 52 is controlled by two servo motor controllers 53 that are mounted on a USB controller hub 62, which provides power supply to the controllers 53 and USB communication to an external processor/computer 90. The XY stage 52 can move in a square area of 100×100 mm² with a maximum resolution of 0.5 μm and a maximum speed of 500 mm/s, which is ideal for high-accuracy and high-speed sample positioning for automated Raman spectral and imaging measurement. The Raman laser components 36, 38 and associated probes 37, 39, the machine vision cameras 58, 60, the lights 46, 48, 50, the XY stage 52, and the sample handling unit are housed in an aluminum-framed enclosure 64 with black foam boards (not shown) to avoid the influence of ambient light. The entire sensing system 30 is built on a 30×45 cm² optical breadboard 66, making it a compact and easily portable instrument suitable for rapid field and on-site food safety inspection.
As noted supra, for the purposes of this disclosure, the term “portable” means capable of being moved and carried onto a job site by a single average operator. In the preferred embodiment, the system is about 30 cm × 45 cm × 35 cm and weighs about 20 kg (i.e., about 12 in × 18 in × 13.75 in, and about 44 pounds).
The system 30 can be used for both manual and automated Raman sampling. Manual measurement is conducted by manually positioning the sample 40 in a Petri dish 54 or one of the well plates (for example 76) under the Raman probes 37, 39, followed by XY point-scan spectral and image acquisition in a manner similar to that of the inventors' previously developed Raman systems. On the other hand, the system 30 can carry out fully automated sampling for samples 40 randomly scattered in Petri dishes 54 or placed in fixed patterns in customized selected well plates 75 (see
For the purposes of this disclosure, a “sample spot” may also be referred to as a “region of interest” or an analyte, or, in some cases, simply a “sample”.
Significantly, the XY moving stage 70 for holding a selected well plate 75 is functionally identical to the XY moving stage 52 for holding a Petri dish assembly 55.
When sampling a selected well plate 75, the initial position of the plate on the XY stage is predetermined to align the laser probe 37, 39 with the first well holding the sample (usually at the top left corner of the plate). Along both X and Y directions of the selected well plate 75, the number of wells to be sampled and the distance between adjacent wells can be adjusted using the system software, which provides a flexible way to measure samples placed in selected well plates 75 with different numbers and sizes of wells. As a selected well plate 75 is moved by the XY stage 70, the sample in each well is exposed to a laser component 36, 38 and associated probe 37, 39. Automated sample analysis is then executed in a well-by-well manner. For each sample in the Petri dish 54 or the selected well plate 75, the number of Raman spectra to be collected (e.g., 2×2 or 100×100) and the step size used for the XY point scan of each spot (e.g., 0.1 mm or 1.0 mm) can be controlled by the system software to satisfy measurement requirements for different types of samples. The automated Raman spectral acquisition method for the Petri dish XY moving stage 52 and the selected well plate XY moving stage 70 can be adapted for other point spectroscopy techniques, such as reflectance and fluorescence, for rapid and flexible sampling.
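As a minimal sketch of this well-by-well addressing scheme, the following Python generator produces the stage targets for a plate given the number of wells, the well pitch, and the per-well point-scan parameters. The function name and the example plate geometry are hypothetical and are not taken from the system software.

```python
def well_scan_positions(n_cols, n_rows, well_pitch_mm,
                        scan_points=(2, 2), step_mm=1.0,
                        origin_mm=(0.0, 0.0)):
    """Yield (x, y) stage targets for a well-by-well XY point scan.

    origin_mm is the predetermined stage position that aligns the laser
    probe with the first (top-left) well; well_pitch_mm is the
    center-to-center distance between adjacent wells.
    """
    ox, oy = origin_mm
    nx, ny = scan_points
    for row in range(n_rows):
        for col in range(n_cols):
            wx = ox + col * well_pitch_mm   # center of the current well
            wy = oy + row * well_pitch_mm
            # small point-scan grid within the well, centered on the well
            for iy in range(ny):
                for ix in range(nx):
                    yield (wx + (ix - (nx - 1) / 2) * step_mm,
                           wy + (iy - (ny - 1) / 2) * step_mm)

# Example: a 4 x 6 plate with a hypothetical 18 mm pitch, 2 x 2 scan, 1 mm step
targets = list(well_scan_positions(6, 4, 18.0))
```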
In operation, a computer-based controller/processor exchanges operating instructions and data with the spectrometer units, imaging cameras, lighting hardware, and the XY moving stage and other instruments in the portable multimodal platform via a wired or wireless communication means. As shown in
A flow chart of the current sample analysis process is shown in
The operator also keys in information regarding whether the sample/analyte is in a Petri dish 54 or a selected well plate 75. If the analyte is in a Petri dish 54, the Petri dish (including the analyte) is moved into the field of view of a color camera (58 or 60). Once the Petri dish 54 is in position, the analyte is illuminated by an LED light (e.g., the white backlight 46, UV ring light 48, or white ring light 50) and an image is acquired by the relevant camera 58, 60.
The image is processed in real time to determine the number and position of all the sample spots in the Petri dish 54. The Petri dish 54 is then moved to align a sample spot with either the 785 nm laser probe 37 or the 1064 nm laser probe 39. Spectra are then acquired for each identified sample spot using the previously selected scan number and step size for the X and Y directions. The acquired data and (optionally) the real time identification results are displayed in the Petri dish imaging software—as shown in
The system is then queried regarding whether the currently analyzed sample spot is the last sample spot in the Petri dish 54. If other unanalyzed sample spots remain, the Petri dish 54 is repositioned to align the 785 nm laser probe 37 or the 1064 nm laser probe 39 with one of the remaining unanalyzed sample spots, and the process described supra repeats until all the sample spots have been analyzed.
After all sample spots are analyzed, the Petri dish 54 is returned to its original position and the acquired data is processed and presented in a format that is usable by the operator.
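The Petri dish workflow described above can be summarized in the following Python-style outline. This is a sketch only: the stage, camera, and spectrometer objects and the detect_spots and classify callables are hypothetical placeholders for the actual LabVIEW instrument drivers and pre-established models, which are not reproduced here.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Spot:
    stage_xy: Tuple[float, float]  # stage position aligning spot with probe

def analyze_petri_dish(stage, camera, spectrometer, detect_spots, classify,
                       scan_points=(2, 2), step_mm=1.0) -> List[object]:
    """Outline of the automated dish workflow: image, detect, scan, classify."""
    stage.move_to_camera_view()                # dish into camera field of view
    image = camera.acquire(light="backlight")  # or UV / white ring light
    spots = detect_spots(image)                # number + position of spots
    results = []
    for spot in spots:                         # spot-by-spot Raman sampling
        stage.move_to(spot.stage_xy)
        spectra = spectrometer.point_scan(scan_points, step_mm)
        results.append(classify(spectra))      # optional real-time AI model
    stage.return_to_origin()                   # restore original position
    return results
```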
With regard to the well plate 75 sample analysis process, an operator initially places a selected well plate 75 containing analyte samples onto an XY moving stage 70. The operator then enters a scan number and XY step size into the system analysis software, along with the number of wells on the well plate 75 and the distance between adjacent wells. The well plate XY moving stage 70 then moves the selected well plate 75 so that a designated first well aligns with either the 785 nm laser probe 37 or the 1064 nm laser probe 39. Spectra are then acquired for each identified well using the previously selected scan number and step size for the X and Y directions. The acquired data and the real time identification results are displayed in the well plate sampling software—as shown in
The system is then queried regarding whether the currently analyzed sample well is the last well on the well plate 75. If other unanalyzed sample wells remain, the well plate 75 is repositioned to align the 785 nm laser probe 37 or the 1064 nm laser probe 39 with one of the remaining unanalyzed sample wells, and the process described supra repeats until all the wells in the well plate have been analyzed.
After all the wells are analyzed, the well plate 75 is returned to its original position and the acquired data is processed and presented in a format that is usable by the operator.
Software for the multimodal sensing system was developed using LabVIEW (v2017, National Instruments, Austin, Tex.) in the Microsoft Windows operating system on a laptop computer. An exemplary color display format associated with the Petri dish embodiment is shown in
The saved data can be analyzed offline using commercial software packages such as ENVI (Harris Geospatial Solutions, Broomfield, Colo.) or in-house programs developed in MATLAB (MathWorks, Natick, Mass.). Meanwhile, real-time image and spectral processing and artificial intelligence functions are integrated into the system software, which can be used to identify and label interesting targets in samples using pre-established classification models.
Spectral calibrations for 785 and 1064 nm Raman spectrometers were conducted using two granular chemicals (polystyrene and naphthalene) as Raman shift standards. When excited by a laser with a fixed wavelength, the chemical standards can generate spectral peaks with known relative Raman shift positions (i.e., wavenumbers), which can be used to calibrate the Raman spectroscopy and imaging systems. For each of the two chemical standards, a pure sample was placed in a Petri dish with a diameter of 47 mm. For each chemical, a 4×4 point scan with a step size of 1 mm for both X and Y directions was performed on the Petri dish sample using both 785 and 1064 nm laser-spectrometer combinations.
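The wavelength-to-wavenumber relationship underlying this calibration is standard, and a minimal Python sketch is given below: known standard peak shifts anchor a pixel-to-Raman-shift calibration. The polystyrene reference shifts are nominal ASTM E1840 values; the measured pixel positions are hypothetical placeholders, not data from the instrument.

```python
import numpy as np

def raman_shift_cm1(wavelength_nm: float, laser_nm: float) -> float:
    """Relative Raman shift (cm^-1) of light detected at wavelength_nm."""
    return 1e7 / laser_nm - 1e7 / wavelength_nm

def shift_to_wavelength_nm(shift_cm1: float, laser_nm: float) -> float:
    """Detector wavelength at which a band of known shift should appear."""
    return 1e7 / (1e7 / laser_nm - shift_cm1)

# Polystyrene's strong ring-breathing band (~1001.4 cm^-1, nominal ASTM
# E1840 value) should appear near 852 nm under 785 nm excitation:
print(shift_to_wavelength_nm(1001.4, 785.0))  # ~852.0 nm

# Fit a pixel-to-shift polynomial from measured peak positions of the
# standard. The pixel values below are hypothetical placeholders.
measured_pixels = np.array([118.0, 402.0, 425.0, 843.0])
reference_shifts = np.array([620.9, 1001.4, 1031.8, 1602.3])  # cm^-1
coeffs = np.polyfit(measured_pixels, reference_shifts, deg=2)
shift_axis = np.polyval(coeffs, np.arange(1024))  # 1024-pixel CCD axis
```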
Spatial calibration was carried out for the machine vision cameras to translate the pixel unit in acquired images into the real-world length unit that can be used to drive the XY moving stage 52. Using a working distance (from lens to sample surface) of 156 mm and illumination of the UV ring light 48, an image was collected from a piece of white paper printed with a 5 mm square grid that was placed in a 90 mm Petri dish. The original image of 2448×2048 pixels was cropped to 2048×2048 pixels to reduce the unnecessary margin area. The resulting grid is shown in
The pixel-to-stage mapping is linear in each axis:

DX = PX × RX + CX
DY = PY × RY + CY

where D is the physical position in mm, P is the image coordinate in pixels, R is the spatial resolution in mm/pixel, C is the calibration distance in mm, and subscripts X and Y denote the X and Y directions of the image and the moving stage 52. The calibration distances (CX and CY) were determined by aligning the 785 nm laser point with round black dots (4 mm diameter and 10 mm center-to-center distance) on a piece of calibration grid paper. Under the current setup, CX and CY were determined to be 4.5 and 97.7 mm, respectively (shown under “Spatial Calibration” in the system software in
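By way of illustration, the short Python sketch below applies the linear mapping above to convert a detected mass-center pixel coordinate into a stage target. Only the reported calibration distances (4.5 mm and 97.7 mm) are used as defaults; the resolution values RX and RY are left as parameters with a hypothetical example value, since the exact values from the 5 mm grid calibration are not reproduced here.

```python
def pixel_to_stage_mm(px: float, py: float,
                      rx: float, ry: float,
                      cx: float = 4.5, cy: float = 97.7):
    """Translate an image coordinate (pixels) into a physical stage
    position (mm) via D = P * R + C for each axis."""
    return px * rx + cx, py * ry + cy

# Example with a hypothetical resolution of 0.05 mm/pixel on both axes:
# a mass center at pixel (1024, 1024) maps to about (55.7, 148.9) mm.
x_mm, y_mm = pixel_to_stage_mm(1024, 1024, rx=0.05, ry=0.05)
```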
The developed system 30 is intended to be used as a general tool for biological and chemical food safety inspection in regulatory and industrial applications. One primary application is rapid, automated, and intelligent identification of foodborne pathogens for regulatory purposes. The capability of the system 30 was demonstrated by an application for identifying common foodborne bacteria prepared using culture-based method.
The grayscale images were then converted to binary images using a local adaptive thresholding method. Specifically, the “Background Correction” option in LabVIEW's IMAQ Local Threshold function was used to eliminate non-uniform background (
For example, the pixel coordinates (PX and PY) for the mass center of one E. coli colony in the upper left of the Petri dish (
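A rough open-source analogue of this image-processing step is sketched below, assuming OpenCV rather than the LabVIEW IMAQ toolkit actually used: local adaptive thresholding (tolerant of non-uniform background) followed by connected-component analysis to return colony mass centers. The block size, offset, and minimum area are illustrative values only.

```python
import cv2
import numpy as np

def find_colony_centers(image_bgr: np.ndarray, min_area_px: int = 50):
    """Locate colony mass centers (PX, PY) in a backlit Petri dish image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Local mean thresholding compensates for uneven illumination;
    # INV makes the darker (light-blocking) colonies white in the mask.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY_INV, blockSize=51, C=5)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    centers = []
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area_px:
            centers.append(tuple(centroids[i]))  # (PX, PY) mass center
    return centers
```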
To explore the potential of the system 30 for identification of foodborne bacteria, five common bacterial species were prepared for a demonstration experiment. The species included Bacillus cereus from the American Type Culture Collection (Manassas, Va.), as well as E. coli, Listeria monocytogenes, Staphylococcus aureus, and Salmonella spp. from the culture collection of the USDA/ARS Environmental Microbial and Food Safety Laboratory. Isolates from the five bacterial species were grown on the same nonselective nutrient agar (BBL, BD, Franklin Lakes, N.J.) in 90 mm Petri dishes.
For each species, one patched agar plate was used for automated spectral measurement from all the individual colonies in the Petri dish. Using the 785 nm laser and an exposure time of 1.0 s for the 785 nm spectrometer, a 2×2 point scan with a step size of 1.0 mm was performed for each colony, and a mean spectrum was calculated and then normalized at the maximum intensity to minimize the effect of background intensity fluctuation. A total of 222 colonies from five Petri dishes were sampled, and their normalized spectra were used for classification.
In addition, fluorescence baseline and corrected Raman spectra were obtained from the individual normalized spectra by a baseline correction method using adaptive iteratively reweighted penalized least squares (airPLS); these were also used for the classifications so that their performance could be compared with that of the original Raman spectra. The mean spectra of the original Raman, fluorescence baseline, and corrected Raman data for each species are plotted in
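For readers unfamiliar with this baseline-correction step, the following is a compact Python sketch of the airPLS idea referenced above: a smooth baseline is fit by penalized least squares, and points rising above the fit (Raman peaks) are progressively down-weighted. The smoothness parameter, iteration count, and stopping criterion are illustrative; the inventors' actual processing parameters are not reproduced here.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def airpls_baseline(y: np.ndarray, lam: float = 1e5, max_iter: int = 15):
    """Estimate the fluorescence baseline of spectrum y via airPLS."""
    n = y.size
    # second-difference penalty matrix enforcing a smooth baseline
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    H = lam * (D.T @ D)
    w = np.ones(n)
    for t in range(1, max_iter + 1):
        W = sparse.diags(w)
        z = spsolve(sparse.csc_matrix(W + H), w * y)  # weighted smooth fit
        d = y - z
        neg_sum = abs(d[d < 0].sum())
        if neg_sum < 1e-3 * np.abs(y).sum():          # converged
            break
        # points above the fit are treated as peaks and given zero weight;
        # points below get exponentially increasing weight each iteration
        w[d >= 0] = 0.0
        w[d < 0] = np.exp(t * np.abs(d[d < 0]) / neg_sum)
    return z

# usage: normalize at the maximum intensity, then subtract the baseline
# spectrum = spectrum / spectrum.max()
# corrected = spectrum - airpls_baseline(spectrum)
```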
The three types of spectral data were labeled with the five bacterial species. Each labeled dataset was input to the Classification Learner app in MATLAB (R2021b, MathWorks, Natick, Mass.), in which seven optimizable classifiers, including Naive Bayes, decision tree, ensemble, k-nearest neighbor (KNN), discriminant analysis, neural network (NN), and support vector machine (SVM), were used for the machine learning classifications. Hyperparameter values for all the models (e.g., model and algorithm parameters such as the maximum number of splits for a decision tree, the distance metric of a KNN, and the box constraint level of an SVM) were automatically selected using the hyperparameter optimization function within the app, with the goal of minimizing the classification error. Equal penalty was assigned to all misclassifications to simplify the evaluation of misclassification costs and model training and validation. A five-fold cross-validation method was used to evaluate the accuracies of the seven classification models on the three spectral datasets. Each dataset, comprising 222 spectra, was randomly partitioned into five disjoint folds. A model was trained using the out-of-fold data and its performance was evaluated using the in-fold data. The average accuracy was calculated over all folds. To minimize variations from random dataset partitioning, training and validation of each model was repeated ten times. The overall accuracy of each classification model was obtained by averaging over the ten runs.
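A scikit-learn analogue of this repeated cross-validation workflow is sketched below as a stand-in for the MATLAB Classification Learner app. Two of the seven classifier families are shown with fixed illustrative hyperparameters rather than the Bayesian hyperparameter optimization described above; X is the matrix of normalized spectra and y the species labels.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(X: np.ndarray, y: np.ndarray, n_repeats: int = 10) -> None:
    """Repeated five-fold cross-validation: each run repartitions the
    spectra into five disjoint folds; accuracy is averaged over folds,
    then over runs."""
    models = {
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)),
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(5)),
    }
    for name, model in models.items():
        accs = []
        for seed in range(n_repeats):
            cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
            accs.append(cross_val_score(model, X, y, cv=cv).mean())
        print(f"{name}: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```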
Four bacterial species, including E. coli, Listeria monocytogenes, Staphylococcus aureus, and Salmonella spp., were perfectly classified. Only three Bacillus cereus colonies (out of a total of 222 samples) were misclassified as Staphylococcus aureus, yielding an overall accuracy of 98.6%. These results suggest that both Raman and fluorescence signals from the colonies grown on the agar contributed to the differentiation of the bacterial species, and that the machine learning classification models compensated for small spectral differences among the five species. The trained classification models can be saved and used in the LabVIEW system software to make real-time predictions for newly collected spectral data, which is faster than existing detection techniques such as polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA). This demonstration was carried out using a limited number of bacterial species and colonies on a single agar background. The system is also capable of more complex work directed to more species, colonies, and agars for rapid, automated, and intelligent identification of foodborne bacteria.
The invention described herein comprises an innovative multimodal optical sensing system and protocol based on dual-band laser Raman spectroscopy. The system hardware, software, and system integration techniques provide a new tool for automated and intelligent food safety inspection. Two pairs of lasers and spectrometers working at different wavelengths enable high-quality Raman scattering signals to be obtained from both low- and high-fluorescence food samples in a single measurement system. By utilizing machine vision and motion control techniques, the system can conduct fully automated spectral acquisition for samples randomly scattered in Petri dishes or placed in customized well plates, which is more flexible and versatile than commercial integrated Raman systems using standard microplates. The concept of automated spectral collection can be extended beyond Raman to other spectroscopy techniques.
Interesting targets in the samples can be identified and labeled using the real-time image and spectral processing and artificial intelligence functions that are integrated into the in-house developed system software. Classification models that use machine learning approaches can compensate for small spectral differences in direct Raman measurements and improve prediction accuracies for various food safety applications. The system is intended to be used by food safety regulatory agencies as an initial screening tool for quick species identification of common foodborne bacteria. The prototype is compact and easily portable, which makes it suitable for field and on-site food safety inspections in a wide variety of industrial and regulatory applications.
For the foregoing reasons, it is clear that the subject matter described herein provides an innovative portable multimodal optical sensing system that may be used in multiple varying applications. The current system may be modified in multiple ways and applied in various technological applications. The disclosed method and apparatus may be modified and customized as required by a specific operation or application, and the individual components may be modified and defined, as required, to achieve the desired result.
Although most of the materials of construction are not described, they may include a variety of compositions consistent with the function described herein. Such variations are not to be regarded as a departure from the spirit and scope of this disclosure, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.
The amounts, percentages and ranges disclosed in this specification are not meant to be limiting, and increments between the recited amounts, percentages and ranges are specifically envisioned as part of the invention. All ranges and parameters disclosed herein are understood to encompass any and all sub-ranges subsumed therein, and every number between the endpoints. For example, a stated range of “1 to 10” should be considered to include any and all sub-ranges between (and inclusive of) the minimum value of 1 and the maximum value of 10 including all integer values and decimal values; that is, all sub-ranges beginning with a minimum value of 1 or more, (e.g., 1 to 6.1), and ending with a maximum value of 10 or less, (e.g. 2.3 to 9.4, 3 to 8, 4 to 7), and finally to each number 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 contained within the range.
Unless otherwise indicated, all numbers expressing quantities of ingredients, properties such as molecular weight, reaction conditions, and so forth as used in the specification and claims are to be understood as being modified in all instances by the implied term “about.” If the (stated or implied) term “about” precedes a numerically quantifiable measurement, that measurement is assumed to vary by as much as 10%. Essentially, as used herein, the term “about” refers to a quantity, level, value, or amount that varies by as much as 10% relative to a reference quantity, level, value, or amount. Accordingly, unless otherwise indicated, the numerical properties set forth in the following specification and claims are approximations that may vary depending on the desired properties sought to be obtained in embodiments of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention belongs. Although any methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present invention, the preferred methods and materials are now described.
The term “consisting essentially of” excludes additional method (or process) steps or composition components that substantially interfere with the intended activity of the method (or process) or composition, and can be readily determined by those skilled in the art (for example, from a consideration of this specification or practice of the invention disclosed herein). The invention illustratively disclosed herein suitably may be practiced in the absence of any element which is not specifically disclosed herein. The term “an effective amount” as applied to a component or a function excludes trace amounts of the component, or the presence of a component or a function in a form or a way that one of ordinary skill would consider not to have a material effect on an associated product or process.