High Dynamic Range (HDR) imaging is a digital imaging technique that captures a greater dynamic range between the lightest and darkest areas of an image. A process for automatically optimizing a dynamic range of pixel intensity obtained from a digital image is described in U.S. Pat. No. 7,978,258 to Christiansen et al. HDR takes several images at different exposure levels and uses an algorithm to stitch them together to create an image that has both dark and light spots, without compromising the quality of either one. However, HDR can present a distortion of reality because it distorts the intensity of the image overall. Accordingly, HDR techniques that enhance contrast without distorting the intensity of the image continue to be sought.
Techniques for enhancing an image of a biological sample are described in WO 2012/152769 to Allano et al. Among the problems with imaging such samples identified in Allano et al. are:
i) the size of the colonies being viewed;
ii) the proximity of one colony to another;
iii) the color mix of the colonies;
iv) the nature of the Petri Dish; and
v) the nature of the culture medium; as well as other factors.
Allano et al.'s proposed solution to the problem of imaging a biological sample is to prepare a source image from images obtained at each color, remove predetermined absorption effects of the culture medium and the culture vessel, and determine a value for photon flux and exposure time using a predetermined exposure. The image so obtained is then dissected into luminosity zones. From those zones, image luminosity is obtained and used to determine whether the value for photon flux and exposure time was correct or whether a new value for photon flux and exposure time should be used for image capture.
A problem with the above techniques is that they do not provide a system with the ability to provide imaging conditions that can detect the very subtle changes in contrast required for image-based detection and identification of microbes on growth media. Because image-based evidence of microbes and their growth on media is, or at least can be, difficult to detect, more robust techniques for imaging such samples are sought.
Described herein is a system and method that enhances image capture for images with low or variable contrast. One example of such a challenging imaging environment is that of bacterial colonies growing on agar growth plates. The bacterial colonies can vary from light colors to dark colors and reflect light differently than the agar. The time to capture an image of a colony is short (approximately one second). Typically, an image of the growth plate is taken every 3 to 6 hours.
An image is acquired in a series of N image acquisitions at each time interval "x" (i.e. t0, t1 . . . tx). The first acquisition (N=1) uses default values for the light intensity and exposure time, referred to herein as "photon flux and exposure time." The photon flux value defines the number of photons reaching the scene per unit time and unit area ((photon quantity)·(time⁻¹)·(area⁻¹)). The exposure time is the integration time at the camera's sensor; it determines the number of photons captured by the sensor for one frame acquisition. Said another way, photon flux is the rate of flow of photons from the light source, and exposure time determines the quantity of those photons received by the sensor for image acquisition. For a given photon flux, exposure time controls image intensity.
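The relationship between photon flux, exposure time, and recorded intensity can be sketched with a simplified linear sensor model (the gain constant, saturation level, and function name below are illustrative assumptions, not part of the described system):

```python
# Simplified linear sensor model: the recorded grey value is photon flux
# (photons per unit time and area) integrated over the exposure time,
# scaled by a gain and clipped at the sensor's saturation level.

SATURATION = 255  # illustrative 8-bit full-scale value

def recorded_intensity(photon_flux, exposure_time, gain=0.01):
    """Grey value produced by a given photon flux and exposure time."""
    signal = photon_flux * exposure_time * gain
    return min(SATURATION, round(signal))

flux = 5000.0  # photons per (second * unit area), held constant
print(recorded_intensity(flux, 0.5))  # short exposure: dark pixel (25)
print(recorded_intensity(flux, 6.0))  # long exposure: saturated (255)
```

For a fixed flux, doubling the exposure time doubles the grey value until the pixel saturates, which is why exposure time can serve as the variable controlling photon flux integration.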
One skilled in the art is aware of many different ways to control photon flux to influence image intensity. As noted above, one technique controls the exposure time of the image. There are other techniques that can be used to control the intensity of the light transmitted to the sensor. For example, filters, apertures, etc. are used to control the photon flux, which in turn controls the intensity. Such techniques are well known to the skilled person and not described in detail herein. For purposes of the embodiments of the invention described herein, the light intensity is set constant and exposure time is the variable used to control photon flux integration.
In the embodiments where photon flux is controlled by controlling the exposure time, initial exposure time values are obtained from system calibration. The system is calibrated using a library of calibration plates. Baseline calibration is obtained as a function of plate type and media type. When the system is used to interrogate new growth plates, the calibration data for a particular plate type and media type is selected. In this regard, growth plates can be: mono-plates (i.e. for one media); bi-plates (two media); tri-plates (three media), etc. Each type of growth plate presents unique imaging challenges. The calibration provides a default exposure time for capturing the first image (image N=1) of the growth plate. The calibration also makes it possible for the system (or system operator) to determine which parts of the image are plate (i.e. not background) and, of the plate portions of the image, which portions are media (the nutrients used to cultivate the colonies) and which portions are, at least potentially, colonies.
Image N=1 of a growth plate is captured using the default values obtained from calibration. If an averaging technique is used to capture the digital images of the growth plate, the bright pixels will have a better signal-to-noise ratio (SNR) than the dark pixels. In the method described herein, signals are isolated for individual pixels, regardless of whether the pixels are light or dark. For a predetermined number of pixels, the intensity, exposure time and SNR are determined, and a "map" of these values in the image context is prepared. From this map, a new exposure time that will preferably saturate no more than a predetermined fraction of pixels is selected for the N+1 image acquisition. Preferably, an exposure time at which only a very small fraction of pixels is saturated is determined and used to capture the final image.
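The selection of an exposure time that saturates no more than a predetermined fraction of pixels can be sketched as follows (a linear pixel response is assumed, and the helper names and thresholds are illustrative):

```python
# Choose the longest candidate exposure for which the fraction of
# saturated pixels stays at or below a predetermined limit.

SATURATION = 255

def saturated_fraction(grey_values, exposure, ref_exposure):
    """Fraction of pixels that would saturate if re-imaged at `exposure`,
    given grey values measured at `ref_exposure` (linear response)."""
    scaled = [gv * exposure / ref_exposure for gv in grey_values]
    return sum(1 for s in scaled if s >= SATURATION) / len(scaled)

def pick_exposure(grey_values, ref_exposure, candidates, max_frac=0.01):
    """Longest candidate exposure keeping saturation under the limit."""
    ok = [e for e in candidates
          if saturated_fraction(grey_values, e, ref_exposure) <= max_frac]
    return max(ok) if ok else min(candidates)

pixels = [10, 40, 120, 200]                              # grey values at E = 1.0
print(pick_exposure(pixels, 1.0, [0.5, 1.0, 1.2, 2.0]))  # 1.2
```

At an exposure of 2.0 the brightest pixel (200) would scale to 400 and saturate, so 1.2 is the longest exposure meeting the saturation constraint.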
From this, a map of the SNR for each non-saturated pixel is generated, in which the SNR is updated with each acquisition (i.e. the grey value is refined and the SNR improved for the non-saturated pixels). An image is simulated based on this map.
An optimization function algorithm is used to map each grey value intensity for each pixel to the exposure time corresponding to the optimal SNR for that pixel. The optimization algorithm begins with the initial image (N=1), which was captured using the predetermined default exposure time. An intensity, exposure, and SNR map is generated for the entire image. The exposure time is adjusted based on image N and another image (N+1) is captured. The new exposure time is chosen to boost the signal from the dark parts of the image, even though this overexposes (saturates) the light parts. The intensity map, exposure map, and SNR map are updated for each pixel. This is an iterative process; images are acquired until the maximum SNR for each pixel is reached, the maximum number of images is reached, or the maximum allotted time has elapsed.
Essentially, the dark spots remain dark, the bright spots remain bright, and the SNR is improved. The agar growth medium acts as the background for the digital images. A pixel in the image that is different in some way (e.g. a different intensity) from previous images indicates that either the colony is growing or there is contamination (e.g. dust) on the plate. This technique can be used to look at multiple plates at one time.
Because the SNR is significantly improved, details that previously could not be seen or trusted can be revealed with confidence, allowing detection of very small colonies early in timed plate imaging. The systems and methods also provide images corresponding to an optimal exposure time with specific and controlled saturation over the scene or object of interest.
Once the image acquisition at time t0 is complete, the process of iterative image acquisition is stopped for that time interval. When the predetermined time interval from t0 to t1 has elapsed, the iterative image acquisition process is repeated until the desired confidence in the integrity of the acquired image has been obtained. The signal to noise ratio is inversely proportional to the standard deviation (i.e. SNR = gv′/standard deviation). Therefore, an image acquisition that yields a maximum SNR per pixel (i.e. a minimum standard deviation per pixel) will provide an image with a high confidence associated with a time Tx. For example, a high SNR image is obtained for a plate that has been incubated for four hours (T1 = 4 hours). Another high SNR image of the same plate is obtained after the plate has been incubated for an additional four hours (T2 = 8 hours).
Once an image associated with a subsequent time (Tx+1) is obtained, that image (or at least selected pixels of the image associated with an object of interest) can be compared with the image associated with the previous time (Tx) to determine if the subsequent image provides evidence of microbial growth and to determine the further processing of the plate.
The system described herein is capable of being implemented in optical systems for imaging microbiology samples for the identification of microbes and the detection of microbial growth of such microbes. There are many such commercially available systems, which are not described in detail herein. One example is the BD Kiestra™ ReadA Compact intelligent incubation and imaging system (2nd generation BD Kiestra™ incubator). Such optical imaging platforms have been commercially available for many years (originally CamerA PrimerA from Kiestra® Lab Automation), and are therefore well known to one skilled in the art and not described in detail herein. In one embodiment, the system is a non-transitory computer-readable medium (e.g. a software program) that cooperates with an image acquisition device (e.g. a camera) to provide high quality imaging by maximizing the Signal to Noise Ratio (SNR) for every pixel in the image. For each pixel and each color (e.g. channel), the intensity and exposure time are recorded, and the system then predicts the next best exposure time to improve the SNR of the whole scene or of objects of interest in the scene. One skilled in the art will appreciate that the multiple values obtained per pixel will depend upon the pixels and the imaging system. For example, in an RGB imaging system, values are obtained for each channel (i.e. red, green, and blue). In other systems, the values are obtained for different spectral bands or wavelengths.
Initially, the system is calibrated. Calibration of imaging systems such as the one described herein is well known to one skilled in the art, and a variety of calibration approaches are known. Described herein are examples of system calibration that provide a baseline against which the captured images are evaluated. During calibration, calibration plates (e.g. plates with media but no colonies) are used and the system image acquisition is calibrated against the known input. A library of calibration values for each type of plate media is created, and the calibration data used for a particular plate is selected based on the media in the test plate. Both the system and the data are calibrated. For data calibration, SNR, linearity, black level, etc. are determined for each pixel of the captured image of the calibration plate. System calibration includes, but is not limited to, lens distortion, chromatic aberrations, spatial resolution, etc.
Following calibration, images of new plates are acquired. The pixels in the image are analyzed in real time in order to estimate the exposure time that will improve the SNR of the pixels with an SNR that is either below a predetermined threshold or for those pixels with the lowest SNR. Typical imaging systems only retain intensity values for the pixels in the image. In the embodiments described herein, intensity and exposure time are recorded for each pixel. The same pixel is imaged at different exposure times and intensity information is combined to generate high SNR data. From this information, an image can be generated for any specified exposure time, or the best exposure time can be extracted to control pixel saturation.
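Combining the intensities of one pixel imaged at several exposure times into a single high-SNR estimate can be sketched like this (an exposure-normalized average over non-saturated samples is assumed here; the specification does not commit to this exact estimator):

```python
SATURATION = 255

def combine_exposures(samples):
    """samples: list of (grey_value, exposure_time) pairs for one pixel.
    Returns an estimate of the grey value per unit exposure time, using
    only non-saturated samples (saturated values carry no information)."""
    usable = [(gv, e) for gv, e in samples if gv < SATURATION]
    if not usable:
        return None  # pixel saturated at every exposure tried
    # Each non-saturated sample independently estimates gv / exposure.
    return sum(gv / e for gv, e in usable) / len(usable)

# A pixel imaged three times; the longest exposure saturated and is ignored.
print(combine_exposures([(50, 1.0), (100, 2.0), (255, 8.0)]))  # 50.0
```

Given this normalized estimate, an image can be rendered for any specified exposure time by multiplying the estimate back by that exposure and clipping at saturation.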
From a quantitative aspect, the high SNR greatly improves confidence in subtle intensity, color, and texture variations, allowing better performance of subsequent object recognition or database comparison. The analysis is done on a grey scale with comparison to the grey value of the pixel in a prior image (i.e. for image N, the value of the pixel in image N−1). In addition to comparison with the same pixel's grey value in the prior image, the grey values of adjacent pixels are compared with the pixel's grey value to determine differences (e.g. the colony/media interface).
The SNR of dark or colored objects is uneven across the different channels, or very poor when compared to bright objects. To improve on this, the system and method described herein deploy an image detection module in which object detection is based upon contrast, SNR, and size/resolution. SNR is improved in both dark and bright regions; the standard deviation is decreased, making local contrast equally significant in bright and dark regions. The goal is to provide a system that detects even subtle differences between the x and x+1 time-interval images of a plate suspected to contain a growing culture. Those differences must be distinguishable from the "noise" that results from signal variations rather than from changes in the sample attributable to a growing culture. The systems and methods described herein are especially valuable when objects of interest in the scene exhibit very different colors and intensities (reflectance or absorbance).
Specifically, the system and method provide automatic adaptation of the dynamic range (extended dynamic range) to accommodate the scene. The system and method provide both the minimum exposure time for saturating the brightest pixel and the maximum exposure time for saturating the darkest pixel (within the physical and electronic constraints of the image acquisition equipment, e.g. the camera). The system and method provide faster convergence toward a minimum SNR per pixel when compared to image averaging, and provide improved confidence in colors: the SNR for red, green and blue values is homogenized regardless of intensity disparities among the red, green, and blue colors.
Intensity confidence intervals are known per pixel, which is very valuable for any subsequent classification effort. The SNR optimization provided by the system and method can be supervised (weighting of detected objects of interest to compute next image acquisition's exposure times).
Intensity, exposure time and estimated SNR are determined from calibration and physics theory for each pixel. To further improve image quality, chromatic aberration and lens distortion are also calibrated and corrected to render an image free of such defects.
The system and method can control pixel SNR for the image either in an automatic mode or in a supervised mode where certain portions of the image are of particular interest. In the automatic mode, the whole image of the scene is optimized and all pixels are treated equally. In the supervised mode, the scene is further analyzed when acquired to detect the objects of interest, and SNR maximization favors the regions containing those objects.
In automatic mode, the image acquisition will stop after the first of the three following conditions occurs: (1) a minimum level of SNR is reached for each and every pixel; (2) a predetermined number of acquisitions have been performed on this scene; or (3) the maximum allowed acquisition time has been reached.
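The three stop conditions can be sketched as a loop (acquire and clock are hypothetical hooks standing in for the acquisition hardware and timer; the toy SNR model is illustrative):

```python
def acquire_until_done(acquire, min_snr, max_images, max_seconds, clock):
    """Iterate acquisitions until the first stop condition is met:
    (1) every pixel reaches min_snr, (2) max_images acquisitions done,
    or (3) the allotted acquisition time has elapsed.
    acquire(n) returns the per-pixel SNR map after n acquisitions;
    clock() returns elapsed seconds."""
    n = 0
    while True:
        n += 1
        snr_map = acquire(n)
        if all(snr >= min_snr for snr in snr_map):
            return n, "min SNR reached"
        if n >= max_images:
            return n, "max acquisitions reached"
        if clock() >= max_seconds:
            return n, "time limit reached"

# Toy model: per-pixel SNR grows with the square root of the count.
fake_acquire = lambda n: [10 * n ** 0.5, 8 * n ** 0.5]
print(acquire_until_done(fake_acquire, min_snr=20,
                         max_images=10, max_seconds=60, clock=lambda: 0))
```

With this toy model the darker pixel (growing as 8·√n) is the last to cross the threshold, so the loop ends after the seventh acquisition with "min SNR reached".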
Referring to
The image acquisition module 120 is in communication with the system calibration module 110. The image acquisition module captures an image of the object under analysis. The image is captured using exposure time and other criteria determined in a manner described in detail hereinbelow in the context of specific examples. As discussed above, image acquisition proceeds in an iterative manner until a predetermined SNR threshold is met for each pixel or until a predetermined number of images have been captured. The image presentation module 130 provides the image with the best dynamic range (i.e. the brightest non-saturated pixels are just below saturation), either globally (i.e. in automatic mode) or restricted to the objects of interest (i.e. in supervised mode).
Referring to
More details about the image acquisition module are described in
Image acquisition occurs at times t0, t1, . . . tx. At each time, an image is acquired through a series of N image acquisitions. The series of N image acquisitions iterates to a SNR for the acquired image that correlates with high confidence in image integrity.
Image acquisition at a given time (e.g. t0) and update is illustrated in
According to one embodiment, the pixels are updated as follows. Grey value, reference exposure time and signal to noise ratio represent the information stored for each illumination configuration (top, side, bottom, or a mixture of them) per plate (image object). This information is updated after each new acquisition and is initialized according to the first image acquisition (N=1). In one embodiment, gv_{x,y,1} is a grey value (gv) at image position (x,y) corresponding to the 1st image capture (N=1) of the plate using exposure time E_1 and respective Signal to Noise Ratio SNR_gv. In this embodiment:
The black level is noisy, and the iterative image acquisition process obtains an image that is "less noisy" (i.e. an image with a higher confidence level). The black value is a function of exposure time; it is a default value that is not recalculated during image acquisition.
SNR = 0 when a pixel saturates at a given exposure time and light source intensity (hence no improvement in SNR is possible for that pixel). Only values from the non-saturated pixels are updated.
N=1: The initial exposure time is the best known default exposure time (a priori) or an arbitrary value.
This is determined from calibration for the particular plate and media for the new plate under analysis.
Grey value, reference exposure time and signal to noise ratio are updated after each new image acquisition (i.e. N=2, 3, 4 . . . N) according to the following embodiment. Grey value gv_{x,y,N} for image position (x,y) corresponds to the Nth image capture of the plate using exposure time E_N and respective Signal to Noise Ratio SNR_{x,y,N}. In this embodiment:
Therefore, the updated SNR for a pixel in the Nth image acquisition is the square root of the sum of the squared updated signal to noise ratio from the prior image acquisition and the squared signal to noise ratio of the current image acquisition. Each acquisition provides an updated value (e.g. E′_{x,y,N}) for each pixel; that updated value is then used to calculate the updated value for the next image acquisition. SNR = 0 for a pixel that saturates at a given exposure time and light source intensity, and only the non-saturated pixels are updated. The Nth exposure time corresponds to a supervised optimization whose goal is to maximize the SNR for the objects of interest. The object of interest can be the entire plate, the colonies, a portion of the plate, or the whole image.
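The quadrature update rule described above can be sketched directly (the function name is illustrative):

```python
import math

def update_snr(snr_prior, snr_current, saturated=False):
    """Updated per-pixel SNR after the Nth acquisition:
    SNR'_N = sqrt(SNR'_(N-1)^2 + SNR_N^2).
    A saturating pixel contributes SNR = 0 and is left unchanged."""
    if saturated:
        return snr_prior  # no improvement possible for saturated pixels
    return math.sqrt(snr_prior ** 2 + snr_current ** 2)

print(update_snr(3.0, 4.0))                   # 5.0
print(update_snr(5.0, 12.0))                  # 13.0
print(update_snr(5.0, 99.0, saturated=True))  # 5.0
```

Because the contributions add in quadrature, each new non-saturated acquisition can only raise a pixel's accumulated SNR, which is what drives the iteration toward the per-pixel maximum.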
After updating the image data with a new acquisition, the acquisition system is able to propose the next acquisition time that would best maximize SNR according to environmental constraints (minimum required SNR, saturation constraints, maximum allowed acquisition time, region of interest). In embodiments where image acquisition is supervised (x,y ∈ object), only the object pixels are considered for the evaluations. In embodiments where image acquisition is not supervised, the default object is the entire image.
With reference to
Referring to
In one embodiment, the acquired image is analyzed pixel by pixel for saturated pixels. If E_N results in pixel saturation that exceeds predetermined limits, a lower value for E_{N+1} is selected. For example, if the minimal exposure time was not acquired yet and the % of saturated pixels (gv′x,y,N,E′
A new image is acquired at the new exposure time. For the new image, the secondary checked constraints are the minimum desired SNR per pixel (the lower SNR threshold) and the overall acquisition time (or Nmax) allowed for this image. If the overall acquisition time for this scene has reached the time limit, or if every updated SNR per pixel satisfies SNR′_{x,y,N} ≥ MinSNR, then the image data is considered acceptable and acquisition of the scene ends for the time interval (e.g. t0). When image acquisition commences at time tx (e.g. time t1), the best exposure time (E_{Nfinal}) leading to sub-saturation conditions in the previous acquisition (e.g. at time t0) is used as the initial value for E. The process for image acquisition at tx is otherwise identical to the process at time t0.
If the saturation constraint is lifted (no significant saturation) the next optimal exposure time is determined and investigated. First, the exposure time boundary limits are computed over the region of interest. These exposure time boundaries are: i) the exposure time to saturate the brightest pixels; and ii) the exposure time to saturate the darkest pixels.
The exposure time for saturating the brightest non-saturated pixels, E_min, is determined from the grey value gv_max that corresponds to the absolute maximum intensity and E′gv
The exposure time for saturating the darkest pixels, E_max, is determined from the grey value gv_min that corresponds to the absolute minimum intensity and E′gv
The next optimal exposure time is chosen among all candidate exposure times between E_min and E_max by simulation. Specifically, an exposure time is determined by simulation that will maximize the updated mean SNR (over all pixels below the minimum signal to noise ratio threshold) after adding the simulated image at tested exposure time E_{test,N+1}. The simulated image at E_{test,N+1} is generated as follows (for each and every pixel).
Grey value gv′x,y,N,E′
After updating this value with the value for the pixel from the simulated image at E_{test,N+1}, the SNR for this (x,y) pixel will be:
SNR′_{x,y,N+1} = √(SNR′_{x,y,N}² + SNR_{x,y,N+1}²)
The next best exposure time E_{best,N+1} is then determined by:
E_{best,N+1} = E_{test,N+1} ∈ [E_min, E_max];
with Σx,y∈objectE
If image acquisition and analysis is supervised (x,y ∈ object), the SNR is integrated for the objects of interest only. In automatic mode, the object is the whole image.
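The search for the next best exposure time can be sketched end-to-end (a linear response and a shot-noise-limited SNR proportional to the square root of the signal are assumed; all constants and helper names are illustrative):

```python
import math

SATURATION = 255

def simulated_snr(gv_ref, e_ref, e_test):
    """SNR of one pixel re-imaged at e_test, given its grey value at
    e_ref. Shot-noise model: SNR ~ sqrt(signal); saturated pixels -> 0."""
    gv = gv_ref * e_test / e_ref
    return 0.0 if gv >= SATURATION else math.sqrt(gv)

def best_exposure(pixels, e_ref, candidates, min_snr=10.0):
    """Candidate exposure maximizing the mean updated SNR over pixels
    still below the minimum SNR threshold (quadrature update)."""
    def score(e_test):
        below = [(gv, snr) for gv, snr in pixels if snr < min_snr]
        if not below:
            return 0.0
        return sum(math.sqrt(snr ** 2 + simulated_snr(gv, e_ref, e_test) ** 2)
                   for gv, snr in below) / len(below)
    return max(candidates, key=score)

# (grey value at the reference exposure, accumulated SNR) per pixel
pixels = [(8, 2.0), (30, 5.0), (200, 14.0)]
print(best_exposure(pixels, e_ref=1.0, candidates=[0.5, 1.0, 4.0, 8.0]))  # 8.0
```

The bright pixel (200) already exceeds the SNR threshold, so the search is free to pick a long exposure that favors the two dark pixels even though it would saturate the bright one.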
Once the image has been obtained as described above, it is compared with an image of the plate that has been incubated for a different amount of time. For example, an image of a plate is obtained as described herein after the plate has been incubated for four hours (T1 = 4 hours). After four or more additional hours, another image of the plate is obtained as described above (T2 = 8 hours). The high SNR image obtained at Tx+1 can then be compared with the high SNR image at Tx. Changes between the two images are evaluated to ascertain evidence of microbial growth. Decisions on further processing (e.g. plate is positive, plate is negative, plate requires further incubation) are based on this comparison.
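The comparison between the high SNR images at Tx and Tx+1 can be sketched as a per-pixel change test (the fixed threshold is an illustrative stand-in for a confidence-based test):

```python
def changed_pixels(image_t1, image_t2, threshold=5):
    """Indices where the grey value changed by more than `threshold`
    between two high-SNR images of the same plate: candidate evidence
    of colony growth (or of contamination)."""
    return [i for i, (a, b) in enumerate(zip(image_t1, image_t2))
            if abs(a - b) > threshold]

t1 = [40, 40, 41, 200]  # image after the first incubation interval
t2 = [40, 72, 41, 200]  # later image: one pixel has brightened
print(changed_pixels(t1, t2))  # [1]
```

Because the per-pixel SNR (and hence the standard deviation) is known, a real implementation could set the threshold per pixel from its confidence interval rather than using a single global value.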
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims.
The present application is a continuation of U.S. application Ser. No. 15/115,715, filed Aug. 1, 2016, which application is a national phase entry under 35 U.S.C. § 371 of International Application No. PCT/EP2015/052017 filed Jan. 30, 2015, published in English, which claims priority from U.S. Provisional Application No. 61/933,426, filed Jan. 30, 2014, all disclosures of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4700298 | Palcic | Oct 1987 | A |
4724215 | Farber et al. | Feb 1988 | A |
5403722 | Floeder et al. | Apr 1995 | A |
5510246 | Morgan | Apr 1996 | A |
5694478 | Braier et al. | Dec 1997 | A |
5723308 | Mach et al. | Mar 1998 | A |
5976892 | Bisconte | Nov 1999 | A |
6122396 | King et al. | Sep 2000 | A |
6134354 | Lee et al. | Oct 2000 | A |
6345115 | Ramm et al. | Feb 2002 | B1 |
6385272 | Takahashi | May 2002 | B1 |
6453060 | Riley et al. | Sep 2002 | B1 |
6605446 | Eden | Aug 2003 | B2 |
6718077 | Ferreira et al. | Apr 2004 | B1 |
7060955 | Wang | Jun 2006 | B1 |
7065236 | Marcelpoil et al. | Jun 2006 | B2 |
7106889 | Mahers et al. | Sep 2006 | B1 |
7117098 | Dunlay et al. | Oct 2006 | B1 |
7298886 | Plumb et al. | Nov 2007 | B2 |
7319031 | Vent et al. | Jan 2008 | B2 |
7341841 | Metzger et al. | Mar 2008 | B2 |
7351574 | Vent | Apr 2008 | B2 |
7496225 | Graessle et al. | Feb 2009 | B2 |
7582415 | Straus | Sep 2009 | B2 |
7596251 | Affleck et al. | Sep 2009 | B2 |
7623728 | Avinash et al. | Nov 2009 | B2 |
7666355 | Alavie | Feb 2010 | B2 |
7732743 | Buchin | Jun 2010 | B1 |
7738689 | Plumb et al. | Jun 2010 | B2 |
7796815 | Muschler et al. | Sep 2010 | B2 |
7863552 | Cartildge et al. | Jan 2011 | B2 |
7865008 | Graessle et al. | Jan 2011 | B2 |
7884869 | Shurboff et al. | Feb 2011 | B2 |
7957575 | Plumb et al. | Jun 2011 | B2 |
7978258 | Christiansen et al. | Jul 2011 | B2 |
7989209 | Marcelpoil et al. | Aug 2011 | B2 |
8094916 | Graessle et al. | Jan 2012 | B2 |
8131477 | Li et al. | Mar 2012 | B2 |
8260026 | Plumb et al. | Sep 2012 | B2 |
8417013 | Bolea et al. | Apr 2013 | B2 |
8428331 | Dimarzio et al. | Apr 2013 | B2 |
8570370 | McCollum et al. | Oct 2013 | B2 |
8588505 | Bolea | Nov 2013 | B2 |
8759080 | Graessle et al. | Jun 2014 | B2 |
8831326 | Nishida et al. | Sep 2014 | B2 |
8840840 | Bolea | Sep 2014 | B2 |
8855397 | Goldberg et al. | Oct 2014 | B2 |
8895255 | Goldberg et al. | Nov 2014 | B1 |
8896706 | van den Hengel et al. | Nov 2014 | B2 |
9012209 | Eden et al. | Apr 2015 | B2 |
9042967 | Dacosta et al. | May 2015 | B2 |
9292729 | Guthrie et al. | Mar 2016 | B2 |
9359631 | Dupoy et al. | Jun 2016 | B2 |
9378545 | Bise et al. | Jun 2016 | B2 |
9400242 | Allano et al. | Jul 2016 | B2 |
9470624 | Guthrie et al. | Oct 2016 | B2 |
9567621 | Robinson et al. | Feb 2017 | B2 |
9576181 | Allano et al. | Feb 2017 | B2 |
9606046 | Decaux et al. | Mar 2017 | B2 |
20020001402 | Berliner | Jan 2002 | A1 |
20020050988 | Petrov et al. | May 2002 | A1 |
20020196964 | Stone et al. | Dec 2002 | A1 |
20030022761 | Brereton | Jan 2003 | A1 |
20030091221 | Marcelpoil et al. | May 2003 | A1 |
20030138140 | Marcelpoil et al. | Jul 2003 | A1 |
20040253660 | Gibbs et al. | Dec 2004 | A1 |
20050254722 | Fattal et al. | Nov 2005 | A1 |
20060173266 | Pawluczyk et al. | Aug 2006 | A1 |
20070177149 | Aronkyto | Aug 2007 | A1 |
20080297597 | Inomata et al. | Dec 2008 | A1 |
20100061618 | Marcelpoil et al. | Mar 2010 | A1 |
20100067775 | Marcelpoil et al. | Mar 2010 | A1 |
20100136549 | Christiansen | Jun 2010 | A1 |
20120013603 | Liu et al. | Jan 2012 | A1 |
20120134603 | Pang et al. | May 2012 | A1 |
20120148142 | Ortyn et al. | Jun 2012 | A1 |
20130051700 | Kensei | Feb 2013 | A1 |
20130252271 | Ullery | Sep 2013 | A1 |
20140161330 | Allano | Jun 2014 | A1 |
20140219553 | Van Den Hengel | Aug 2014 | A1 |
20140278136 | Shamsheyeva et al. | Sep 2014 | A1 |
20150225684 | Spicer et al. | Aug 2015 | A1 |
20150268163 | Dupoy et al. | Sep 2015 | A1 |
20150339513 | Bolea | Nov 2015 | A1 |
20150353983 | Drazek et al. | Dec 2015 | A1 |
20160060676 | Lei | Mar 2016 | A1 |
20160083686 | Triva | Mar 2016 | A1 |
20160093033 | Allano et al. | Mar 2016 | A1 |
20160098840 | Allano et al. | Apr 2016 | A1 |
20160328844 | Triva | Nov 2016 | A1 |
Number | Date | Country |
---|---|---|
102590087 | Jul 2012 | CN |
203385649 | Jan 2014 | CN |
1065496 | Jan 2001 | EP |
2520923 | Nov 2012 | EP |
2578693 | Apr 2013 | EP |
1163362 | Jul 2013 | EP |
2430461 | Mar 2014 | EP |
2287284 | Dec 2016 | EP |
H05-340816 | Dec 1993 | JP |
2005-504276 | Feb 2005 | JP |
2005-509140 | Apr 2005 | JP |
2012529065 | Nov 2012 | JP |
2013-066142 | Apr 2013 | JP |
WO 03025554 | Mar 2003 | WO |
2012152768 | Nov 2012 | WO |
2012152769 | Nov 2012 | WO
2014098994 | Jun 2014 | WO |
WO-2015162364 | Oct 2015 | WO |
WO-2015173490 | Nov 2015 | WO |
2016001555 | Jan 2016 | WO |
2016011534 | Jan 2016 | WO |
2016083744 | Jun 2016 | WO |
2016097092 | Jun 2016 | WO |
2016172388 | Oct 2016 | WO |
2017006055 | Jan 2017 | WO |
Entry |
---|
International Search Report and Written Opinion for Application No. PCT/EP2015/052017 dated Jun. 15, 2015. |
International Preliminary Examination Report on Patentability for International Application No. PCT/US2006/018516, dated Sep. 27, 2007 (pp. 1-14), including replacement claims (pp. 35-43). |
Office Action for Canadian Application No. 2,607,609; dated Oct. 23, 2012. |
Office Action for Japanese Application No. 2008-511426 dated Oct. 2, 2012. |
Office Action for Korean Application No. 10-2007-7028983 dated Jul. 24, 2012. |
Office Action for Mexican Application No. MX/a/2007/014016; dated Nov. 27, 2012. |
Office Action for U.S. Appl. No. 12/620,670 dated Jan. 4, 2012. |
Office Action for U.S. Appl. No. 12/620,670, dated Aug. 2, 2012. |
Office Action for U.S. Appl. No. 12/620,701; dated Oct. 26, 2011. |
Chinese Search Report issued in corresponding CN Application No. 2015800064078, pp. 2. |
Goesele, M., et al., “Color Calibrated High Dynamic Range Imaging with ICC Profiles”, Proceedings of the 9th IS&T Color Imaging Conference, Scottsdale, Arizona, Nov. 6, 2001., pp. 286-290. |
BD Kiestra Total Lab Automation, Proven Systems, Proven Results, BD Diagnostics, 10/12. |
Starizona's Guide to CCD Imaging, Oct. 30, 2013, pp. 1-8. |
Zhou, R. et al, “A Multiple Wavelength Algorithm in Color Image Analysis and its Applications in Stain Decomposition in Microscopy Images,” Medical Physics, vol. 23(12)., 1996, pp. 1977-1986. |
Ancin, H., et al., “Advances in Automated 3-D Image Analysis of Cell Populations Imaged by Confocal Microscopy”, Cytometry, vol. 25, No. 3., 1996, pp. 221-234. |
Lawless et al., Colonyzer: automated quantification of micro-organism growth characteristics on solid agar, BMC Bioinformatics 2010, 11:287, http://biomedcentral.com/1471-2105/11/287., 2010, 12 pages. |
CamerA PrimerA Manual Interactive (MI) User Manual, Kiestra Lab Automation, Aug. 23, 2007, 34 pages. |
Madden, B.C., "Extended Intensity Range Imaging", Tech. Report, University of Pennsylvania, GRASP Laboratory., Dec. 17, 1993, pp. 1-19. |
Malpica, N. et al., “Automated Nuclear Segmentation in Fluorescence Microscopy”, Science, Technology and Education of Microscopy: An Overview., Jan. 2005, pp. 614-621. |
Malpica, N., et al., “Applying Watershed Algorithms to the Segmentation of Clustered Nuclei”, Cytometry, vol. 25, No. 3., Nov. 1, 1996, pp. 221-234. |
Allers, Elke , et al., Single-Cell and Population Level Viral Infection Dynamics Revealed by PhageFISH, A Method to Visualize Intracellular and Free Viruses, Environmental Microbiology, 15(8), 2013, pp. 2306-2318. |
Bennett, Eric , et al., Video Enhancement Using Per-Pixel Virtual Exposure, The Univ. of North Carolina at Chapel Hill., 8 pages. |
Number | Date | Country | |
---|---|---|---|
20190116303 A1 | Apr 2019 | US |
Number | Date | Country | |
---|---|---|---|
61933426 | Jan 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15115715 | US | |
Child | 16218582 | US |