Embodiments of this disclosure relate to methods and apparatus for characterizing a specimen in an automated diagnostic analysis system.
Automated diagnostic analysis systems may conduct assays or clinical chemistry analyses using one or more reagents to identify an analyte or other constituent in a bio-liquid specimen (e.g., a serum or plasma portion of a centrifuged whole blood sample). Improvements in automated testing technology have been accompanied by corresponding advances in pre-analytical specimen preparation and handling operations such as sorting, batch preparation, centrifuging of specimen containers to separate specimen components, cap removal to facilitate fluid access, pre-screening such as for HILN (Hemolysis, Icterus, and/or Lipemia, or Normal) determination, and the like, by automated specimen preparation systems referred to as Laboratory Automation Systems (LASs). LASs may also automatically transport specimens in specimen containers to a number of specimen processing stations so various operations (e.g., pre-analytical and/or analytical testing) can be performed thereon.
The automated pre-screening may be performed by an automated machine-vision inspection apparatus. The HILN pre-screening involves automated detection of an interferent, such as H, I, and/or L, in a serum or plasma portion in a fractionated whole blood specimen (e.g., a centrifuged specimen). The pre-screening may also involve determining a volume of one or more constituents (e.g., of the serum or plasma portion or settled red blood cell portion), tube type and/or size of a specimen container, whether the tube is capped, the cap type, the status of labels on the specimen container, and other determinations.
In a first aspect, a method of identifying objects of a specimen container is provided. The method includes capturing one or more images of the specimen container, the one or more images including one or more objects of the specimen container and specimen, the capturing generating pixel data from a plurality of pixels; identifying one or more selected objects from the one or more objects using one or more neural networks; displaying an image of the specimen container; and displaying, on the image of the specimen container, one or more locations of pixels used by the one or more neural networks to identify the one or more selected objects.
In a second aspect, a quality check module is provided. The quality check module includes one or more image capture devices operative to capture one or more images from one or more viewpoints of a specimen container, wherein capturing one or more images generates pixel data from a plurality of pixels, and a computer coupled to the one or more image capture devices. The computer is configured and operable (capable of being operated) to: identify one or more selected objects from one or more objects of the specimen container using one or more neural networks; display an image of the specimen container; and display, on the image of the specimen container, one or more locations of pixels used by the one or more neural networks to identify the one or more selected objects.
In another aspect, a specimen testing apparatus is provided. The specimen testing apparatus includes a track; a carrier moveable on the track and configured to contain a specimen container containing a serum or plasma portion of a specimen therein; a plurality of image capture devices arranged around the track and operative to capture one or more images from one or more viewpoints of the specimen container and the serum or plasma portion of the specimen, wherein capturing one or more images generates pixel data including a plurality of pixels; and a computer coupled to the plurality of image capture devices. The computer is configured and operative to: identify one or more selected objects from one or more objects of the specimen container using one or more neural networks; display an image of the specimen container; and display, on the image of the specimen container, one or more locations of pixels used by the one or more neural networks to identify the one or more selected objects.
The drawings, described below, are for illustrative purposes and are not necessarily drawn to scale. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature, and not as restrictive. The drawings are not intended to limit the scope of the disclosure in any way.
Automated diagnostic analysis systems may conduct assays or clinical chemistry analyses using one or more reagents to identify an analyte or other constituent in a specimen such as urine, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, and the like. Such specimens are usually contained within specimen containers (e.g., specimen collection tubes). The testing reactions generate various changes that may be read and/or manipulated to determine a concentration of an analyte or other constituent present in the specimen.
For certain tests, a biological liquid such as a serum or plasma portion (obtained from whole blood by centrifugation) may be analyzed. When the specimen is whole blood, a gel separator may be added to the specimen container to aid in the separation of a settled blood portion from the serum or plasma portion. A void, such as a vacuum or air gap, may be located between the serum or plasma portion and a top of the specimen container. The specimen container may be a tube and may or may not be capped by a specific type of cap having a specific color or other cap identifier. In addition, the specimen container may be located in a specimen carrier that transports the specimen container to various locations, such as throughout the automated diagnostic analysis system.
During a pre-processing operation, the specimen container may be analyzed to determine certain properties of the specimen container and/or the specimen located therein. The pre-processing may include machine-vision inspection that uses optics and artificial intelligence to determine the properties of the specimen container and/or the specimen located therein. One or more image capture devices may capture one or more images of the specimen container and generate pixel data (e.g., image data) representative of an image of the specimen container. A computer receives the pixel data and may segment the image. The segmentation may involve classifying the pixels into one or more classes that may correspond to objects contained in the image, such as the specimen (e.g., serum or plasma portion, settled blood portion, gel separator), the specimen container (e.g., tube), and/or a cap located thereon. The classes of pixels may correspond to different objects of the specimen container. For example, a first class of pixels may be used by one or more neural networks to identify a cap of the specimen container and a second class of pixels may be used by one or more neural networks to identify a serum or plasma portion of the specimen.
A neural network (e.g., a trained neural network), such as a convolutional neural network (CNN) or other suitable neural network, may analyze the different classes of pixels to determine the properties of the specimen container and/or the specimen located therein. With regard to the examples described above, the neural network may analyze the first class of pixels and determine whether the specimen container is capped. If so, the neural network may further determine the color of the cap and/or the type of cap. The neural network may analyze the second class of pixels and determine that they are a serum or plasma portion of the specimen. The neural network may further analyze the second class of pixels to determine whether the serum or plasma portion contains hemolysis, icterus, and/or lipemia, or is normal. Based on the analysis of the pixel data, the machine-vision inspection apparatus can output certain information identifying the specimen and/or specimen container properties.
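By way of illustration, the following is a minimal Python/NumPy sketch of how per-pixel class labels produced by segmentation might be used to isolate an object region and derive a simple property from it. The class indices, array shapes, and the mask itself are hypothetical placeholders, not the actual output format of the apparatus described herein.

```python
import numpy as np

# Hypothetical class indices produced by a segmentation network.
CLASS_CAP = 1
CLASS_SERUM = 2

def mean_color_of_class(image_rgb: np.ndarray, class_mask: np.ndarray, class_id: int):
    """Return the average RGB color of all pixels assigned to `class_id`.

    image_rgb:  H x W x 3 array of pixel values.
    class_mask: H x W array of per-pixel class labels (one label per pixel).
    """
    selected = class_mask == class_id          # boolean mask of the object's pixels
    if not selected.any():
        return None                            # object not present in this image
    return image_rgb[selected].mean(axis=0)    # average color over the selected pixels

# Example: derive cap color and serum/plasma color from the same mask.
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # stand-in image
mask = np.zeros((480, 640), dtype=np.int64)
mask[:60, :] = CLASS_CAP                       # pretend the top rows are the cap
mask[200:400, 250:390] = CLASS_SERUM           # pretend this block is serum/plasma

cap_color = mean_color_of_class(image, mask, CLASS_CAP)
serum_color = mean_color_of_class(image, mask, CLASS_SERUM)
```

In practice, a property such as cap color or an HIL-related color index would be derived from the selected pixels by the trained network or downstream analysis rather than by a simple mean, but the masking step shown here is the same.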
A user, such as a laboratory technician, may receive the information generated by the machine-vision inspection apparatus. Conventional automated machine-vision inspection apparatus, however, are black boxes: they provide users only the determinations themselves and no criteria as to how those determinations were derived. For example, conventional machine-vision inspection apparatus output only the specimen and/or specimen container properties and not information as to the analysis used to derive those properties. Thus, the user may not be certain that the determinations reached by the machine-vision inspection apparatus are, in fact, correct.
In view of these deficiencies, the machine-vision inspection apparatus and methods disclosed herein provide information as to how specimen and/or specimen container properties were determined. Thus, a user may be certain that the machine-vision inspection apparatus analyzed a correct portion of an image when it determined the properties of the specimen and/or the specimen container. The machine-vision inspection apparatus and methods disclosed herein may provide visual information regarding the locations of the classes of pixels used to determine the properties of the specimen and/or the specimen container. For example, the machine-vision inspection apparatus may display an image at least partially representing the specimen container. The displayed image may include delineated regions that correspond to regions of identified objects in the captured image where the different classes of pixels are located. In some embodiments, the displayed image may include different colors of pixels, different intensities of pixels, or other suitable markings (e.g., dotted or colored borders, hatching, shading, and the like) corresponding to locations of pixels in the original image that were used by the apparatus to identify certain objects in the original image.
In some embodiments, the machine-vision inspection apparatus may generate an activation map, which is an image that may provide a score as to how important each pixel in an image was toward making a determination, such as identifying an object. In some embodiments, an activation map may provide locations of pixels used during identification. Activation maps may be referred to as saliency maps, learned feature visualizations, occlusion maps, and class activation maps. In some embodiments, the brightness of pixels represented in an activation map is dependent on the usefulness of the pixels in making the determination. For example, the brighter pixels in an activation map may be pixels providing more useful information and dimmer pixels may be pixels providing less useful information or no information. In some embodiments, an image of the specimen container overlaid with an activation map may be displayed for a user.
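As one hedged illustration, a gradient-based saliency map is a common way such a map can be computed (it is not necessarily the specific method of this disclosure). The sketch below assumes PyTorch and an already-trained classification network `model`; each input pixel is scored by the magnitude of the gradient of the predicted class score with respect to that pixel, so brighter values mark pixels that influenced the decision more.

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Gradient-based saliency map for a single image.

    image: tensor of shape (3, H, W), already normalized for `model`.
    Returns an (H, W) map where larger values mark more influential pixels.
    """
    model.eval()
    x = image.unsqueeze(0).detach().requires_grad_(True)   # (1, 3, H, W)
    scores = model(x)                                       # (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()                         # d(score)/d(pixels)
    # Take the largest absolute gradient across the color channels.
    return x.grad.detach().abs().max(dim=1).values[0]       # (H, W)
```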
Referring to the above examples, in some embodiments, the displayed image may include a first highlighted or otherwise delineated region indicating where the first class of pixels is located. In a first example, the first class of pixels is located in a cap region of the specimen container. Accordingly, a cap region of the displayed image may be delineated or otherwise distinguished to show that the first class of pixels is in the cap region and that the cap properties were derived from those particular pixels. Another displayed image may include a second delineated region indicating a physical location of a second class of pixels. In this example, the second class of pixels may be located in the serum or plasma region of the specimen. Accordingly, a serum or plasma portion of the displayed image may be delineated to show that the second class of pixels is in the serum or plasma region of the displayed image. Delineated portions of the image are portions that are readily distinguishable from other portions of the image by some feature highlighting that object. Delineating respective portions (e.g., a delineated object) may include coloring or darkening a portion of the image, such as by providing a darkened (e.g., bolded) or colored border; providing a fill of the portion of the image, such as a hatched or other fill pattern; displaying only that portion of the image; providing a unique color; and/or using unique shading, and the like. Delineation may be accomplished by superimposing any suitable graphic on the displayed image or by replacing the portion with a graphic. The delineated portions may flash or otherwise change in intensity in some embodiments.
In other embodiments, the delineated image may include the activation map overlaid onto the original image or onto a representation of the original image. Different pixel intensities may convey usefulness; for example, brighter pixels may indicate pixels that were more useful in making the determinations and/or identifications. In some embodiments, the brighter pixels, or regions of brighter pixels, constitute the delineated portions of the image.
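A minimal NumPy-only sketch of such an overlay follows. It assumes the activation map is an (H, W) array of non-negative scores (such as the saliency map sketched above) and simply alpha-blends it over the original image; the single-channel rendering and the choice of the red channel are illustrative, not prescribed by this disclosure.

```python
import numpy as np

def overlay_activation(image_rgb: np.ndarray, activation: np.ndarray, alpha: float = 0.5):
    """Blend an activation map over an RGB image for display.

    image_rgb:  H x W x 3 uint8 image.
    activation: H x W array of non-negative scores (larger = more influential).
    """
    # Normalize the activation map to 0..1 so it can be shown as brightness.
    scaled = activation - activation.min()
    if scaled.max() > 0:
        scaled = scaled / scaled.max()
    heat = np.zeros_like(image_rgb)
    heat[..., 0] = (scaled * 255).astype(np.uint8)   # render the map in the red channel
    blended = (1.0 - alpha) * image_rgb + alpha * heat
    return blended.astype(np.uint8)
```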
The delineated portion(s) of the displayed image provides the user with confidence that the machine-vision inspection apparatus has analyzed the correct portion of the image when it made a determination regarding a property and/or location of an object of, or contained in, the specimen container. For example, the displayed image may delineate the cap region of the specimen container when the machine-vision inspection apparatus provides information regarding the cap, such as the color thereof and possibly what that color denotes, and/or the cap type. The user is then confident that the machine-vision inspection apparatus did not analyze other regions, such as the serum or plasma region, air gap, or another geometric feature, when it identified the cap.
Further details of characterization and visual verification methods, apparatus, systems, and quality check modules configured to carry out the characterization and visual verification methods, as well as specimen testing apparatus including one or more quality check modules, are described herein with reference to the accompanying figures.
Reference is now made to the accompanying figures, in which a specimen testing apparatus 100 according to one or more embodiments is illustrated.
In more detail, the specimen testing apparatus 100 may include a base 120 (e.g., a frame, floor, or other structure) upon which a track 122 may be mounted or supported. The track 122 may be a railed track (e.g., a mono rail or a multiple rail), a collection of conveyor belts, conveyor chains, moveable platforms, or any other suitable type of conveyance mechanism. The track 122 may be circular or any other suitable shape and may be a closed track (e.g., endless track) in some embodiments. The track 122 may, in operation, transport individual ones of the specimen containers 104 to various locations spaced about the track 122 in carriers 124.
The carriers 124 may be passive, non-motored pucks configured to carry specimen containers 104 on the track 122 or, optionally, automated carriers including an onboard drive motor, such as a linear motor, programmed to move about the track 122 and stop at pre-programmed locations. Other configurations of the carriers 124 may be used. In some embodiments, the carriers 124 may leave from the loading area 110 after specimen containers 104 have been offloaded thereto from the one or more racks 108. The loading area 110 may serve the dual function of also allowing reloading of the specimen containers 104 from the carriers 124 to the loading area 110 after pre-screening and/or analysis is completed. Otherwise, the specimen containers 104 may be discarded.
A robot 126 may be provided at the loading area 110 and may be configured to grasp the specimen containers 104 from the one or more racks 108 and load the specimen containers 104 onto the carriers 124, such as onto an input lane of the track 122. The robot 126 may also be configured to reload specimen containers 104 from the carriers 124 to the one or more racks 108 or otherwise discard the specimen containers 104. The robot 126 may include one or more (e.g., at least two) robot arms or components capable of X (lateral) and Z (vertical, out of the page as shown) motion; X, Y, and Z motion; r (radial) and Z (vertical) motion; or theta (rotational) and Z (vertical) motion. The robot 126 may be a gantry robot, an articulated robot, an R-theta robot, a theta-Z robot, or other suitable robot, wherein the robot 126 may be equipped with robotic gripper fingers oriented, sized, and configured to pick up and place the specimen containers 104.
Upon being loaded onto the track 122, the specimen containers 104 carried by the carriers 124 may progress to one or more pre-processing modules, such as a pre-processing module 130. For example, the pre-processing module 130 may be an automated centrifuge configured to carry out fractionation of the specimen 106. Carriers 124 carrying specimen containers 104 may be diverted to the pre-processing module 130 by an inflow lane or by a suitable robot. After being centrifuged, the specimen containers 104 may exit on an outflow lane, or otherwise be removed by a robot, and continue along the track 122. In the depicted embodiment, the specimen containers 104 in the carriers 124 may next be transported to the quality check module 102 to carry out pre-screening, as is further described herein. Additional station(s) may be provided at one or more locations on or along the track 122. The additional station(s) may include a de-capping station, an aliquoting station, one or more additional pre-processing modules 130, one or more additional quality check modules 102, and the like.
The specimen testing apparatus 100 may include a plurality of sensors 132 at one or more locations around the track 122. The sensors 132 may be used to detect locations of specimen containers 104 on the track 122, such as by reading the identification information 234i provided on the label 134 of each specimen container 104.
The pre-processing stations and the analyzers 112, 114, and 116 may be equipped with robotic mechanisms and/or inflow lanes configured to remove the carriers 124 from the track 122, and with robotic mechanisms and/or outflow lanes configured to return the carriers 124 to the track 122.
The specimen testing apparatus 100 may be controlled by the computer 136, which may be a microprocessor-based central processing unit (CPU), having a suitable memory and suitable conditioning electronics and drivers for operating the various system components. The computer 136 may be housed as part of, or separate from, the base 120 of the specimen testing apparatus 100. The computer 136 may operate to control movement of the carriers 124 to and from the loading area 110, motion about the track 122, motion to and from the pre-processing module 130 as well as operation of the pre-processing module 130 (e.g., centrifuge), motion to and from the quality check module 102 as well as operation of the quality check module 102, and motion to and from each analyzer 112, 114, 116 as well as operation of each analyzer 112, 114, 116 for carrying out the various types of testing (e.g., assay or clinical chemistry). The computer 136 may also perform other functions, such as executing one or more neural networks as described herein. In some embodiments, separate computers may be associated with each of the components and they all may interface with one another through a local server and/or a communication link, such as an Ethernet.
Reference is now made to the figures illustrating a quality check module 102 according to one or more embodiments.
The quality check module 102 may include one or more image capture devices 142A, 142B, 142C. Three image capture devices 142A-142C are shown in the depicted embodiment, although more or fewer may be used.
Each of the image capture devices 142A-142C may be configured and operable to capture lateral images of at least a portion of the specimen container 104, and at least a portion of the specimen 106 located therein. The image capture devices 142A-142C may generate image data or pixel data representative of the captured images. In the embodiment shown, the plurality of image capture devices 142A-142C may be configured to capture lateral images of the specimen container 104 and/or the specimen 106 at an imaging location 144 from the multiple viewpoints 1-3. The viewpoints 1-3 may be arranged so that they are approximately equally spaced from one another, such as about 120° from one another, as shown. As depicted, the image capture devices 142A-142C may be arranged around the track 122 on which the specimen container 104 is transported. In this way, the images of the specimen 106 in the specimen container 104 may be captured while the specimen container 104 is residing in a carrier 124 at the imaging location 144. The field of view of the multiple images obtained by the image capture devices 142A-142C may overlap slightly in a circumferential extent. Other arrangements of the plurality of image capture devices 142A-142C may be used.
In one or more embodiments, the carrier 124 may be stopped at a pre-determined location in the quality check module 102, such as at the imaging location 144. At this location, normal vectors from each of the image capture devices 142A-142C intersect each other. A gate, or a linear motor (not shown) of the carrier 124, may be provided to stop the carrier 124 at the imaging location 144 so that multiple images may be captured thereat. In some embodiments, such as where there is a gate at the quality check module 102, one or more sensors 132 may be used to determine the presence of the carrier 124 at the quality check module 102.
In some embodiments, the quality check module 102 may include a housing 146 that may at least partially surround or cover the track 122 to minimize outside lighting influences. The specimen container 104 may be located inside the housing 146 during the image-capturing sequences. The housing 146 may include one or more openings and/or doors 146D to allow the carriers 124 to enter into and/or exit from the housing 146. In some embodiments, a ceiling of the housing 146 may include an opening 146O to allow a specimen container 104 to be loaded into the carrier 124 by a robot (not shown) from above, such as when the quality check module 102 is located off the track 122.
The image capture devices 142A-142C may be provided in close proximity to and trained or focused to capture an image window at the imaging location 144, wherein the image window is an area including an expected location of the specimen container 104. Thus, the specimen container 104 may be stopped so that it is approximately located in a center of the image window in some embodiments, prior to image capture. For example, the image capture devices 142A-142C may capture a part or all of a label 134 affixed to the specimen container 104 and part or all of the specimen 106 located therein. In some instances, part of at least one of the viewpoints 1-3 may be partially occluded by the label 134. In some instances, one or more of the viewpoints 1-3 may be fully occluded, i.e., no clear view of the serum or plasma portion 206SP may be available from that viewpoint.
In operation of the quality check module 102, each image may be captured in response to a triggering signal provided in communication lines 148A, 148B, 148C that may be transmitted by the computer 136. Each of the captured images may be processed by the computer 136 according to one or more embodiments described herein. The computer 136 may be coupled to a display 150 that may display images, including computer-generated images of the specimen container 104 and the specimen 106. In some embodiments, high dynamic range (HDR) processing may be used to capture and process the image data from the captured images. For example, multiple images of the specimen 106 may be captured at the quality check module 102 at multiple different exposures (e.g., at different exposure times), while being sequentially illuminated at one or more different spectra. For example, each image capture device 142A-142C may capture 4-8 images of the specimen container 104, including a serum or plasma portion 106SP of the specimen 106, at different exposure times while the specimen container 104 is illuminated at each of the different spectra.
In some embodiments, capturing the multiple spectral images may be accomplished using different light sources 152A, 152B, and 152C emitting different spectral illumination. The light sources 152A-152C may back light the specimen container 104 (as shown). A light diffuser (not shown) may be used in conjunction with the light sources 152A-152C in some embodiments. The multiple different spectra light sources 152A-152C may be red, green, blue (RGB) light sources, such as light-emitting diodes (LEDs) emitting nominal wavelengths of 634 nm+/−35 nm (red), 537 nm+/−35 nm (green), and 455 nm+/−35 nm (blue). In other embodiments, the light sources 152A-152C may be white light sources. In cases where the label 134 obscures multiple viewpoints, infrared (IR) backlighting or near infrared (NIR) backlighting may be used. Furthermore, RGB light sources may be used in some instances even when label occlusion is present. In other embodiments, the light sources 152A-152C may emit one or more spectra having a nominal wavelength between about 700 nm and about 1200 nm.
By way of a non-limiting example, to capture images at a first wavelength, three light sources 152A-152C emitting red light (wavelength of about 634 nm+/−35 nm) may be used to sequentially illuminate the specimen 106 from three lateral locations. The red illumination by the light sources 152A-152C may occur as the multiple images (e.g., 4-8 images or more) at different exposure times are captured by each image capture device 142A-142C from each of the three viewpoints 1-3. In some embodiments, the exposure times may be between about 0.1 ms and 256 ms. Other exposure times may be used. In some embodiments, each of the respective images for each of the image capture devices 142A-142C may be captured sequentially, for example. Thus, for each viewpoint 1-3, a group of images may be sequentially captured that have red spectral backlit illumination and multiple exposures (e.g., 4-8 exposures) at different exposure times. The images may be captured in a round robin fashion, for example, where all images from viewpoint 1 are captured followed sequentially by viewpoints 2 and 3.
Once the red backlit illuminated images are captured, the process may be repeated for each of the other illumination spectra employed (e.g., green and blue, and optionally white, IR, and/or NIR), with the corresponding light sources 152A-152C sequentially illuminating the specimen 106 while multiple exposure images are captured by each image capture device 142A-142C from each of the viewpoints 1-3.
In some embodiments, the multiple images captured at multiple exposures (e.g., exposure times) for each respective wavelength spectrum may be captured in rapid succession, such that the entire collection of backlit images for the specimen container 104 and specimen 106 from the multiple viewpoints 1-3 may be captured in less than a few seconds, for example. In some embodiments, four different exposure images for each wavelength at the three viewpoints 1-3 using the image capture devices 142A-142C and back lighting with light sources 152A-152C, which may be RGB light sources, results in 4 images × 3 spectra × 3 image capture devices = 36 images. In another embodiment, four different exposure images for each wavelength at the three viewpoints 1-3 using the image capture devices 142A-142C and back lighting with light sources 152A-152C, which may be R, G, B, W, IR, and NIR light sources, results in 4 images × 6 spectra × 3 image capture devices = 72 images. Other numbers of images may be captured.
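The capture sequence described above can be outlined as nested loops over viewpoints, spectra, and exposure times. The following Python sketch uses entirely hypothetical `capture()` and `on()`/`off()` interfaces (stand-in mock classes are included so the sketch runs; no real camera or LED library is implied), simply to show how the image count, e.g., 4 exposures × 3 spectra × 3 cameras = 36 images, arises.

```python
import numpy as np

class MockCamera:
    """Stand-in for one image capture device (purely illustrative)."""
    def capture(self, exposure_ms: float) -> np.ndarray:
        return np.zeros((480, 640), dtype=np.uint16)   # placeholder image

class MockLight:
    """Stand-in for one spectral backlight panel (purely illustrative)."""
    def on(self): pass
    def off(self): pass

EXPOSURES_MS = [0.5, 2, 8, 32]             # example exposure times within ~0.1-256 ms
SPECTRA = ["red", "green", "blue"]         # could also include white, IR, NIR

def capture_all(cameras, lights):
    """Round-robin capture: every exposure at every spectrum from every viewpoint."""
    images = {}
    for viewpoint, camera in enumerate(cameras):
        for spectrum in SPECTRA:
            for panel in lights[spectrum]:
                panel.on()                 # back light the container at this spectrum
            for exposure in EXPOSURES_MS:
                images[(viewpoint, spectrum, exposure)] = camera.capture(exposure)
            for panel in lights[spectrum]:
                panel.off()
    return images

cameras = [MockCamera() for _ in range(3)]                        # viewpoints 1-3
lights = {s: [MockLight() for _ in range(3)] for s in SPECTRA}    # one panel set per spectrum
captured = capture_all(cameras, lights)    # 3 cameras x 3 spectra x 4 exposures = 36 images
```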
According to embodiments of the characterization and visual verification methods, the processing of the image data or pixel data by the computer 136 may involve image pre-processing including, for example, selection of optimally-exposed pixels from the multiple captured images at the different exposure times at each wavelength spectrum and for each image capture device 142A-142C, so as to generate optimally-exposed pixel data for each spectrum and for each viewpoint 1-3.
For each corresponding pixel (or patch of pixels) from each of the images captured by each of the image capture devices 142A-142C, pixels (or patches of pixels) exhibiting optimal image intensity may be selected from each of the different exposure images for each viewpoint 1-3. In some embodiments, optimal image intensity may be determined by the number of pixels (or patches of pixels) that fall within a predetermined range of intensities, such as between 180 and 254 on a scale of 0-255, for example. In another embodiment, optimal image intensity may be between 16 and 254 on a scale of 0-255, for example. If more than one pixel (or patch of pixels) at the corresponding pixel (or patch of pixels) location of two exposure images is determined to be optimally exposed, the pixel (or patch of pixels) from the image with the higher overall intensity may be selected.
The selected pixels (or patches of pixels) exhibiting optimal image intensity may be normalized by their respective exposure times. The result is a plurality of normalized and consolidated spectral image data sets for the illumination spectra (e.g., R, G, B, white light, IR, and/or NIR—depending on the combination used) and for each image capture device 142A-142C where all of the pixels (or patches of pixels) are optimally exposed (e.g., one image data set per spectrum) and normalized. In other words, for each viewpoint 1-3, the data pre-processing carried out by the computer 136 may result in a plurality of optimally-exposed and normalized image data sets, one for each illumination spectra employed.
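A hedged NumPy sketch of the consolidation just described follows, operating at the pixel level (patch-based selection would be analogous). It assumes the per-exposure images for one viewpoint and one spectrum are stacked into a single array along with their exposure times; each pixel is taken from the exposure that falls inside the optimal intensity range (preferring the brightest in-range value) and is then normalized by that exposure time. The intensity bounds are the example values given above.

```python
import numpy as np

def consolidate(images: np.ndarray, exposure_ms: np.ndarray,
                lo: int = 16, hi: int = 254) -> np.ndarray:
    """Select optimally-exposed pixels across exposures and normalize them.

    images:      E x H x W array (E exposures of one viewpoint/spectrum, values 0-255).
    exposure_ms: length-E array of exposure times.
    Returns an H x W float array of exposure-normalized intensities.
    """
    in_range = (images >= lo) & (images <= hi)     # optimally exposed pixels
    # Among in-range exposures, prefer the one with the highest intensity.
    candidate = np.where(in_range, images, -1)     # out-of-range pixels cannot win
    best = candidate.argmax(axis=0)                # H x W index of the chosen exposure
    # (If no exposure is in range for a pixel, this sketch falls back to exposure 0.)
    rows, cols = np.indices(best.shape)
    chosen = images[best, rows, cols].astype(float)
    return chosen / exposure_ms[best]              # normalize by exposure time
```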
Processing of the pixel data may further involve segmenting the pixelated images. For example, segmenting may involve classifying pixels within the pixel data as representing different objects within an image. The segmentation process may generate segmentation data used in a post processing step to quantify aspects of the specimen container 104 and/or the specimen 106, i.e., determine certain physical dimensional characteristics of the specimen container and/or the specimen 106, or constituents in the specimen 106, such as HIL, as described herein.
Additional reference is now made to the figures illustrating the specimen container 104 and the specimen 106 contained therein.
The physical characteristics at least partially determined by segmentation may include the location of the top (TC) of the specimen container 104, the height (HT) of the specimen container 104, the width (W) of the specimen container 104, the interior width (Wi) of the specimen container 104, and the thickness (Tw) of the wall of the tube 138. In addition, the segmentation data may provide the location of the liquid-air interface (LA), the total height (HTOT) of the specimen 106, and an upper location (GU) and a lower location (GL) of the gel separator 252. The difference between the upper location (GU) and the lower location (GL) provides the height (HG) of the gel separator 252. The characteristics may further include the height (HSP) of the serum or plasma portion 206SP and the height (HSB) of the settled blood portion 206SB. The segmentation may also provide the size and locations of the cap 140, the air gap 254, the serum or plasma portion 206SP, the gel separator 252, the settled blood portion 206SB, the specimen container 104, and the carrier 124. Segmentation may also include estimating a volume of the serum or plasma portion 206SP and/or a volume of the settled blood portion 206SB, and possibly a ratio therebetween. Other quantifiable geometrical features and locations of other objects may also be determined, such as the color of various components, such as the cap 140, or the type of cap 140.
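Since the segmentation yields the interior width (Wi) and the component heights, a rough volume estimate follows from treating the tube interior as a cylinder, V = pi * (Wi/2)^2 * H. The Python sketch below is only an approximation under that assumption; it ignores tube taper, the rounded tube bottom, and any gel separator geometry, and the numeric values are illustrative.

```python
import math

def cylinder_volume_ul(height_mm: float, inner_width_mm: float) -> float:
    """Approximate liquid volume (microliters) of a column of the given height
    inside a tube of interior width Wi, treating the interior as a cylinder.
    1 mm^3 == 1 microliter.
    """
    radius_mm = inner_width_mm / 2.0
    return math.pi * radius_mm ** 2 * height_mm

# Example: serum/plasma height HSP = 12 mm in a tube with Wi = 11 mm.
vol_serum_ul = cylinder_volume_ul(12.0, 11.0)     # about 1140 uL
# Example ratio of serum/plasma to settled blood volumes (HSB = 20 mm here).
ratio = cylinder_volume_ul(12.0, 11.0) / cylinder_volume_ul(20.0, 11.0)   # 0.6
```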
In some embodiments, the specimen container 104 may be provided with the label 134, which may include identification information 234i (i.e., indicia) thereon, such as a barcode, alphabetic characters, numeric characters, or combinations thereof. The identification information 234i may be machine-readable at various locations, including at the quality check module 102.
The above-described identification information 234i may be provided on the label 134, which may be adhered to or otherwise provided on an outside surface of the specimen container 104. As shown in the figures, the label 134 may occlude a portion of one or more of the viewpoints 1-3.
Additional reference is made to the figures illustrating the image capture operation of the quality check module 102.
In more detail, when the specimen container 104 and/or the carrier 124 is illuminated, the computer 136 may generate and transmit signals via the communication lines 148A-148C to the image capture devices 142A-142C, causing the image capture devices 142A-142C to capture images of the specimen container 104 and/or the carrier 124. The signals generated and transmitted by the computer 136 to the image capture devices 142A-142C may cause individual image capture devices 142A-142C to capture images of the specimen container 104 and/or the carrier 124. In other embodiments, the signals generated and transmitted by the computer 136 may cause the image capture devices 142A-142C to simultaneously or sequentially capture images of the specimen container 104 and/or the carrier 124. When the image capture devices 142A-142C capture the images, the image capture devices 142A-142C generate the pixel data (otherwise referred to as “image data”) representative of the captured images, wherein pixel values refer to numerical values (e.g., intensity and/or wavelength values) of individual pixels in the pixel data. Sometimes, the term “pixel” refers to a pixel value in the pixel data. As shown in the referenced figures, the computer 136 receives the pixel data from the image capture devices 142A-142C for processing.
The computer 136 may process the pixel data by using programs executing on the computer 136. In functional block 364, neural networks executing on the computer 136 may segment one or more images captured by the image capture devices 142A-142C into one or more objects as described in greater detail below. The one or more neural networks can comprise convolutional neural networks (CNNs), segmentation convolutional neural networks (SCNNs), deep semantic segmentation networks (DSSN), and other like segmentation neural networks. For example, the segmenting programs executing on the computer 136 may process or segment the pixel data to identify classes of pixels. Classes of pixels are pixels that have the same or similar characteristics. For example, a class of pixels may have pixel values of the same or similar wavelengths, intensities, and/or regional locations. Different classes of pixels may be located in different regions of an image corresponding to different objects constituting the specimen container 104 and/or the carrier 124. Individual classes of pixels of a single object may have similar colors and may be located proximate one another. For example, classes of pixels representative of the cap 140 may be in locations of the cap 140 and may all have close to the same color (e.g., wavelength of light). Likewise, classes of pixels representative of the specimen 106 may have certain colors and be located proximate one another within a region defined as the specimen container.
In some embodiments, each pixel in an image is assigned a class, such as by any suitable classification process. For example, pixels of a first wavelength may be assigned to a first class (e.g., the cap) and pixels of a second wavelength may be assigned to a second class (e.g., the serum or plasma portion). Other criteria, such as intensity, may be used to assign pixels to different classes.
The segmentation and identification may be performed by neural networks (e.g., trained neural networks) executing on the computer 136.
To overcome appearance differences that may be caused by variations in specimen container type (e.g., size and/or shape), the SCNN may include a small container segmentation network (SCN) at the front end of the DSSN. The SCN may be configured and operative to determine a container type and a container boundary. The container type and container boundary information may be input via an additional input channel to the DSSN and, in some embodiments, the SCNN may provide, as an output, the determined container type and boundary. In some embodiments, the SCN may have a similar network structure as the DSSN, but shallower (i.e., with far fewer layers).
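A minimal PyTorch sketch of this arrangement is given below, assuming hypothetical layer counts and channel widths (they are illustrative only and are not the actual networks of this disclosure): a shallow container-segmentation network (SCN) predicts a per-pixel container map, which is concatenated as an additional input channel to a deeper semantic segmentation network (DSSN) that outputs per-pixel class scores.

```python
import torch
import torch.nn as nn

class SCN(nn.Module):
    """Shallow front-end: predicts a per-pixel container/boundary map."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),   # 1-channel container map
        )
    def forward(self, x):
        return self.net(x)

class DSSN(nn.Module):
    """Deeper segmentation network taking the image plus the container map."""
    def __init__(self, in_ch: int = 4, num_classes: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),                   # per-pixel class scores
        )
    def forward(self, x):
        return self.net(x)

class SCNN(nn.Module):
    """SCN front-end feeding the DSSN via an additional input channel."""
    def __init__(self):
        super().__init__()
        self.scn, self.dssn = SCN(), DSSN()
    def forward(self, image):                                 # image: (N, 3, H, W)
        container_map = self.scn(image)                       # (N, 1, H, W)
        return self.dssn(torch.cat([image, container_map], dim=1)), container_map

# Example: 7 output classes could correspond to serum/plasma, settled blood,
# tube, label, cap, gel separator, and air gap.
logits, container = SCNN()(torch.randn(1, 3, 128, 128))       # logits: (1, 7, 128, 128)
```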
Additional reference is made to the figures illustrating segmented images of the specimen container 104.
In the embodiments of these figures, the captured images have been segmented into one or more objects of the specimen container 104 and the specimen 106.
The identification process may be performed by neural networks, such as convolutional neural networks (CNNs) and other programs executing in the computer 136, which are described above. As an example, the programs executing in the computer 136 may identify the class or classes of pixels constituting the segmented cap 440 as the cap 140 of the specimen container 104.
In some embodiments, the programs executing on the computer 136 may identify the class(es) of pixels constituting the segmented serum or plasma portion 406SP as the serum or plasma portion 206SP of the specimen 106.
The computer 136 may output information to a user regarding identification of the objects. For example, the computer 136 may output information to a user indicating whether the specimen container 104 includes a cap 140, the color of the cap 140, and the type of cap. Likewise, the computer 136 may output information indicating whether the serum or plasma portion 206SP contains hemolysis, icterus, and/or lipemia, or is normal.
In order to improve the confidence of the identifications made by the computer 136 and programs executing therein, the computer 136 may generate signals to display an image of the specimen container 104 on the display 150 with delineated locations constituting selected objects of the specimen container 104, as described in functional block 368.
Additional reference is made to the figures illustrating displayed images of the specimen container 104 having delineated objects.
In one embodiment of a displayed image, a region of the displayed specimen container is delineated to indicate the location of the pixels used by the one or more neural networks to identify a selected object.
In the example described above, the programs executing on the computer 136 have identified the cap 140, so the cap region of the displayed image may be delineated to show the location of the pixels used by the one or more neural networks in that identification.
The displayed specimen container 504B of another embodiment may include one or more other delineated objects, such as a delineated serum or plasma portion 506SP.
In another embodiment, the displayed image may include the activation map overlaid onto the image of the specimen container 104 as described above.
Reference is now made to further display embodiments.
In some embodiments, one or more selected objects are displayed as a confidence gradient (or level). For example, the identification process may be based on voting or other criteria when an object is identified. The confidence gradient may be incorporated into the displayed objects to indicate the degree of confidence. For example, if the neural networks executing on the computer 136 identify an object with lower confidence, the corresponding delineated region may be displayed with a lower intensity, a different color, or another marking indicating that lower confidence.
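A hedged NumPy sketch of one way such a confidence gradient could be rendered is shown below. It assumes the segmentation network provides per-pixel class probabilities (e.g., softmax outputs); the delineation brightness then tracks the confidence with which each pixel was assigned to the selected object. The green-channel rendering and blending factor are illustrative choices only.

```python
import numpy as np

def confidence_overlay(image_rgb: np.ndarray, probs: np.ndarray, class_id: int,
                       alpha: float = 0.6) -> np.ndarray:
    """Delineate one object with brightness proportional to classification confidence.

    image_rgb: H x W x 3 uint8 image.
    probs:     C x H x W per-pixel class probabilities (e.g., softmax outputs).
    class_id:  index of the selected object's class.
    """
    confidence = probs[class_id]                          # H x W, values in [0, 1]
    is_object = probs.argmax(axis=0) == class_id          # pixels assigned to this class
    highlight = np.zeros_like(image_rgb, dtype=float)
    highlight[..., 1] = 255.0 * confidence * is_object    # green channel scaled by confidence
    out = (1.0 - alpha) * image_rgb + alpha * highlight
    return out.astype(np.uint8)
```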
In some embodiments, one or more images displayed on the display 150 include overlaying images representing the one or more objects over an image at least partially representing the specimen container 104. In such embodiments, the objects are displayed at the locations of their respective pixels relative to the image at least partially representing a best view of the specimen container 104. For example, the best view may include the most exposed area of the serum or plasma portion 206SP.
A simple apparatus and functional flow chart of a segmentation method 660 according to one or more embodiments is shown in the accompanying figures.
After image capture, and optional background reduction in 664, segmentation may be undertaken in 668 by computer 136. The segmentation in 668 may include an image consolidation process that is undertaken in 670. During the image consolidation process in 670, the various exposure time images at each color spectra (R, G, B, white light, NIR and/or IR) and for each image capture device 142A-142C may be reviewed pixel by pixel to determine those pixels that have been optimally exposed. For each corresponding pixel location, the best of any optimally-exposed pixel is selected and included in an optimally-exposed image data set. Thus, following image consolidation in 670, there may be produced one optimally-exposed image data set for each spectrum and for each image capture device 142A-142C.
Following image consolidation in 670, or possibly concurrently therewith, a statistics generation process may be undertaken in 672, where statistics are generated for each pixel, such as a mean and/or a covariance matrix. These statistical data on the optimally-exposed data sets are then operated on by a multi-class classifier 674 to provide identification of the pixel classes present in the images in 676. The final class for each pixel may be determined by maximizing confidence values for each pixel. For each pixel location, a statistical description may be extracted from a patch surrounding the pixel (e.g., a small super-pixel patch of, e.g., 11×11 pixels). Each super-pixel patch may provide a descriptor, which is considered in a training and evaluation process. The classifiers may operate on feature descriptors and use class labels for training, and output class labels during testing/evaluation.
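A small NumPy-only sketch of the statistics step follows, assuming a multi-channel consolidated image: for each pixel a descriptor is built from the mean and covariance of the 11×11 patch around it, and a toy nearest-class-mean rule stands in for the multi-class classifier (the actual classifier used in this disclosure is not specified here; the class means are illustrative placeholders that would normally come from training).

```python
import numpy as np

def patch_descriptor(image: np.ndarray, row: int, col: int, half: int = 5) -> np.ndarray:
    """Mean plus flattened covariance of the (2*half+1)^2 patch around (row, col).

    image: H x W x C consolidated image (C spectral channels).
    """
    patch = image[max(row - half, 0): row + half + 1,
                  max(col - half, 0): col + half + 1].reshape(-1, image.shape[2])
    mean = patch.mean(axis=0)
    cov = np.cov(patch, rowvar=False).reshape(-1)          # C x C covariance, flattened
    return np.concatenate([mean, cov])

def nearest_class(descriptor: np.ndarray, class_means: dict) -> str:
    """Toy stand-in classifier: pick the class whose mean descriptor is closest."""
    return min(class_means, key=lambda k: np.linalg.norm(descriptor - class_means[k]))

image = np.random.rand(100, 80, 3)                          # stand-in consolidated image
desc = patch_descriptor(image, 50, 40)
class_means = {"serum_or_plasma": desc * 0.9, "cap": desc * 1.5}   # illustrative only
label = nearest_class(desc, class_means)
```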
From the segmentation process of 668, each pixel in a consolidated image for each of the image capture devices 142A-142C is given a classification as one of a plurality of class types in 676. Class types may be liquid (serum or plasma portion 106SP), settled blood portion 106SB, tube 138, label 134, cap 140, gel separator 252, air gap 254A, for example. From this segmentation information, the objects associated with the above-described classes may be identified in 678. This may be performed by collecting together all the pixels (or pixel patches) of the same classification. In 679, the selected object can be displayed on display 150. For example, the selected objects can be displayed by user selection, or by displaying the objects in a sequence, such as a predetermined or selected sequence. For example, the delineated objects can be displayed in the order of: delineated cap 540, delineated label 534, delineated serum or plasma portion 506SP, delineated gel separator 552, delineated settled blood portion 506SB, delineated air gap 554, or even delineated carrier 524, or any other suitable order or subset thereof.
After the segmentation in 668, different objects and characteristics of the specimen 106 and/or the specimen container 104 may be quantified or otherwise characterized, as described below.
Liquid quantification may be carried out in 712 following segmentation in 668. Liquid quantification in 712 may involve the determination of certain physical and/or dimensional characteristics of the specimen 106, such as the heights and/or volumes of the serum or plasma portion 206SP and the settled blood portion 206SB described above.
The results of the segmentation in 668 can also be used to identify the label 134 and the location of the label 134 on the specimen container 104.
The characterization of the cap 140 (e.g., the presence, color, and/or type of the cap 140) may also be carried out following the segmentation in 668.
Additional reference is now made to the remaining figures, which illustrate further aspects of the characterization and visual verification methods described herein.
Although the disclosure is described herein with reference to specific embodiments, the disclosure is not intended to be limited to the details described. Rather, various modifications may be made in the details within the scope and range of equivalents of this disclosure without departing from the disclosure.
This application claims priority to U.S. Provisional Patent Application No. 62/733,972, filed Sep. 20, 2018, and titled “MACHINE LEARNING FOR IMAGE ANALYSIS VISUALIZATION TOOL,” which is hereby incorporated by reference herein in its entirety for all purposes.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2019/052011 | 9/19/2019 | WO | 00

Number | Date | Country
---|---|---
62/733,972 | Sep 2018 | US