The invention relates to a method and a laboratory system, in particular each, for determining at least one container information about a laboratory sample container.
US 2019/0108396 A1 discloses a method for identifying and tracking objects including: capturing one or more 3-D models of one or more objects in a scene using a three-dimensional (3-D) scanning system, the one or more 3-D models including color and geometry information of the one or more objects; computing, by an analysis agent, one or more descriptors of the one or more 3-D models, each descriptor corresponding to a fixed-length feature vector; and retrieving metadata identifying the one or more objects based on the one or more descriptors.
US 2016/0018427 A1 discloses that container identification data from a container inspection unit that analyzes a container containing a liquid is combined with liquid level detection raw data from a liquid level detection unit that analyzes the container containing the liquid, and a liquid level detection result is generated. The liquid level detection result is cross-checked with additional data from the container inspection unit. The result can be used to plan a route for the container in the laboratory automation system.
JP 2019504997 A discloses methods and apparatus adapted to quantify a specimen from multiple lateral views.
It is the object of the invention to provide a method and a laboratory system, in particular each, for determining at least one container information about a laboratory sample container, in particular having improved properties.
This object is achieved by a method and a laboratory system as defined in the independent claims. Preferred embodiments are defined in the dependent claims.
The invention relates to a method for determining at least one container information about a laboratory sample container, wherein the method comprises the steps:
a) acquiring an image comprising a brightness and/or color information of a possible region of the container;
b) acquiring a map comprising a depth information of the region; and
c) determining the container information by fusing the brightness and/or color information and the depth information.
This, in particular the fusing of the brightness and/or color information and the depth information, enables determining the container information, in particular and thus inspecting the container, accurately and/or completely and/or reliably and/or with less uncertainty, in particular than would be possible if only one such information were used individually.
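Purely as an illustration of how steps a) to c) could be realized in software, the following minimal sketch fuses a color image and a depth map into one feature vector; all function and variable names are hypothetical and not part of the claims.

```python
import numpy as np

def determine_container_info(image: np.ndarray, depth_map: np.ndarray) -> dict:
    """Hypothetical sketch of steps a) to c): the image (H x W x 3, RGB)
    and the depth map (H x W, one distance per pixel) of the possible
    region of the container are fused into one container information."""
    # Extract brightness/color information from the image as simple features.
    color_features = image.mean(axis=(0, 1))  # mean R, G, B
    # Extract depth information from the map as simple features.
    depth_features = np.array([depth_map.min(),
                               depth_map.max(),
                               depth_map.mean()])
    # Fuse both kinds of information; a real system could classify each
    # part first and fuse the classifications (see the sketches below).
    fused = np.concatenate([color_features, depth_features])
    return {"fused_features": fused}
```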
In particular the method, the determining, the acquiring and/or the fusing may be performed automatically.
The phrase “container property of” may be used synonymously for the phrase “container information about”. In particular the property may be a physical property.
The, in particular respective, information may comprise a content and/or a value.
The term “having” may be used synonymously for the term “comprising”.
The container may be made of glass or, in particular transparent and/or colorless, plastic or any other, in particular somewhat, solid material. Additionally or alternatively the container may be configured as a tube.
The term “capturing” may be used synonymously for the term “acquiring”.
The image and the map may be acquired from a, in particular same or identical, viewpoint. Additionally or alternatively the image and/or the map may be digital.
The image may be a brightness and/or color image. Additionally or alternatively the image may be two-dimensional.
The term “representing” or “containing” may be used synonymously for the term “comprising”.
The brightness information may be a brightness information of one or more, in particular given, colors.
The term “data” may be used synonymously for the term “information”.
The container may be present in the possible region. Additionally or alternatively a container carrier for carrying, in particular for transporting, the container may be present in the possible region. In particular, an empty carrier may be present in the possible region in which the container could also be expected. Additionally or alternatively the carrier may have a white color.
The map may be a depth map. In particular the phrase “depth profile” may be used synonymously for the term “depth map”. Additionally or alternatively the map may be three-dimensional or have two dimensions and one additional, third dimension, wherein the third dimension may relate to a height over a plane defined by the other two dimensions, or indicate how an upper surface of a specific region changes height in two dimensions, respectively.
The depth information may relate to a, in particular perpendicular, distance of at least one surface of at least one, in particular scene, object from a, in particular the, viewpoint.
The phrase “distance information” may be used synonymously for the phrase “depth information”.
The depth information may be different and/or independent and/or complementary and/or correlated to the brightness and/or color information.
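As a minimal sketch of the data structure described above (two lateral dimensions plus a height as the third dimension), the depth map can be held as a small array; the values here are invented for illustration only.

```python
import numpy as np

# A minimal, hypothetical depth map: two lateral dimensions (x and y) plus
# a third dimension stored in each entry, here the height in mm of the
# upper surface over the x-y plane (0.0 = background level).
depth_map = np.array([
    [0.0,  0.0, 0.0],
    [0.0, 75.0, 0.0],   # one raised surface, e.g. the top of a capped tube
    [0.0,  0.0, 0.0],
])
print(depth_map.max())  # 75.0 -> the tallest surface in the region
```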
The term “combining” may be used synonymously for the term “fusing”.
Step b) may be performed before, simultaneously with and/or after step a).
Step c) may be performed after step a) and step b).
The method, in particular step a), step b) and/or step c), may be performed again and/or repeatedly, in particular multiple times.
In particular step b) may comprise non-contact measuring of the region. In particular the non-contact measuring may use ultrasound.
The image and the map are acquired in top view or from a top or upper, respectively, side or above, respectively. In other words: the light may be received from below, i.e., from the scene underneath the acquiring device. This enables acquiring the image comprising meaningful brightness and/or color information and/or the map comprising meaningful depth information, in particular and thus determining meaningful container information. In particular the term “important” or “relevant” may be used synonymously for the term “meaningful”.
The brightness and/or color information and/or the depth information are/is about at least one boundary, in particular a brightness, color and/or depth boundary, between two spatial regions occupied by different matter. This enables extracting the brightness and/or color information from the image and/or the depth information from the map in the form of features formed by the at least one boundary. In particular the term “edge” may be used synonymously for the term “boundary”. Additionally or alternatively the phrase “spatial phases” may be used synonymously for the phrase “spatial regions”. Further additionally or alternatively the term “material” may be used synonymously for the term “matter”.
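A sketch of how such boundaries could be extracted as features, assuming a simple gradient-magnitude threshold (the function name and threshold are illustrative, not from the application):

```python
import numpy as np

def extract_boundaries(channel: np.ndarray, threshold: float) -> np.ndarray:
    """Hypothetical sketch: mark pixels at which a brightness, color or
    depth boundary (edge) between two spatial regions is likely. Works on
    any single 2-D channel, e.g. a grayscale image or a depth map."""
    gy, gx = np.gradient(channel.astype(float))  # local change per axis
    magnitude = np.hypot(gx, gy)                 # strength of the change
    return magnitude > threshold                 # True where matter likely changes
```

Applied to a depth map, this marks, for example, the rim of a carrier or the wall of a tube; applied to a color channel, it marks color transitions such as the edge of a cap.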
The container information is about, in particular is, an absence or a presence of the container in the region, a position of the container, in particular in the region, a presence or an absence of a cap on, in particular on top of, the container, an absence or a presence of a laboratory sample in or contained by, respectively, the container, and/or a level of the sample in the container. This enables handling the container. In particular step a) may comprise: acquiring the image of the carrier, if present, the container, if present, the cap, if present, and/or the sample, if present. Additionally or alternatively step b) may comprise: acquiring the map of the carrier, if present, the container, if present, the cap, if present, and/or the sample, if present. Further additionally or alternatively the term “location” or “place” may be used synonymously for the term “position”. Further additionally or alternatively the container may have an opening, in particular at a top or upper, respectively, end. In particular the opening may be defined by an end of a wall and/or a circumference of the container. Additionally or alternatively the container or its opening, respectively, may be open or closed, in particular by the cap. Further additionally or alternatively the cap may comprise rubber and/or plastic or may completely consist of rubber and/or plastic. Further additionally or alternatively the cap may have a blue color. Further additionally or alternatively the cap may be embodied as a lid, in particular as a rigid lid, or as a foil, in particular a flexible foil. Further additionally or alternatively the sample may be a liquid sample and/or a blood sample or a urine sample. In particular the blood sample may have a red color and/or the urine sample may have a yellow color. Further additionally or alternatively, typically the sample may not fill the container completely, in particular not up to the opening. Further additionally or alternatively the map may have two dimensions and one additional, third dimension, wherein the third dimension may relate to a height parallel to a longitudinal axis of the container over two directions, which may be perpendicular to the axis and/or parallel to a plane defined by the opening. In particular the term “vertical” may be used synonymously for the term “longitudinal”. Further additionally or alternatively, for the different cases the image and the map, in particular each, may be distinctly different.
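The container information enumerated in this paragraph can be pictured as a small record; the following dataclass is a hypothetical illustration of such a record, not a structure defined by the application.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ContainerInfo:
    """Hypothetical record of the container information listed above."""
    container_present: bool
    position: Optional[Tuple[float, float]] = None  # x, y in the region
    cap_present: Optional[bool] = None              # only set if a container is present
    sample_present: Optional[bool] = None
    sample_level: Optional[float] = None            # filling level, e.g. in mm
```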
The method comprises the step:
d) transporting or moving, respectively, gripping, decapping, capping, filling and/or defilling the container based on or in dependence on, respectively, the determined container information.
This, in particular the fusing of the brightness and/or color information and the depth information, enables handling the container accurately and/or reliably and/or with less uncertainty, in particular than would be possible if only one such information were used individually.
In particular the handling may be performed automatically.
According to an embodiment of the invention, step a) comprises: acquiring the image comprising the brightness and/or color information by detecting visible light, in particular red light, green light and blue light. Additionally or alternatively step b) comprises: acquiring the map comprising the depth information by detecting infrared light, in particular near-infrared light. Such light is able to interact with an, in particular the, object in the region, in particular to be reflected by the object. Additionally or alternatively the visible light and the infrared light do not disturb each other's detecting. In particular the term “sensing” or “receiving” may be used synonymously for the term “detecting”. Additionally or alternatively the, in particular respective, detecting may be performed automatically and/or may be a detecting of a, in particular respective, intensity of the light. Further additionally or alternatively, the visible light may have a wavelength in the range of 400 to 700 nm (nanometers). Further additionally or alternatively, the infrared light may have a wavelength longer than that of the visible light and/or in the range of 700 nm to 1 mm (millimeter), in particular up to 1.4 μm (micrometers).
In particular step c), in particular the method, may not or does not have to be performed by photogrammetry.
According to an embodiment of the invention, step b) comprises: acquiring the map comprising the depth information by a time-of-flight measurement. This enables a very quick and/or simple acquiring and/or an easy, in particular low computational load, processing of the acquired map.
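The physics behind a time-of-flight measurement is standard: the light travels to the surface and back, so a measured round-trip time (or, for continuous-wave devices, a phase shift of the modulated light) yields the distance. A minimal sketch:

```python
from math import pi

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_round_trip(delta_t_s: float) -> float:
    """Pulsed time-of-flight: light travels to the surface and back, so
    the distance is half of the round-trip path."""
    return SPEED_OF_LIGHT * delta_t_s / 2.0

def distance_from_phase(phase_shift_rad: float, modulation_hz: float) -> float:
    """Continuous-wave time-of-flight: the phase shift of the modulated
    light encodes the distance, unambiguous up to c / (2 * f)."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * pi * modulation_hz)
```

For example, a round-trip time of 1 ns corresponds to a distance of roughly 15 cm.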
In particular the image and the map may be acquired by separate devices, in particular cameras.
According to an embodiment of the invention, the image and the map are acquired by a, in particular same or identical, respectively, 3D camera, in particular a range camera, in particular a time-of-flight camera. This enables acquiring the image and the map from a, in particular the, same or identical viewpoint and/or, in particular thus, an easy, in particular low computational load, processing of the acquired image and the acquired map. In particular the 3D camera may be electric, in particular electronic. Additionally or alternatively the image and/or the map may not or does not have to be acquired by a stereo camera. Further additionally or alternatively the depth information may relate to or represent a, in particular perpendicular, distance between at least one surface of at least one, in particular scene, object and a plane of the 3D camera.
According to an embodiment of the invention, step c) comprises: extracting the brightness and/or color information from the image and/or the depth information from the map in the form of features. Additionally or alternatively step c) comprises: classifying the, in particular extracted, brightness and/or color information and/or the, in particular extracted, depth information and fusing the classified brightness and/or color information and/or the classified depth information. This enables generating small information spaces or amounts, respectively, and/or, in particular thus, a low computational load fusing. In particular the extracting and/or the classifying may be performed automatically. Additionally or alternatively the term “extrapolating” or “detecting” may be used synonymously for the term “extracting”.
According to an embodiment of the invention, step c) is, in particular the extracting, the classifying and/or the fusing are/is, performed by an artificial neural network, rule-based decisions and/or machine learning. This enables a very good performance.
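To make the classify-then-fuse idea concrete, here is a hypothetical rule-based variant (one of the options named above); all thresholds, class names and the mapping are invented for illustration and could equally be learned by a machine-learning model or an artificial neural network.

```python
def classify_depth(center_height_mm: float, rim_height_mm: float) -> str:
    """Hypothetical classification of an extracted depth feature: the
    height of the region center relative to the carrier rim."""
    if center_height_mm > rim_height_mm:
        return "above_rim"        # e.g. a capped tube standing proud of the rim
    if center_height_mm < rim_height_mm - 5.0:
        return "well_below_rim"   # e.g. an empty carrier bore or an empty tube
    return "near_rim"

def classify_color(center_rgb: tuple) -> str:
    """Hypothetical classification of an extracted color feature."""
    r, g, b = center_rgb
    if b > r and b > g:
        return "blue"             # e.g. a blue cap
    if r > g and r > b:
        return "red"              # e.g. a blood sample
    return "neutral"

def fuse(depth_class: str, color_class: str) -> str:
    """Rule-based fusion of the two classifications; an artificial neural
    network or another machine-learning model could replace these rules."""
    if depth_class == "above_rim" and color_class == "blue":
        return "container with cap"
    if depth_class == "near_rim" and color_class == "red":
        return "container with sample"
    if depth_class == "well_below_rim":
        return "empty carrier or empty container"
    return "undetermined"
```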
The invention further relates to a laboratory system for determining, in particular the, at least one container information about a, in particular the, laboratory sample container, in particular for performing the method according to the invention, wherein the system comprises an acquiring device and a determining device. The acquiring device is configured for acquiring an, in particular the, image comprising a, in particular the, brightness and/or color information of a, in particular the, possible region of the container and a, in particular the, map comprising a, in particular the, depth information of the region. The determining device is configured for determining the container information by fusing the brightness and/or color information and the depth information.
The image and the map are acquired in top view. The brightness and/or color information and/or the depth information are/is about at least one boundary between two spatial regions occupied by different matter. The container information is about an absence or a presence of the container in the region, a position of the container, a presence or an absence of a cap on the container, an absence or a presence of a laboratory sample in the container, and/or a level of the sample in the container.
The laboratory system comprises a handling device. The handling device is configured for transporting, gripping, decapping, capping, filling and/or defilling the container based on the determined container information. In particular the handling device may be electric. Additionally or alternatively the handling device may comprise or be a transporter or mover, respectively, in particular the carrier, a gripper, a decapper, a capper, a filler and/or a defiller. In particular the mover may be configured for moving the container with respect or in relation, respectively, to the acquiring device. Additionally or alternatively the mover may comprise or be a conveyor belt or band and/or a laboratory sample distribution system as described in EP 2 995 958 A1.
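One way to picture the cooperation of the acquiring device, the determining device and the handling device is as three exchangeable roles; the following sketch is purely illustrative and uses hypothetical names throughout.

```python
from typing import Callable, Tuple
import numpy as np

class LaboratorySystem:
    """Hypothetical sketch of the three device roles; all names are
    illustrative and not taken from the application."""

    def __init__(self,
                 acquire: Callable[[], Tuple[np.ndarray, np.ndarray]],
                 determine: Callable[[np.ndarray, np.ndarray], str],
                 handle: Callable[[str], None]):
        self.acquire = acquire      # acquiring device: image and depth map
        self.determine = determine  # determining device: fusing both
        self.handle = handle        # handling device: transport, grip, (de)cap, (de)fill

    def run_once(self) -> None:
        image, depth_map = self.acquire()
        container_info = self.determine(image, depth_map)
        self.handle(container_info)
```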
By this system, the same advantages as discussed above with regard to the method may be achieved.
In particular the system, the acquiring device and/or the determining device may be electric, in particular electronic.
The acquiring device may comprise or be a detector or sensor, respectively, in particular at least two detectors.
The determining device may comprise or be a processor and/or a memory.
The term “adapted” or “designed” may be used synonymously for the term “configured”.
The carrier may be configured for carrying the container aligned with respect or in relation, respectively, to the acquiring device.
In the following, embodiments of the invention will be described in detail with reference to the drawings. Throughout the drawings, the same elements will be denoted by the same reference numerals.
The system 10 comprises an acquiring device 11 and a determining device 12. The acquiring device 11 is configured for acquiring an image ibc comprising a brightness and/or color information bci of a possible region 2 of the container 1 and a map md comprising a depth information di of the region 2, in particular acquires. The determining device 12 is configured for determining the container information coi by fusing the brightness and/or color information bci and the depth information di, in particular determines.
The method comprises the steps:
a) acquiring the image ibc comprising the brightness and/or color information bci of the possible region 2 of the container 1;
b) acquiring the map md comprising the depth information di of the region 2; and
c) determining the container information coi by fusing the brightness and/or color information bci and the depth information di.
In detail step a) comprises: acquiring the image ibc comprising the brightness and/or color information bci by detecting visible light vL, in particular red light rL, green light gL and blue light bL. Additionally or alternatively step b) comprises: acquiring the map md comprising the depth information di by detecting infrared light iL, in particular near-infrared light niL.
Furthermore, step b) comprises: acquiring the map md comprising the depth information di by a time-of-flight measurement TOF.
Moreover, the image ibc and the map md are acquired by a 3D camera 3, in particular a range camera 3′, in particular a time-of-flight camera 3″, in particular of the system 10, in particular the acquiring device 11.
In particular the system 10, in particular the acquiring device 11, comprises an RGB camera (red, green and blue). In the shown embodiment the 3D camera and the RGB camera are integrated into a single camera. In alternative embodiments the 3D camera and the RGB camera may be two separate cameras arranged at one position or at different positions.
Further, the image ibc and the map md are acquired in top view.
Furthermore, step c) comprises: extracting the brightness and/or color information bci from the image ibc and/or the depth information di from the map md in the form of features feat, in particular by the determining device 12. Additionally or alternatively step c) comprises: classifying the, in particular extracted, brightness and/or color information bci and/or the, in particular extracted, depth information di and fusing the classified brightness and/or color information bci and/or the classified depth information di, in particular by the determining device 12.
Moreover, step c) is, in particular the extracting, the classifying and/or the fusing are/is, performed by an artificial neural network AI, rule-based decisions and/or machine learning, in particular by the determining device 12.
Further, the brightness and/or color information bci and/or the depth information di are/is about at least one boundary bou, in particular a brightness, color and/or depth boundary bcdbou, in particular forming the features feat, between two spatial regions 4a, 4b occupied by different matter 4ma, 4mb.
In the shown embodiment the acquiring device 11 and/or the 3D camera 3 is above the region 2. The region 2 extends in an x-y-plane. A height is given along a z-axis, in particular perpendicular to the x-y-plane.
In the figures for a first case, the image ibc shows an inner circle and a surrounding ring, and the map md shows a lower inner part and a higher surrounding elevated part.
Thus, from the image ibc and from the map md, in particular each, a boundary bou between the inner circle or the lower inner part, respectively, and the surrounding ring or the higher surrounding elevated part, respectively, and another boundary bou between the surrounding ring and the background are extracted.
Thus, the container information coi is about the presence of the carrier 13′ and the absence of the container 1 in the region 2.
In the figures for a second case, the image ibc shows an inner circle, a surrounding ring and another surrounding ring, and the map md shows a lower inner part, a higher surrounding elevated part and a lower surrounding elevated part.
Thus, from the image ibc and from the map md, in particular each, a boundary bou between the inner circle or the lower inner part, respectively, and the surrounding ring or the higher surrounding elevated part, respectively, another boundary bou between the surrounding ring and the another surrounding ring or the lower surrounding elevated part, respectively, and another boundary bou between the another surrounding ring and the background are extracted.
Thus, the container information coi is about the presence of the carrier 13′, the presence of the container 1 in the region 2, the position POS of the container 1, the absence of a cap 5 on the container 1 and the absence of a laboratory sample 6 in the container 1.
In the figures for a third case, the image ibc shows an inner circle, a surrounding ring and another surrounding ring, and the map md shows a less lower inner part, a higher surrounding elevated part and a lower surrounding elevated part.
Thus, from the image ibc and from the map md, in particular each, a boundary bou between the inner circle or the less lower inner part, respectively, and the surrounding ring or the higher surrounding elevated part, respectively, another boundary bou between the surrounding ring and the another surrounding ring or the lower surrounding elevated part, respectively, and another boundary bou between the another surrounding ring and the background are extracted.
Thus, the container information coi is about the presence of the carrier 13′, the presence of the container 1 in the region 2, the position POS of the container 1, the absence of the cap 5 on the container 1, the presence of the sample 6 in the container 1 and the level LE of the sample 6 in the container 1.
In the figures for a fourth case, the image ibc shows an inner circle and a surrounding ring, and the map md shows a higher inner part and a lower surrounding elevated part.
Thus, from the image ibc and from the map md, in particular each, a boundary bou between the inner circle or the higher inner part, respectively, and the surrounding ring or the lower surrounding elevated part, respectively, and another boundary bou between the surrounding ring and the background are extracted.
Thus, the container information coi is about the presence of the carrier 13′, the presence of the container 1 in the region 2, the position POS of the container 1 and the presence of the cap 5 on the container 1.
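The four cases just walked through can be condensed into two extracted features: the number of surrounding rings and the relative height of the inner part. The following rules are a hypothetical summary of the figures, with invented threshold values.

```python
def classify_case(center_height_mm: float, ring_count: int) -> str:
    """Hypothetical condensation of the four cases shown in the figures.
    center_height_mm: height of the inner part relative to the first
    surrounding ring (negative = below it); ring_count: number of
    surrounding rings found around it. All thresholds are invented."""
    if ring_count == 1:
        if center_height_mm > 0.0:
            return "carrier + container + cap"      # higher inner part = cap
        return "carrier present, container absent"  # empty carrier bore
    if ring_count == 2:
        if center_height_mm > -40.0:                # a "less lower" inner part
            return "carrier + open container + sample (level from the height)"
        return "carrier + open container, empty"
    return "undetermined"
```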
Furthermore, the laboratory system 10 comprises the handling device 13. The handling device 13 is configured for handling the container 1 based on the determined container information coi, in particular handles.
Moreover, the method comprises the step: d) handling the container 1 based on the determined container information coi, in particular transporting or moving, respectively, gripping, decapping, capping, filling and/or defilling the container 1, in particular by the handling device 13.
In the shown embodiment the handling device 13 comprises the carrier 13′. Such a handling device 13 is used in order to transport the container 1 on a base surface or a transport plane, respectively. In particular the carrier 13′ may be only a part, in particular of the handling device, that is movable by external means. For example the carrier 13′ may comprise a magnet, and electromagnetic actuators may be positioned below the transport plane in order to drive the carrier 13′. Thus, the combination of the electromagnetic actuators and the carrier 13′ can be regarded as the handling device 13.
This enables the system to know whether the tube is overfilled or not, in particular to avoid spilling. Additionally or alternatively this enables ensuring that the cap really has been removed, in particular for successful aliquoting. Further additionally or alternatively this enables ensuring that the cap is present, in particular for storage and/or centrifugation. Further additionally or alternatively this enables ensuring a precise enough placement of the tube by the carrier, in particular to allow a gripper to take the tube.
As the shown and above discussed embodiments reveal, the invention provides a method and a laboratory system, in particular each, for determining at least one container information about a laboratory sample container, in particular having improved properties.
| Number | Date | Country | Kind |
|---|---|---|---|
| 22172151.7 | May 2022 | EP | regional |
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/EP2023/061331 | Apr 2023 | WO |
| Child | 18935834 | | US |