IMAGE PROCESSOR, COMPUTER-IMPLEMENTED IMAGE PROCESSING METHOD, COMPUTER PROGRAM AND NON-VOLATILE DATA CARRIER

Information

  • Patent Application
  • Publication Number: 20230384456
  • Date Filed: October 12, 2021
  • Date Published: November 30, 2023
Abstract
An image processor that obtains image data registered by a time-of-flight imaging system and representing a scene illuminated by two or more light sources calibrated to enable the image processor to include distance data in the image data, where the image processor determines if a shadow effect exists by which a first object in the scene obstructs light of at least one light source from reaching a part of a second object in the scene, and adjusts the distance data to compensate for the at least one light source for which light did not reach the part of the second object.
Description
TECHNICAL FIELD

The present invention relates generally to three-dimensional (3D) image processing. In particular, the invention relates to an image processor for processing time-of-flight image data and a corresponding computer-implemented image processing method. The invention also relates to a corresponding computer program and a non-volatile data carrier storing such a computer program.


BACKGROUND

Modern cameras have become fairly advanced data collection sensors. Today, for example, there are medical screening arrangements containing very accurate high-speed thermal cameras capable of registering the body skin temperature of large numbers of people entering a building or a public transportation vehicle. Additionally, various forms of depth sensing cameras are used to control vehicles and robots with remarkable precision. For example, modern milking robots often rely on time-of-flight (TOF) cameras for their operation. A TOF camera is a range imaging system that employs time-of-flight techniques to resolve the distance between the camera and the imaged objects for each point of the image. The TOF camera measures the round-trip time of an artificial light signal provided by a laser or a light emitting diode (LED). Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR (light detection and ranging), in which an entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam, as in scanning LIDAR systems. TOF cameras may be used to cover ranges of a few centimeters up to several kilometers. Naturally, in automatic milking applications, the TOF camera operates at the shorter end of this range, say at distances up to a few decimeters.


U.S. Pat. No. 10,750,712 shows an implement for automatically milking a dairy animal, such as a cow. The implement comprises a milking parlour, a sensor for observing a teat, and a milking robot for automatically attaching a teat cup to the teat. The milking robot comprises a robot control that is connected to the sensor. The sensor comprises a radiation source for emitting light, a receiver for receiving electromagnetic radiation reflected from the dairy animal, a lens, and a sensor control unit. The sensor comprises a matrix with a plurality of rows and a plurality of columns of receivers. The sensor control unit is designed to determine, for each of the receivers, a phase difference between the emitted and the reflected electromagnetic radiation in order to calculate the distance from the sensor to a plurality of points on the part of the dairy animal to be observed.


U.S. Pat. No. 10,430,956 describes an image processing method for reducing distortion of a depth image. The method involves: obtaining a plurality of original images based on light beams which are emitted to and reflected from a subject; determining original depth values of original depth images obtained from the plurality of original images, based on phase delays of the light beams, the reflected light beams comprising multi-reflective light beams that distort the original depth values; determining imaginary intensities of the multi-reflective light beams with respect to each phase of the multi-reflective light beams, based on regions having intensities greater than a predetermined intensity in the original depth images; and correcting the original depth values of the original depth images, based on the imaginary intensities of the multi-reflective light beams. The method also involves generating corrected depth images based on the corrected original depth values.


Thus, it is known to use a TOF camera in a milking implement. There is also a solution for reducing the distortion of a TOF camera's depth image based on corrected depth images.


However, interfering objects located between the TOF camera's light sources and an object of interest may still cause accuracy problems in terms of the distance measured to the object of interest, which for example may be represented by a teat to which a robot shall connect a teat cup of a milking machine.


SUMMARY

The object of the present invention is therefore to offer a solution that mitigates the above problem and produces high-accuracy distance data even in situations where one or more objects in an imaged scene partially prevent the light of the TOF camera's light sources from reaching various objects in the scene.


According to one aspect of the invention, the object is achieved by an image processor configured to obtain image data registered by a TOF imaging system. The image data represents a scene illuminated by at least two light sources that are calibrated to enable the image processor to produce distance data to be comprised in the image data. The distance data expresses respective distances from the TOF imaging system to points on imaged objects. The image processor is configured to determine if a shadow effect exists by which at least one first object in the scene obstructs light from at least one light source of the at least two light sources from reaching at least one part of at least one second object in the scene and being reflected therefrom into the TOF imaging system. If it is determined that the shadow effect exists, the image processor is configured to adjust the distance data to compensate for the at least one light source whose light did not reach the at least one part of the at least one second object.


The above image processor is advantageous because it produces accurate distance data even for a scene where the objects are located relative to the TOF imaging system and one another in such a way that some sectors of the scene are only illuminated by a subset of the TOF imaging system's light sources.


According to one embodiment of this aspect of the invention, the image processor is configured to adjust the distance data by modifying a piece of distance data expressing a distance to a point on a surface on the at least one part of the at least one second object by an adaptation amount. The piece of distance data is determined without consideration of the shadow effect, and the adaptation amount depends on which light source or light sources of the at least two light sources emitted light that did not reach said point. Here, the at least two light sources amount to a first total number n, and the light source or light sources whose light did not reach said point amount to a second total number of at least one and n−1 or less. Consequently, it is straightforward to compensate for the light sources whose light does not reach the at least one part of the at least one second object.


The above adaptation may be implemented by the image processor being communicatively connected to a lookup table containing a data set that expresses the adaptation amount for each possible combination of light sources of the at least two light sources whose light did not reach said point. Said combination is applicable to a particular distance expressed by the distance data. Thus, for each such distance, the lookup table contains a data set that expresses the adaptation amount to be used depending on the specific light sources being shaded. Thereby, the image processor may obtain the relevant adaptation amount from the lookup table in a rapid and reliable manner.


According to another embodiment of this aspect of the invention, the at least one first object has a known position and spatial extension relative to the TOF imaging system and the at least two light sources. The image processor is further configured to adjust the distance data to compensate for the at least one of the at least two light sources whose light is obstructed from reaching at least one sector behind the at least one first object. Namely, in this scenario, it is known in advance which sectors of the TOF imaging system's field of view will always be partially shadowed by the at least one first object.


For example, the scene may contain a milking location, and at least one of the at least one first object may be a teat cup that is arranged on a carrying structure being mechanically linked to the TOF imaging system. In such a case, the image processor is preferably configured to adjust the distance data to compensate for the light sources whose light will be obstructed from reaching at least one sector behind the teat cup where a teat to which the teat cup is to be connected may be located.


According to yet another embodiment of this aspect of the invention, the image processor is configured to determine a distance to a potentially shadowing object in the scene, which potentially shadowing object is located at a shorter distance from the TOF imaging system than any other object in the scene. That is, the image processor determines the nearest object. The image processor is further configured to apply a reverse ray-tracing algorithm to establish at least one sector in the scene that is estimated to be shadowed by the nearest object with respect to light from at least one of the light sources. The image processor is then configured to include the potentially shadowing object in a group of candidates from which to select the at least one first object when determining if the shadow effect exists. Thus, the image processor establishes whether shadow effects exist in a structured manner.


Preferably, the image processor is specifically configured to execute a first processing step wherein a spatial position of a further potentially shadowing object in the scene is determined, which further potentially shadowing object is located at a shorter distance from the TOF imaging system than any other object in the scene that has not yet been included in the group of candidates; execute a second processing step wherein said reverse ray-tracing algorithm is applied to establish at least one sector in the scene which is estimated to be shadowed by the further potentially shadowing object with respect to light from at least one light source of the at least two light sources; and execute a third processing step wherein the further potentially shadowing object is included in the group of candidates from which to select the at least one first object when determining if the shadow effect exists.


The image processor is configured to repeat the first, second and third processing steps until a stop criterion has been fulfilled. Thereby, the shadow effects in the scene can be established to a desired degree of precision.


The stop criterion, in turn, may be set in response to a time constraint and/or a processing capacity constraint.


According to a further embodiment of this aspect of the invention, the image processor is configured to determine a first piece of the distance data expressing a first distance to a first surface area of the second object, which first surface area is located in a first sector illuminated by all of the at least two light sources. The image processor is likewise configured to determine a second piece of the distance data expressing a second distance to a second surface area of the at least one part of the at least one second object, which second surface area is located in a second sector illuminated by a subset of the at least two light sources. The determining of the second piece of the distance data here involves extrapolating the first surface area into the second sector.


Preferably, the extrapolating assumes that the at least one second object has a generally known shape. Namely, this facilitates the extrapolating process and enhances the data quality. Of course, this is especially true if the at least one second object indeed has a shape that is relatively close to the assumed generally known shape.


In one embodiment, the second number, i.e. the number of light sources in said subset, is zero. This means that no light source illuminates the second sector, and that consequently the TOF imaging system is “blind” here with respect to distance measuring. Nevertheless, the extrapolation according to the above is still possible provided that the at least one second object is assumed to have a generally known shape.


According to yet another embodiment of this aspect of the invention, the scene contains a milking location, and the at least one second object is a teat of a milking animal. In such a case, a milking robot is readily controllable, for example to attach teat cups to the animal, based on the adjusted distance data produced by the image processor.


According to still another embodiment of this aspect of the invention, the image data contains data that for each pixel in a set of pixels expresses a light intensity value, for instance representing a greyscale value. The image processor is further configured to adjust the intensity value of a pixel in said set by an adaptation intensity, which pixel represents a point on a surface of the at least one part of the at least one second object that is illuminated by light from less than all of the at least two light sources.


The intensity value is calculated without consideration of the shadow effect, and the adaptation intensity is proportional to a number of the at least two light sources whose light is obstructed from reaching said point. Thus, the more light sources that are obstructed, the larger the adaptation intensity becomes. Thereby, partially shadowed objects in the scene can be made somewhat brighter than what they otherwise would have been. This for example facilitates distinguishing the teats furthest from the TOF imaging system.


According to another aspect of the invention, the object is achieved by a computer-implemented image processing method involving obtaining image data registered by a TOF imaging system, which image data represents a scene illuminated by at least two light sources that are calibrated to enable an image processor to produce distance data to be comprised in the image data. The distance data expresses respective distances from the TOF imaging system to points on imaged objects. The method also involves determining if a shadow effect exists by which at least one first object in the scene obstructs light from at least one light source of the at least two light sources from reaching at least one part of at least one second object in the scene and being reflected therefrom into the TOF imaging system. If it is determined that the shadow effect exists, the method involves adjusting the distance data to compensate for the at least one light source whose light did not reach the at least one part of the at least one second object. The advantages of this method, as well as of the preferred embodiments thereof, are apparent from the discussion above with reference to the image processor.


According to a further aspect of the invention, the object is achieved by a computer program loadable into a non-volatile data carrier communicatively connected to a processing unit. The computer program includes software for executing the above method when the program is run on the processing unit.


According to another aspect of the invention, the object is achieved by a non-volatile data carrier containing the above computer program.


Further advantages, beneficial features and applications of the present invention will be apparent from the following description and the dependent claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is now to be explained more closely by means of preferred embodiments, which are disclosed as examples, and with reference to the attached drawings.



FIGS. 1, 2 schematically illustrate TOF imaging systems with associated light sources according to first and second embodiments of the invention;



FIG. 3 shows an example of a use case for a TOF imaging system according to one embodiment of the invention in a side view;



FIG. 4 shows a top-view of the use case exemplified in FIG. 3;



FIGS. 5a, b illustrate a scene as viewed from a TOF imaging system according to one embodiment when the TOF imaging system is located at two different distances from the scene;



FIGS. 6-7 show side views of a scene devoid of and containing first and second objects respectively according to one embodiment of the invention;



FIGS. 8-9 illustrate how a piece of distance data may be adjusted according to one embodiment of the invention, which distance data expresses a distance to a point on a surface being illuminated by less than all light sources of the TOF imaging system;



FIG. 10 illustrates in detail how a first object in a scene may obstruct light from the light sources of the TOF imaging system from reaching various parts of a second object in the scene according to one embodiment of the invention;



FIGS. 11-12 illustrate in detail how sectors are determined according to one embodiment of the invention, which sectors are illuminated only by a subset of the light sources of the TOF imaging system due to a shadow effect caused by a first object with a known position and spatial extension relative to the TOF imaging system;



FIG. 13 shows a block diagram of a TOF imaging system and an image processor according to one embodiment of the invention; and



FIG. 14 illustrates, by means of a flow diagram, the general method according to a preferred embodiment of the invention.





DETAILED DESCRIPTION

In FIG. 1, we see a schematic illustration of a TOF imaging system 110 according to a first embodiment of the invention. A set of light sources 121, 122, 123 and 124 is arranged around the TOF imaging system 110. It is generally advantageous if the light sources 121, 122, 123 and 124 emit non-visible light in the infrared (IR) spectrum. However, it is not excluded that the light sources 121, 122, 123 and 124 emit light in other parts of the electromagnetic spectrum, for example in the form of visible light.


Here, where the set of light sources contains four light sources, the light sources are preferably arranged in a plane parallel to a sensor matrix of the TOF imaging system 110 and at 90 degrees angular offset from one another. For example, the light sources 121, 122, 123 and 124 may be attached to a frame 120 around a front lens 111 of the TOF imaging system 110. In any case, the light sources 121, 122, 123 and 124 are controlled by at least one control signal C in synchronization with how image data is registered by the sensor matrix of the TOF imaging system 110. For example, the at least one control signal C may control the light sources 121, 122, 123 and 124 so that there is a particular desired phase delay between each of the light beams emitted from the respective light sources. Namely, such calibration between the image data registration and the illumination of a scene renders it relatively straightforward to produce distance data expressing respective distances from the TOF imaging system 110 to different points on imaged objects in the scene. However, if, for example, a common control signal C is arranged to control all the light sources 121, 122, 123 and 124 in a serial manner, and the signal lines interconnecting the light sources 121, 122, 123 and 124 for forwarding the common control signal C cause an unknown delay of the respective points in time at which the light beams from the different light sources 121, 122, 123 and 124 are emitted, then the light sources 121, 122, 123 and 124 need to be individually calibrated to the sensor matrix of the TOF imaging system 110 to enable production of accurate distance data.
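
The disclosure does not prescribe a particular modulation or demodulation scheme for this calibration. Purely as a non-limiting illustration, the following Python sketch shows the standard continuous-wave TOF relation between the phase offset of the reflected light and distance; the function name, the four-sample demodulation and the 20 MHz modulation frequency in the example are assumptions introduced here, not taken from the disclosure.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def cw_tof_distance(q0, q90, q180, q270, mod_freq_hz):
    """Distance to one pixel's scene point from four phase-stepped samples.

    q0..q270 are correlation samples taken at 0, 90, 180 and 270 degrees
    relative to the (calibrated) emitted light signal; the phase offset of
    the reflected light maps linearly to distance.
    """
    phase = math.atan2(q90 - q270, q0 - q180)   # radians, in [-pi, pi]
    if phase < 0.0:
        phase += 2.0 * math.pi                  # unwrap to [0, 2*pi)
    # The light travels to the object and back, hence the extra factor 2.
    return (C * phase) / (4.0 * math.pi * mod_freq_hz)

# A reflected signal lagging the emission by 45 degrees at 20 MHz modulation
# corresponds to a distance of roughly 0.94 m.
print(round(cw_tof_distance(1.0, 1.0, 0.0, 0.0, 20e6), 2))
```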



FIG. 2 schematically illustrates a TOF imaging system 110 according to a second embodiment of the invention, where n light sources 121, 122, 123, . . . , 12n are evenly distributed in a circular arrangement around the TOF imaging system 110. Here, if the light sources 121, 122, 123, . . . , 12n are controlled by a common control signal C as outlined above, and the interconnecting signal lines introduce an equal delay tD of the common control signal C between each light source, the n:th light source 12n would start to emit its light beam at a point in time n·tD after the first light source 121 started to emit its light beam. In practice, however, this is rarely the case. Instead, the emission of light from the different light sources typically starts at points in time which are at least uncertain to some degree. As will be discussed below, this, in turn, may degrade the accuracy of the distance data being produced by the TOF imaging system 110.



FIGS. 3 and 4 show side and top views respectively of a TOF imaging system according to the invention, which is used for controlling a milking robot.


An image processor 140 is here configured to obtain image data Dimg that is registered by a TOF imaging system 110. The image data Dimg represents a scene 100, which in this case contains an udder to whose teats a carrying structure RA of the milking robot shall attach teat cups TC based on the image data Dimg. The scene 100 is illuminated by at least two light sources, for example as described above with reference to FIG. 1. Specifically, in the embodiment of FIGS. 3 and 4, there are four light sources 121, 122, 123 and 124 respectively, which are calibrated to enable the image processor 140 to produce distance data to be comprised in the image data Dimg. The distance data expresses respective distances from the TOF imaging system 110 to points on imaged objects. For example, in the image data Dimg, a piece of distance data is associated with each pixel, which piece of distance data expresses a distance from the TOF imaging system 110 to a point on an object in the scene 100 that is represented by that particular pixel.
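
As a non-limiting illustration of this per-pixel organisation, the following sketch pairs an intensity value with a piece of distance data for every pixel; the container and its attribute names are assumptions introduced here, not part of the disclosure.

```python
import numpy as np

class TofFrame:
    """Minimal stand-in for the image data Dimg: one light intensity value and
    one piece of distance data per pixel. Names are illustrative only."""

    def __init__(self, height, width):
        self.intensity = np.zeros((height, width), dtype=np.float32)        # greyscale
        self.distance = np.full((height, width), np.nan, dtype=np.float32)  # metres

    def distance_at(self, row, col):
        """Distance from the TOF imaging system to the scene point imaged by
        pixel (row, col)."""
        return float(self.distance[row, col])
```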


In order to enhance the quality of the distance data, the image processor 140 is configured to determine if a shadow effect exists by which at least one first object in the scene 100 obstructs light from at least one of the light sources from reaching at least one part of at least one second object in the scene 100 and being reflected therefrom into the TOF imaging system 110.


In the example shown in FIGS. 3 and 4, two such shadow effects are illustrated, namely (a) where a first object in the form of a teat cup TC obstructs light from all the light sources 121-124 from reaching a shadow sector BTC1234 behind the teat cup TC; and (b) where a first object in the form of a front right teat FRT obstructs light from the light sources 123 and 124 from reaching a part of a second object in the form of a rear right teat RRT that is located in a shadow sector BFRT. Fortunately, at the illustrated position of the structure RA carrying the teat cup TC, no teat is located in a shadow sector BTC34 in which light from the light sources 123 and 124 is obstructed by the teat cup TC. Thus, light from all the light sources reaches an upper part of the front right teat FRT. Nevertheless, due to the limited angular coverage of the light sources 123 and 124, the left-side teats FLT and RLT are only illuminated by the light sources 121 and 122. Moreover, as can be seen in FIG. 3, the teat cup TC shadows the lower light sources 122 and 123, so that the light from these light sources only reaches the upper parts of the front teats FRT and FLT.


If the image processor 140 determines that the shadow effect exists with respect to at least one part of at least one second object as described above, the image processor 140 adjusts the distance data to compensate for the at least one light source, i.e. 121, 122, 123 and/or 124 whose light did not reach the at least one part of the at least one second object FLT, FRT, RLT and/or RRT. The principles for how to adjust the distance data will be discussed below with reference to FIGS. 8 and 9.


According to one embodiment of the invention, the image processor 140 is configured to determine if a shadow effect exists as follows. The image processor 140 determines a distance to a potentially shadowing object in the scene 100, such as the teat cup TC, the front right teat FRT and/or the front left teat FLT, which potentially shadowing object is located at a shorter distance from the TOF imaging system 110 than any other object in the scene 100. The image processor 140 then applies a reverse ray-tracing algorithm to establish sectors in the scene 100 estimated to be shadowed by the potentially shadowing object TC, FRT and/or FLT with respect to light from at least one light source of the at least two light sources 121, 122, 123 and/or 124. In FIG. 4, BTC1234 is a first example of such a sector behind the teat cup TC which is not illuminated by any light source at all, BTC12 is a second example of such a sector behind the teat cup TC which is illuminated only by the light sources 123 and 124, BTC34 is a third example of such a sector behind the teat cup TC which is illuminated only by the light sources 121 and 122, and BFRT is a fourth example of such a sector, located behind the front right teat FRT, which is illuminated only by the light sources 121 and 122. Thereafter, the image processor 140 includes the potentially shadowing object TC, FRT and/or FLT in a group of candidates from which to select the at least one first object when determining if the shadow effect exists. Of course, if the TOF imaging system 110 is used for controlling a milking robot, there is a substantial chance that the potentially shadowing object is a teat, namely a teat located closer to the lens of the TOF imaging system 110 than another teat.
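
The disclosure does not specify the reverse ray-tracing algorithm in detail. The sketch below illustrates one straightforward reading: each scene point is traced back towards each light source and tested against a candidate occluder approximated by a sphere; the spherical approximation and all function names are assumptions introduced here.

```python
import numpy as np

def segment_hits_sphere(point, light, centre, radius):
    """True if the straight path from a scene point back to a light source
    passes through a sphere that stands in for a candidate occluder."""
    point, light, centre = (np.asarray(v, dtype=float) for v in (point, light, centre))
    d = light - point                      # direction of the reverse ray
    f = point - centre
    a = d.dot(d)
    b = 2.0 * f.dot(d)
    c = f.dot(f) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False                       # the reverse ray misses the sphere
    root = disc ** 0.5
    t1 = (-b - root) / (2.0 * a)
    t2 = (-b + root) / (2.0 * a)
    # Only an intersection between the point (t=0) and the light source (t=1)
    # actually shadows the point.
    return (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0)

def shadowed_sources(point, lights, occluders):
    """Indices of the light sources whose light cannot reach the given point."""
    return [i for i, light in enumerate(lights)
            if any(segment_hits_sphere(point, light, c, r) for c, r in occluders)]
```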


Preferably, the image processor 140 applies a stepwise procedure to determine if the shadow effect exists. Specifically, the image processor 140 may be configured to execute first, second and third processing steps according to the below.


In the first processing step, the image processor 140 determines a spatial position of a further potentially shadowing object in the scene 100. The further potentially shadowing object is located at a shorter distance from the TOF imaging system 110 than any other object in the scene 100 that has not yet been included in the above group of candidates.


In the second processing step, the image processor 140 applies said reverse ray-tracing algorithm to establish at least one sector in the scene 100 that is estimated to be shadowed by the further potentially shadowing object with respect to light from at least one light source of the at least two light sources 121, 122, 123, and/or 124.


In the third processing step, the image processor 140 includes the further potentially shadowing object in the group of candidates from which to select the at least one first object TC, FLT and/or FRT when determining if the shadow effect exists.


For accuracy reasons, it is desirable if the image processor 140 is configured to repeat the first, second and third processing steps until a stop criterion has been fulfilled. The stop criterion may be set in response to a time constraint and/or a processing capacity constraint on the image processor 140.
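
Purely as an illustration, the three processing steps and the stop criterion may be organised as in the following sketch, where the callable standing in for the reverse ray-tracing step and the concrete stop-criterion parameters are assumptions introduced here.

```python
import time

def build_shadow_candidates(objects, lights, shadow_sectors_of,
                            time_budget_s=0.01, max_candidates=8):
    """Repeat the three processing steps until a stop criterion is fulfilled.

    `objects` is a list of (distance, object) pairs and `shadow_sectors_of` is
    a callable standing in for the reverse ray-tracing step; both names, and
    the two stop-criterion parameters, are illustrative."""
    remaining = sorted(objects, key=lambda pair: pair[0])   # nearest first
    candidates, sectors = [], []
    start = time.monotonic()
    while remaining:
        # Stop criterion: a time constraint and/or a processing-capacity constraint.
        if time.monotonic() - start > time_budget_s or len(candidates) >= max_candidates:
            break
        _, obj = remaining.pop(0)                        # first step: nearest unhandled object
        sectors.extend(shadow_sectors_of(obj, lights))   # second step: shadowed sectors
        candidates.append(obj)                           # third step: extend candidate group
    return candidates, sectors
```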



FIG. 5a illustrates the scene 100 as viewed from the TOF imaging system 110 according to one embodiment of the invention, where the TOF imaging system 110 is located at a first distance from the scene 100. FIG. 5b illustrates the scene 100 in FIG. 5a viewed from the TOF imaging system 110, where the TOF imaging system 110 is located at a second distance from the scene 100, which is shorter than the first distance. Thus, the image data Dimg1 registered from the first distance may contain pictorial information and distance data describing multiple teats FLT, FRT, RLT and/or RRT, whereas the image data Dimg2 registered from the second distance may contain pictorial information and distance data that exclusively describes a single teat, for example FRT, to which the milking robot is to attach a teat cup TC. According to one embodiment of the invention, the teat cup TC is arranged on the carrying structure RA, which is mechanically linked to the TOF imaging system 110. It is further preferable if the TOF imaging system 110 has a fixed position relative to the carrying structure RA. Namely, this facilitates determining the above-mentioned shadow effect in advance. The fixed positional interrelationship between the TOF imaging system 110 and the carrying structure RA renders it possible to know, prima facie, which shadow effects will be caused by the teat cup TC when placed in the carrying structure RA. However, it also causes the teat cup TC to potentially shadow a larger number of interesting objects when the TOF imaging system 110 is located at a longer distance from the scene 100 than when it is located at a shorter distance therefrom.



FIG. 6 shows a side view of a scene 100 devoid of any objects, and FIG. 7 shows a corresponding side view of a scene 100 containing first and second objects TC and FRT respectively. For reasons of clarity, we assume that only two light sources 121 and 122 are arranged around the TOF imaging system 110. Here, the light sources 121 and 122 are arranged in a linear configuration, i.e. where each light source has a 180 degree angular offset to its nearest neighbor in a plane parallel to a sensor of the TOF imaging system 110. We further assume that the TOF imaging system 110 has a field of view FV110 and that the light from the light sources 121 and 122 covers light cones LC121 and LC122 respectively. In FIG. 6, we see that neither of the light cones LC121 and LC122 illuminates a sector S0 immediately in front of the TOF imaging system 110, and that both of the light cones LC121 and LC122 illuminate a sector S12 further away from the TOF imaging system 110. An upper sector S1 is illuminated exclusively by the light source 121, and a lower sector S2 is illuminated exclusively by the light source 122. However, the key objects in the scene 100 are expected to be located in the sector S12 that is illuminated by both of the light cones LC121 and LC122.


In FIG. 7, the scene 100 contains such a key object of interest in the form of the teat FRT. The scene 100 also contains a teat cup TC, which is to be attached to the teat FRT by a milking robot (not shown). Since, typically, the milking robot controls the teat cup TC to approach the teat FRT somewhat from below, the teat cup TC shadows more of the light in the light cone LC122 of the light source 122 that is arranged below the TOF imaging system 110 than of the light in the light cone LC121 of the light source 121 that is arranged above the TOF imaging system 110. The light from both the light sources 121 and 122 is obstructed from reaching a lower sector S0 behind the teat cup TC, while the light from the light source 121 reaches an upper sector S1 behind the teat cup TC. The light from the light source 122 is, however, obstructed from reaching the upper sector S1 due to the position of the teat cup TC in relation to the angular width and orientation of the light cone LC122 of the light source 122.


It should be noted that the field of view of the TOF imaging system 110 and the respective angular widths of illumination of the light sources shown in this document are merely used for illustrative purposes. Preferably, the light sources should have wider illumination sectors than what is illustrated in the drawings, say in the order of 150 to 170 degrees. It is typically advantageous if also the field of view of the TOF imaging system 110 is somewhat wider than what is illustrated in the drawings.


Referring now to FIG. 8, we will explain how the image processor 140 adjusts the distance data dD according to one embodiment of the invention to compensate for the at least one light source whose light did not reach the at least one part of the at least one second object. FIG. 8 illustrates how a piece of the distance data may be adjusted, which piece of distance data expresses a distance to a point on a surface of the second object FRT that is illuminated by less than all light sources of the TOF imaging system 110.


Let us here assume that the surface represents a second object in the form of the teat FRT being visible to the TOF imaging system 110 in FIG. 7. Here, an upper part FRTS12 of the surface is illuminated by both the light sources 121 and 122, while a lower part FRTS1 of the surface is illuminated only by the light source 121.


The TOF imaging system 110 is designed to determine a line-of-sight distance dP1 between a sensor 810 in the TOF imaging system 110 and respective points on objects reflecting the light from the light sources 121 and 122. Based on the line-of-sight distance dP1, in turn, the TOF imaging system 110 may be further configured to determine an orthogonal distance dD to said points. However, the determinations made by the TOF imaging system 110 presuppose that each of said points has received light from all illuminating light sources associated with the TOF imaging system 110. Here, the light sources are represented by 121 and 122, and they may be calibrated to the TOF imaging system 110, for instance by applying information about phase delays between the emitted light beams. The inventor has found that if, due to a shadow effect, a surface on an imaged object is illuminated by light from less than all the light sources associated with the TOF imaging system 110, the distance data dD determined by the TOF imaging system 110 shall be adjusted by an adaptation amount. The adaptation amount, in turn, depends on which specific light beams were shadowed with respect to the surface in question.


As explained briefly above with reference to FIGS. 1 and 2, the TOF imaging system 110 is calibrated to receive reflected light from all the light sources in order to produce the distance data accurately.


Referring now to FIG. 9, we see a diagram where the horizontal axis designates pieces of distance data dD expressing respective distances to points P1, P2, . . . , P6, P7, . . . , Pn on the surface of the second object FRT, and the vertical axis indicates said points. In the illustrated example, it is assumed that the points P1, P2, . . . , P6 are located on the part of the second object FRT whose surface FRTS1 is illuminated only by the light source 121. In other words, the light source 122 is shadowed with respect to the surface FRTS1. The points P7, . . . , Pn, however, are located on the part of the second object FRT whose surface FRTS12 is illuminated by both light sources 121 and 122. Therefore, the TOF imaging system 110 determines a substantially equal and presumably correct orthogonal distance dDB to the points P7, . . . , Pn on the surface FRTS12, and a slightly incorrect orthogonal distance dD′ to the points P1, P2, . . . , P6 on the surface FRTS1, with a gradually increasing skew in a transition zone between the two surfaces FRTS12 and FRTS1. Here, the interrelationship between the light sources 121 and 122 turned out to be such that, in the absence of reflected light from the light source 122, the TOF imaging system 110 determines the orthogonal distance dD′ to the points P1, P2, . . . , P6 on the surface FRTS1 to be longer by a measure dΔ than it would have been if light from both the light sources 121 and 122 had been reflected from this surface. FIG. 9 illustrates this effect.


Analogously, if the light from the light source 121 were shadowed with respect to the surface, the TOF imaging system 110 would instead have determined the orthogonal distance to this surface to be somewhat shorter, more precisely by an amount equal to the measure −dΔ.


In fact, for a given TOF imaging system 110, and for each piece of distance data dD′ determined by the TOF imaging system 110, an appropriate adjustment amount can be determined, which adjustment amount depends on which specific light sources are shadowed. As discussed above, the adjustment amount may be either positive or negative. Moreover, the magnitude of the adjustment amount may vary with the distance data dD′ as well as with the number of light sources being shadowed. Nevertheless, for a range of pieces of distance data dD′ determined by the TOF imaging system 110, such appropriate adjustment amounts can be determined at well-defined increments, i.e. sampling points, by measuring actual physical distances to different points in the TOF imaging system's 110 field of view and comparing these measurements with corresponding pieces of distance data dD′ determined by the TOF imaging system 110 under various shadowing conditions.
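
One way to record such sampling-point measurements, shown here only as an illustrative sketch, is to average the signed difference between the true and the measured distance per combination of shadowed light sources and per distance increment; the 5 cm increment and the data layout are assumptions, not taken from the disclosure.

```python
def build_adaptation_table(samples, increment_m=0.05):
    """Average the signed error (true minus measured distance) per shadow
    combination and per distance sampling increment.

    `samples` is an iterable of (shadowed_sources, true_distance_m,
    measured_distance_m) tuples recorded during calibration, where
    shadowed_sources is the set of light-source indices that were
    deliberately blocked. Names and the binning scheme are illustrative."""
    sums, counts = {}, {}
    for shadowed, true_d, measured_d in samples:
        key = (frozenset(shadowed), round(measured_d / increment_m))   # distance bin
        sums[key] = sums.get(key, 0.0) + (true_d - measured_d)
        counts[key] = counts.get(key, 0) + 1
    return {key: total / counts[key] for key, total in sums.items()}
```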


According to one embodiment of the invention, the image processor 140 is configured to adjust the distance data dD by modifying a piece of distance data dD′ expressing a distance to a point on a shadowed surface, such as the above-mentioned at least one part of the at least one second object FLT, by an adaptation amount dΔ. The piece of distance data dD′ is determined without consideration of the shadow effect, i.e. straight out of the TOF imaging system 110. The adaptation amount dΔ depends on which light source or light sources emitted light that did not reach said point. The at least two light sources amount to a first total number n, and the light source or light sources whose light did not reach said point amount to a second total number of at least one and n−1 or less. Thus, if the system includes two light sources, no more than one light source can be shadowed with respect to said point. The sign of the adaptation amount dΔ may be either positive or negative, and its magnitude may vary, depending on the interrelationship between the shadowed and non-shadowed light sources.


For example, if the system is set to operate at a relatively short and known range of distances, such as in a milking installation, it may be advantageous to register appropriate values of the adaptation amount dΔ in advance, and store these values in a lookup table for easy access by the image processor 140. Naturally, such an approach may be advantageous irrespective of the operating distance. That is, the lookup-table design is generally beneficial for any range of operation of the TOF imaging system 110.



FIG. 13 shows a block diagram of a TOF imaging system 110 and an image processor 140 according to one embodiment of the invention, where the image processor 140 is communicatively connected to a lookup table 1350. The lookup table 1350 contains a data set expressing the adaptation amount dΔ for each possible combination of the light sources 121, 122, 123 and/or 124 whose light did not reach a point on a surface for at least one distance expressed by the distance data dD. For a milking installation, the distance data dD typically ranges from 2 dm to 2 m, and more preferably the distance data dD ranges from 2 dm to 5 dm. In any case, the image processor 140 is configured to obtain the adaptation amount dΔ from the lookup table 1350 using a base orthogonal distance dDB determined by the TOF imaging system 110.
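
An illustrative retrieval of the adaptation amount dΔ could then look as in the sketch below; indexing by the unadjusted distance is one possible choice, and whether the base distance dDB or the raw value dD′ keys the table is left open here, as are all names.

```python
def adjust_distance(raw_distance_m, shadowed_sources, table, increment_m=0.05):
    """Add the tabulated adaptation amount d_delta to a piece of distance data
    that was produced without consideration of the shadow effect.

    `table` is the structure produced by the calibration sketch above; the
    combination of shadowed light sources and the binned distance select the
    signed correction. Combinations missing from the table are left unadjusted."""
    if not shadowed_sources:
        return raw_distance_m                       # fully illuminated: no correction
    key = (frozenset(shadowed_sources), round(raw_distance_m / increment_m))
    return raw_distance_m + table.get(key, 0.0)     # d_delta may lengthen or shorten
```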


It is generally advantageous if the image processor 140 is configured to effect the above-described procedure in an automatic manner by executing a computer program 1327. Therefore, the image processor 140 may include a memory unit 1326, i.e. a non-volatile data carrier, storing the computer program 1327, which, in turn, contains software for making processing circuitry in the form of at least one processor 1325 in the image processor 140 execute the actions mentioned in this disclosure when the computer program 1327 is run on the at least one processor 1325.


According to one embodiment of the invention, the image data Dimg contains data that for each pixel in a set of pixels expresses a light intensity value. In this embodiment, the image processor 140 is further configured to adjust the intensity value of a pixel in the set of pixels by an adaptation intensity as described below.


Here, we assume that the pixel represents one of the points P1, P2, . . . , P6 on a surface FLTS1 of the at least one part of the second object that is illuminated by light from less than all of the at least two light sources 121, 122, 123 and 124. The intensity value is calculated without consideration of the shadow effect, and the adaptation intensity is proportional to the number of the light sources 121, 122, 123 and/or 124 whose light is obstructed from reaching said point P1, P2, . . . , P6. Consequently, the pictorial data quality can be enhanced and any partially shadowed objects in the scene can be made somewhat brighter and easier to distinguish visually. This, in turn, facilitates distinguishing the rearmost teats RLT and RRT, i.e. the teats that are furthest from the TOF imaging system 110.
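
As a non-limiting sketch, the adaptation intensity may be applied as below; the exact proportionality factor and the scaling by the raw intensity and the total number of light sources are assumptions introduced here.

```python
def adjust_intensity(raw_intensity, n_shadowed, n_sources, gain=1.0):
    """Brighten a pixel that images a partially shadowed surface point.

    The adaptation intensity grows in proportion to the number of obstructed
    light sources; `gain` and the normalisation by the total number of
    sources are assumed tuning choices."""
    if n_shadowed <= 0:
        return raw_intensity                          # fully illuminated pixel
    adaptation = gain * raw_intensity * n_shadowed / n_sources
    return raw_intensity + adaptation
```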



FIG. 10 illustrates further details on how a first object FLT in a scene 100 may obstruct the light from the light sources 121, 122, 123 and 124 of the TOF imaging system 110, so that this light does not reach parts of a second object RLT in the scene 100. In particular, with reference to FIG. 10, we will discuss how the image processor 140 may adjust the distance data dD determined by the TOF imaging system 110 according to embodiments of the invention.


According to one embodiment of the invention, the image processor 140 is configured to determine a first piece of the distance data dD expressing a first distance to a first surface area ARLT1234 of the second object RLT, which first surface area ARLT1234 is located in a first sector S1234 illuminated by all four light sources 121, 122, 123 and 124. The image processor 140 is further configured to determine a second piece of the distance data dD expressing a second distance to a second surface area ARLT23 of the at least one part of the at least one second object RLT, which second surface area ARLT23 is located in a second sector S23 illuminated only by a subset of the light sources 121, 122, 123 and 124, namely by the light sources 122 and 123. The image processor 140 determines the second piece of the distance data dD according to a procedure that involves extrapolating the first surface area ARLT1234 into the second sector S23. Due to the better illumination, the distance to the first surface area ARLT1234 can be determined with higher accuracy. In fact, such extrapolating can even be carried out over smaller surfaces that are not illuminated by any light sources at all.


Of course, the extrapolating can be improved if the image processor 140 can assume that the at least one second object RLT has a generally known shape. This type of extrapolating may for example be applied if the scene 100 contains a milking location, and the at least one second object RLT is a teat of a milking animal. Indeed, in a milking scenario, the specific shape of the at least one second object RLT may be known to a comparatively high degree of accuracy. Namely, provided that the milking animals are identified, the physical characteristics of each teat of each animal may have been measured and stored in a database to which the image processor 140 has access. Consequently, the image processor 140 may determine distances to the surfaces of the teats with very high accuracy, also in cases where some of these surfaces are partially shadowed with respect to one or more of the light sources 121, 122, 123 and/or 124 associated with the TOF imaging system 110.
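
The disclosure leaves the shape model open. The following sketch illustrates one simple possibility, in which a low-order polynomial fitted to the well-illuminated distance profile is evaluated in the shadowed sector; the function names and the synthetic example values are assumptions introduced here.

```python
import numpy as np

def extrapolate_distance_profile(rows_lit, dist_lit, rows_shadowed, degree=2):
    """Extend the distance profile measured on the well-illuminated surface
    area into the shadowed sector.

    A low-order polynomial in the pixel row is fitted to the reliable distances
    and evaluated at the shadowed rows; the quadratic loosely encodes a
    'generally known', smoothly curved shape such as a teat."""
    coeffs = np.polyfit(rows_lit, dist_lit, deg=degree)
    return np.polyval(coeffs, rows_shadowed)

# Distances known on rows 0-9 of a teat profile, extrapolated to shadowed rows 10-12.
lit_rows = np.arange(10)
lit_dist = 0.40 + 0.0005 * (lit_rows - 5) ** 2      # synthetic curved profile [m]
print(extrapolate_distance_profile(lit_rows, lit_dist, np.array([10, 11, 12])))
```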


In addition to objects in the scene 100 shadowing one another from the light emitted by the light sources, shadow effects may occur due to known objects that are always present in front of the TOF imaging system 110, and for example form part of a milking robot. FIGS. 11 and 12 illustrate an example of this in side and top views respectively. More precisely, if there is a first object TC with a known position and spatial extension relative to the TOF imaging system 110, sectors can be determined a priori, which sectors are illuminated only by a subset of the light sources 121, 122, 123 and 124 of the TOF imaging system 110 due to a shadow effect caused by the first object TC because of its position and spatial extension relative to the TOF imaging system 110.


According to one embodiment of the invention, the image processor 140 is configured to adjust the distance data dD determined by the TOF imaging system 110 in order to compensate for the at least one of the light sources 121, 122, 123 and/or 124 whose light is obstructed from reaching at least one sector behind the first object TC. Specifically, in the example shown in FIG. 11, the image processor 140 is configured to adjust the distance data dD in a sector S14 above and behind the first object TC by a first adaptation amount because the sector S14 is only illuminated by a first subset of the four light sources 121, 122, 123 and 124, namely the light sources 121 and 124. The image processor 140 is further configured to adjust the distance data dD in a sector S23 in front of the first object TC by a second adaptation amount because the sector S23 is only illuminated by a second subset of the four light sources 121, 122, 123 and 124, namely the light sources 122 and 123. Additionally, in a third sector S03 behind the first object TC, the image processor 140 is incapable of producing any distance data dD because the third sector S03 is not illuminated by any of the light sources 121, 122, 123 and 124. However, in the sector S1234 immediately in front of and above the first object TC, no prima facie distance adjustment is needed. Namely, here, the first object TC does not cause any shadow effect at all.


Similarly, referring to FIG. 12, the image processor 140 is configured to adjust the distance data dD in a fourth sector S12 by a fourth adaptation amount because the sector S12 is only illuminated by a fourth subset of the four light sources 121, 122, 123 and 124, namely the light sources 121 and 122; and adjust the distance data dD in a fifth sector S34 by a fifth adaptation amount because the sector S34 is only illuminated by a fifth subset of the four light sources 121, 122, 123 and 124, namely the light sources 123 and 124. The image processor 140 is not capable of producing any distance data dD in a sixth sector S06 behind the object TC either, because the sector S06 is not illuminated by any of the light sources 121, 122, 123 and 124. However, in a seventh sector S1234 immediately in front of and on both sides of the first object TC, no prima facie distance adjustment is needed, since here the first object TC does not cause any shadow effect at all.
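
Because these sectors are known a priori for the mounted teat cup, they can be tabulated once. The sketch below merely restates the sector-to-light-source assignments of FIGS. 11 and 12 as an illustrative data structure; the dictionary itself and the helper function are not part of the disclosure.

```python
# Subset of light sources that still illuminates each a priori sector around the
# mounted teat cup, following FIGS. 11-12 (reference numerals 121-124 denote the
# four light sources).
SECTOR_SOURCES = {
    "S1234": {121, 122, 123, 124},   # no shadow effect, no adjustment needed
    "S14":   {121, 124},             # above and behind the teat cup (FIG. 11)
    "S23":   {122, 123},             # in front of the teat cup (FIG. 11)
    "S12":   {121, 122},             # beside the teat cup, top view (FIG. 12)
    "S34":   {123, 124},             # beside the teat cup, top view (FIG. 12)
    "S03":   set(),                  # fully shadowed: no distance data possible
    "S06":   set(),                  # fully shadowed: no distance data possible
}

def a_priori_shadowed_sources(sector, all_sources=frozenset({121, 122, 123, 124})):
    """Light sources to compensate for in a pre-determined sector."""
    return set(all_sources) - SECTOR_SOURCES[sector]
```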


To sum up, and with reference to the flow diagram in FIG. 14, we will now describe the computer-implemented image processing method according to a preferred embodiment of the invention.


In a first step 1410, image data Dimg is obtained, which image data Dimg has been registered by a TOF imaging system 110 and which image data Dimg represents a scene 100 illuminated by at least two light sources calibrated to enable an image processor 140 to produce distance data dD to be comprised in the image data Dimg. The distance data dD expresses respective distances from the TOF imaging system 110 to points on imaged objects in the scene 100.


A step 1420 thereafter determines if a shadow effect exists by which at least one first object in the scene 100 obstructs light from at least one light source of the at least two light sources from reaching at least one part of at least one second object in the scene 100 and being reflected therefrom into the TOF imaging system 110. If it is determined that the shadow effect exists, a step 1430 follows, and otherwise the procedure loops back to step 1410.


In step 1430, the distance data dD produced by the TOF imaging system 110 is adjusted to compensate for the at least one light source whose light did not reach the at least one part of the at least one second object. The adjustment is made depending on which specific light source or light sources emitted light that did not reach the at least one part of the at least one second object. Subsequently, the procedure loops back to step 1410.
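
Purely by way of illustration, the overall flow of FIG. 14 could be organised as in the following sketch, in which the three callables are placeholders rather than routines defined by the disclosure.

```python
def process_frames(acquire_frame, detect_shadow_effect, adjust_distances):
    """Top-level loop mirroring FIG. 14: obtain image data (step 1410), test
    for a shadow effect (step 1420) and, when one exists, adjust the distance
    data (step 1430) before looping back. The three callables are placeholders
    for routines such as those sketched earlier in this description."""
    while True:
        frame = acquire_frame()                    # step 1410: obtain image data Dimg
        shadow = detect_shadow_effect(frame)       # step 1420: shadow effect present?
        if shadow is None:
            continue                               # no shadow effect: next frame
        adjust_distances(frame, shadow)            # step 1430: compensate distance data
```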


All of the process steps, as well as any sub-sequence of steps, described with reference to FIG. 14 may be controlled by means of a programmed processor. Moreover, although the embodiments of the invention described above with reference to the drawings comprise a processor and processes performed in at least one processor, the invention thus also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other form suitable for use in the implementation of the process according to the invention. The program may either be a part of an operating system, or be a separate application. The carrier may be any entity or device capable of carrying the program. For example, the carrier may comprise a storage medium, such as a Flash memory, a ROM (Read Only Memory), for example a DVD (Digital Video/Versatile Disk), a CD (Compact Disc) or a semiconductor ROM, an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), or a magnetic recording medium, for example a floppy disc or hard disc. Further, the carrier may be a transmissible carrier such as an electrical or optical signal which may be conveyed via electrical or optical cable or by radio or by other means. When the program is embodied in a signal which may be conveyed directly by a cable or other device or means, the carrier may be constituted by such cable or device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant processes.


Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.


The term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components. The term does not preclude the presence or addition of one or more additional elements, features, integers, steps or components or groups thereof. The indefinite article “a” or “an” does not exclude a plurality. In the claims, the word “or” is not to be interpreted as an exclusive or (sometimes referred to as “XOR”). On the contrary, expressions such as “A or B” cover all the cases “A and not B”, “B and not A” and “A and B”, unless otherwise indicated. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.


It is also to be noted that features from the various embodiments described herein may freely be combined, unless it is explicitly stated that such a combination would be unsuitable.


The invention is not restricted to the described embodiments in the figures, but may be varied freely within the scope of the claims.

Claims
  • 1. An image processor (140), configured to: obtain image data (Dimg) registered by a time-of-flight (TOF) imaging system (110), said image data (Dimg) representing a scene (100) illuminated by two or more light sources calibrated to enable the image processor (140) to produce distance data (dD) to be comprised in the image data (Dimg), said distance data (dD) expressing respective distances from the TOF imaging system (110) to points on objects imaged by the imaging system (110) in the scene; determine if a shadow effect exists, by which a first object of the objects in the scene (100) obstructs light from at least one light source of the two or more light sources from reaching a part of a second object of the objects in the scene (100) and from being reflected from the part of the second object to the TOF imaging system (110); and upon determination that the shadow effect exists, adjust the distance data (dD) to compensate (dΔ) for the at least one light source for which the light is obstructed and does not reach the part of the second object.
  • 2. The image processor (140) according to claim 1, wherein the distance data (dD) is adjusted by modifying a piece of distance data (dD′) expressing a distance to a point on a surface (FLTS1) on the part of the second object (FLT) by an adaptation amount (dΔ), the piece of distance data (dD′) being determined without consideration of the shadow effect, and the adaptation amount (dΔ) depending on a determination via the imaging system of which one or ones of the two or more light sources emit light that does not reach said point, where a count of the two or more light sources equals a first total number n, and a count of the one or ones of the two or more light sources for which light does not reach said point equals a second total number of at least one and is n−1 or less.
  • 3. The image processor (140) according to claim 1, wherein: the image processor (140) is communicatively connected to a lookup table (1350) comprising a data set of adaptation amounts respective of each possible combination of light sources of the two or more light sources for which light does not reach said point for at least one distance expressed by the distance data (dD), and the image processor (140) is configured to adjust the distance data (dD) by modifying a piece of distance data (dD′) based on an adaptation amount (dΔ) drawn from the adaptation amounts of the lookup table (1350), the piece of distance data (dD′) expressing a distance to a point on a surface (FLTS1) on the part of the second object (FLT), the piece of distance data (dD′) being determined without consideration of the shadow effect.
  • 4. The image processor (140) according to claim 1, wherein the first object (TC) has a known position and spatial extension relative to the TOF imaging system (110) and the two or more light sources, and the image processor (140) is configured to: adjust the distance data (dD) to compensate for the at least one of the two or more light sources for which light is obstructed from reaching the part of the second object behind the first object (TC).
  • 5. The image processor (140) according to claim 4, wherein the scene (100) comprises a milking location, and the first object is a teat cup (TC) arranged on a carrying structure (RA) mechanically linked to the TOF imaging system (110).
  • 6. The image processor (140) according to claim 4, wherein the scene (100) comprises a milking location, and the first object is a teat of a milking animal.
  • 7. The image processor (140) according to claim 1, further configured to: determine a distance (dD) to a potentially shadowing object in the scene (100), said potentially shadowing object being located at a shorter distance from the TOF imaging system (110) than any other object in the scene (100); apply a reverse ray-tracing algorithm to establish at least one sector in the scene (100) estimated to be shadowed by the potentially shadowing object with respect to light from at least one light source of the two or more light sources; and include the potentially shadowing object in a group of candidates from which to select the at least one first object when determining if the shadow effect exists.
  • 8. The image processor (140) according to claim 7, further configured to: carry out a first processing step wherein a spatial position of a further potentially shadowing object (FLT) in the scene (100) is determined, said further potentially shadowing object (FLT) located at a shorter distance from the TOF imaging system (110) than any other object in the scene (100) that has not yet been included in the group of candidates; carry out a second processing step wherein said reverse ray-tracing algorithm is applied to establish at least one sector in the scene (100) estimated to be shadowed by the further potentially shadowing object (FLT) with respect to light from at least one light source of the two or more light sources; and carry out a third processing step wherein the further potentially shadowing object (FLT) is included in the group of candidates from which to select the at least one first object when determining if the shadow effect exists.
  • 9. The image processor (140) according to claim 8, further configured to repeat the first, second and third processing steps until a stop criterion has been fulfilled.
  • 10. The image processor (140) according to claim 9, wherein the stop criterion is set in response to at least one of: a time constraint, and a processing capacity constraint.
  • 11. The image processor (140) according to claim 1, further configured to: determine a first piece of the distance data (dD) expressing a first distance to a first surface area (ARLT1234) of the second object (RLT) which first surface area is located in a first sector (S1234) illuminated by all of the two or more light sources; and determine a second piece of the distance data (dD) expressing a second distance to a second surface area (ARLT23) of the part of the second object (RLT), said second surface area located in a second sector (S23) illuminated by a subset of the two or more light sources, and the determining of the second piece of the distance data (dD) includes extrapolating the first surface area (ARLT1234) into the second sector (S23).
  • 12. The image processor (140) according to claim 11, wherein the extrapolating assumes that the second object (RLT) has a generally known shape.
  • 13. The image processor (140) according to claim 12, wherein the second number is zero.
  • 14. The image processor (140) according to claim 11, wherein the scene (100) comprises a milking location, and the second object (RLT) is a teat of a milking animal.
  • 15. The image processor (140) according to claim 1, wherein the image data (Dimg) comprises data that expresses a light intensity value for each pixel in a set of pixels, and the image processor (140) is further configured to: adjust a light intensity value of a pixel in the set of pixels by an adaptation intensity, said pixel representing a point on a surface (FLTS1) of the part of the second object being illuminated by light from less than all of the two or more light sources, the light intensity value being calculated without consideration of the shadow effect, and the adaptation intensity being proportional to a count of light sources of the two or more light sources for which light is obstructed from reaching said point, where a count of the two or more light sources amounts to a first total number n, and a count of the light sources for which light is obstructed from reaching said point equals a second total number of at least one and is n−1 or less.
  • 16. A computer-implemented image processing method, comprising: obtaining image data (Dimg) registered by a time-of-flight (TOF) imaging system (110), said image data (Dimg) representing a scene (100) illuminated by two or more light sources calibrated to enable an image processor (140) to produce distance data (dD) to be comprised in the image data (Dimg), said distance data (dD) expressing respective distances from the TOF imaging system (110) to points on objects imaged by the imaging system (110) in the scene; determining if a shadow effect exists, by which a first object of the objects in the scene (100) obstructs light from at least one light source of the two or more light sources from reaching a part of a second object of the objects in the scene (100) and being reflected from the part of the second object to the TOF imaging system (110); and determining that the shadow effect exists and subsequently adjusting the distance data (dD) to compensate (dΔ) for the at least one light source for which the light did not reach the part of the second object.
  • 17. The method according to claim 16, further comprising: adjusting the distance data (dD) by modifying a piece of distance data (dD′) expressing a distance to a point on a surface (FLTS1) on the part of the second object (FLT) by an adaptation amount (dΔ), the piece of distance data (dD′) being determined without consideration of the shadow effect, and the adaptation amount (dΔ) depending on a determination via the imaging system of which one or ones of the two or more light sources emit light that does not reach said point.
  • 18. The method according to claim 16, further comprising: obtaining an adaptation amount (dΔ) from a lookup table (1350) comprising a data set expressing an adaptation amount (dΔ) for each possible combination of light sources of the two or more light sources for which light does not reach said point for at least one distance expressed by the distance data (dD), and adjusting the distance data (dD) by modifying a piece of distance data (dD′) expressing a distance to a point on a surface (FLTS1) on the part of the second object (FLT) by the adaptation amount (dΔ), the piece of distance data (dD′) being determined without consideration of the shadow effect.
  • 19. The method according to claim 16, wherein the first object (TC) has a known position and spatial extension relative to the TOF imaging system (110) and the two or more light sources, and the method further comprises: adjusting the distance data (dD) to compensate for the at least one of the two or more light sources for which light is obstructed from reaching the part of the second object behind the first object (TC).
  • 20. The method according to claim 19, wherein the scene (100) comprises a milking location, and the first object is a teat cup (TC) arranged on a carrying structure (RA) mechanically linked to the TOF imaging system (110).
  • 21. The method according to claim 19, wherein the scene (100) comprises a milking location, and the first object is a teat of a milking animal.
  • 22. The method according to claim 16, further comprising: determining a distance (dD) to a potentially shadowing object in the scene (100), said potentially shadowing object located at a shorter distance from the TOF imaging system (110) than any other object in the scene (100); applying a reverse ray-tracing algorithm to establish at least one sector in the scene (100) estimated to be shadowed by the potentially shadowing object with respect to light from at least one light source of the two or more light sources; and including the potentially shadowing object in a group of candidates from which to select the at least one first object when determining if the shadow effect exists.
  • 23. The method according to claim 22, further comprising: executing a first processing step wherein a spatial position of a further potentially shadowing object (RLT) in the scene (100) is determined, said further potentially shadowing object (RLT) located at a shorter distance from the TOF imaging system (110) than any other object in the scene (100) that has not yet been included in the group of candidates; executing a second processing step wherein said reverse ray-tracing algorithm is applied to establish at least one sector in the scene (100) estimated to be shadowed by the further potentially shadowing object (RLT) with respect to light from at least one light source of the two or more light sources; and executing a third processing step wherein the further potentially shadowing object (RLT) is included in the group of candidates from which to select the at least one first object when determining if the shadow effect exists.
  • 24. The method according to claim 23, further comprising: executing the first, second and third processing steps repeatedly until a stop criterion has been fulfilled.
  • 25. The method according to claim 24, further comprising: setting the stop criterion in response to at least one of: a time constraint, and a processing capacity constraint.
  • 26. The method according to claim 16, comprising: determining a first piece of the distance data (dD) expressing a first distance to a first surface area (ARLT1234) of the second object (RLT) which first surface area is located in a first sector (S1234) illuminated by all of the two or more light sources; and determining a second piece of the distance data (dD) expressing a second distance to a second surface area (ARLT23) of the part of the second object (RLT), said second surface area located in a second sector (S23) illuminated by a subset of the two or more light sources, and the determining of the second piece of the distance data (dD) includes extrapolating the first surface area (ARLT1234) into the second sector (S23).
  • 27. The method according to claim 26, wherein the extrapolating assumes that the second object (RLT) has a generally known shape.
  • 28. The method according to claim 27, wherein the second number is zero.
  • 29. The method according to claim 26, wherein the scene (100) comprises a milking location, and the second object (RLT) is a teat of a milking animal.
  • 30. The method according to claim 16, wherein the image data (Dimg) comprises data that for each pixel in a set of pixels expresses a light intensity value, and the method further comprises: adjusting the light intensity value of a pixel in the set of pixels by an adaptation intensity, said pixel representing a point on a surface (FLTS1) of the part of the second object being illuminated by light from less than all of the two or more light sources, the light intensity value being calculated without consideration of the shadow effect, and the adaptation intensity being proportional to a count of the light sources of the two or more light sources for which light is obstructed from reaching said point.
  • 31. A non-volatile, non-transitory data carrier (1326) having recorded thereon a computer program (1327) comprising processor-executable code that, when executed by a processing unit of a computing device, causes the computing device to carry out the method according to claim 16.
  • 32. (canceled)
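
The following sketches are editorial illustrations only; none of the code is part of the application or of the claims above, and all identifiers, table contents and numerical values are assumptions. This first sketch, in Python, illustrates the kind of adjustment recited in claims 2-3 and 17-18: a raw distance computed without considering the shadow effect is modified by an adaptation amount drawn from a lookup table keyed by the combination of occluded light sources and a quantised distance. The bin size, the table values and the sign of the correction are placeholders, not features disclosed in the application.

DISTANCE_BIN_M = 0.1  # assumed table resolution in metres

# (frozenset of occluded light-source indices, distance bin index) ->
# adaptation amount d_delta in metres. The values are placeholders; in
# practice they would come from calibration of the TOF imaging system.
ADAPTATION_TABLE = {
    (frozenset({1}), 5): 0.012,
    (frozenset({2}), 5): 0.011,
    (frozenset({1, 2}), 5): 0.027,
}

def adjust_distance(d_raw_m, occluded_sources, n_sources):
    """Correct a distance d_raw_m that was determined without
    consideration of the shadow effect.

    occluded_sources holds the indices of the light sources whose light
    does not reach the imaged point (at least one, at most n_sources - 1).
    """
    k = len(occluded_sources)
    if not 1 <= k <= n_sources - 1:
        return d_raw_m  # nothing to compensate: fully lit or fully dark
    key = (frozenset(occluded_sources), round(d_raw_m / DISTANCE_BIN_M))
    d_delta = ADAPTATION_TABLE.get(key, 0.0)
    return d_raw_m - d_delta  # sign convention is an assumption

# Example: raw distance 0.52 m, sources 1 and 2 occluded out of four.
# adjust_distance(0.52, {1, 2}, n_sources=4) returns about 0.493.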
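Claims 7-10 and 22-25 recite applying a reverse ray-tracing algorithm to the nearest not-yet-considered object and repeating the procedure until a stop criterion, for example a time constraint, is fulfilled. The sketch below shows one way such a step could look; the spherical object approximation, the time budget and all function names are assumptions made purely for illustration.

import time

def ray_blocked(light, point, centre, radius):
    """True if the straight path from the light source to the imaged
    point passes through the sphere (centre, radius), i.e. the sphere
    shadows the point with respect to that light source."""
    lx, ly, lz = light
    px, py, pz = point
    cx, cy, cz = centre
    dx, dy, dz = px - lx, py - ly, pz - lz
    seg_len_sq = dx * dx + dy * dy + dz * dz
    if seg_len_sq == 0.0:
        return False
    # Closest approach of the light-to-point segment to the sphere centre.
    t = ((cx - lx) * dx + (cy - ly) * dy + (cz - lz) * dz) / seg_len_sq
    t = max(0.0, min(1.0, t))
    qx, qy, qz = lx + t * dx, ly + t * dy, lz + t * dz
    return (qx - cx) ** 2 + (qy - cy) ** 2 + (qz - cz) ** 2 <= radius ** 2

def occlusion_map(points, lights, objects, time_budget_s=0.005):
    """Estimate, for each imaged point, which light sources are shadowed.

    objects: iterable of (centre, radius, distance_from_camera_m).
    Candidate shadowing objects are processed nearest-first, and the loop
    stops when the time budget (the stop criterion) is exhausted."""
    occluded = {i: set() for i in range(len(points))}
    start = time.monotonic()
    for centre, radius, _ in sorted(objects, key=lambda o: o[2]):
        if time.monotonic() - start >= time_budget_s:
            break  # stop criterion: processing time constraint
        for i, p in enumerate(points):
            for j, light in enumerate(lights):
                if ray_blocked(light, p, centre, radius):
                    occluded[i].add(j)
    return occluded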
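Claims 11-14 and 26-29 recite extrapolating a surface area seen in a fully illuminated sector into a partly shadowed sector under the assumption that the second object has a generally known shape. As an illustration only, the sketch below fits a circular cross-section, a rough model of a teat, to the visible points and predicts surface points at angles lying in the shadowed sector; the use of NumPy and the circle model are assumptions, not features taken from the application.

import math
import numpy as np

def fit_circle(points):
    """Least-squares circle fit (Kasa method) to 2D cross-section points
    measured on the fully illuminated part of the surface."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (d, e, f), *_ = np.linalg.lstsq(a, b, rcond=None)
    cx, cy = d / 2.0, e / 2.0
    radius = math.sqrt(f + cx ** 2 + cy ** 2)
    return (cx, cy), radius

def extrapolate_point(centre, radius, angle_rad):
    """Predicted surface point in the shadowed sector at the given angle."""
    return (centre[0] + radius * math.cos(angle_rad),
            centre[1] + radius * math.sin(angle_rad))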
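Finally, claims 15 and 30 recite adjusting a pixel's light intensity value by an adaptation intensity proportional to the number of obstructed light sources. Assuming, purely for illustration, that all calibrated sources contribute equally to the registered intensity, a minimal sketch could look as follows.

def adjust_intensity(i_raw, n_sources, n_obstructed):
    """Compensate a light intensity value registered without
    consideration of the shadow effect.

    With equal per-source contributions (an assumption), the missing
    contribution, and hence the adaptation intensity, is proportional to
    the number of obstructed sources (1 <= n_obstructed <= n_sources - 1).
    """
    if not 1 <= n_obstructed <= n_sources - 1:
        return i_raw
    adaptation = i_raw * n_obstructed / (n_sources - n_obstructed)
    # The result equals i_raw * n_sources / (n_sources - n_obstructed).
    return i_raw + adaptation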
Priority Claims (1)
Number: 2051188-7   Date: Oct 2020   Country: SE   Kind: national

PCT Information
Filing Document: PCT/SE2021/050996   Filing Date: 10/12/2021   Country: WO