The present invention relates to a distance image capture system, and in particular, to a distance image capture system which adjusts an imaging number.
As distance measurement sensors which measure the distance to an object, TOF (time of flight) sensors, which output distance based on the time of flight of light, are known. TOF sensors irradiate a target space with reference light, which is intensity-modulated in predetermined cycles, and in many cases, a phase difference method (the so-called “indirect method”), in which a distance measurement value of the target space is output based on the phase difference between the reference light and light reflected from the target space, is adopted. This phase difference is obtained from the amount of reflected light received.
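By way of an illustrative sketch in Python (assumed names; the four-tap sampling scheme shown is one common realization of the indirect method, not necessarily the one used here), the phase difference and distance can be recovered from four samples of the reflected light integrated at phase offsets of 0°, 90°, 180°, and 270°:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def indirect_tof_distance(q0, q90, q180, q270, f_mod):
    """Recover distance from four phase-offset samples (common 4-tap
    indirect / phase-difference method).

    q0..q270: amounts of reflected light integrated at phase offsets
              of 0, 90, 180, and 270 degrees
    f_mod:    modulation frequency of the reference light [Hz]
    """
    # Phase difference between the reference light and the received light.
    phase = math.atan2(q90 - q270, q0 - q180) % (2.0 * math.pi)
    # One full phase cycle corresponds to half the modulation wavelength
    # (the light travels out and back): distance = c * phase / (4 * pi * f).
    return C * phase / (4.0 * math.pi * f_mod)

# Example with hypothetical sample values at 30 MHz modulation:
print(indirect_tof_distance(1200.0, 900.0, 400.0, 700.0, 30e6))
```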
There are variations in the distance measurement values of such distance measurement sensors, represented by TOF sensors. In the case of TOF sensors, the main cause of distance measurement variation is shot noise, and it is known that such variation follows a substantially normal distribution. Though increasing the integration time or the amount of light emitted by the TOF sensor is effective in reducing variation, this solution is limited by the specifications of the distance measurement sensor, such as restrictions on the amount of light received by the light-receiving element and restrictions on heat generation.
When detecting the position or posture of an object from a distance image, in order to maintain detection accuracy, it is desirable that the error of the distance image be equal to or less than a specified value. As another solution for reducing variation, the adoption of an averaging process in which the distances of corresponding pixels across a plurality of distance images are averaged, a time filter such as an IIR (infinite impulse response) filter, or a spatial filter such as a median filter or a Gaussian filter may be considered.
Patent Literature 1 describes calculating, for a plurality of distance images captured while changing the exposure step by step, a weighted average of the distance information of the pixels at each same pixel position, and generating a composite distance image in which the calculated weighted average serves as the distance information of each pixel, wherein the calculation of the weighted average uses a weighting coefficient calculated so as to correspond to the accuracy of the distance information according to the light-receiving level information of the pixel.
Patent Literature 2 describes extracting, based on the received light intensity associated with each pixel, the pixels representing the greater received light intensity from among a plurality of distance images acquired under different imaging conditions, and using the extracted pixels in a composite distance image of the plurality of distance images.
Patent Literature 3 describes acquiring a plurality of sets of image data having different imaging sensitivities for each predetermined unit area, executing in-plane HDR (high dynamic range) processing to generate image data with an expanded dynamic range by compositing the plurality of sets of image data, and performing control so that the direction in which more features of a target appear becomes the HDR processing direction.
The distance image imaging number used in the averaging processing, etc., described above is generally a predetermined fixed number. However, in composite processing of a fixed number of distance images, it becomes difficult to reduce distance measurement variations caused by changes of the target, whereby distance measurement accuracy becomes unstable.
On the other hand, increasing the imaging number by giving a margin to the fixed number is conceivable. However, in most cases, time will be wasted on image acquisition and image compositing. Thus, the imaging number of the distance images should be variable in accordance with the situation of the target.
Thus, there is a demand for a distance image compositing technology which can realize stable distance measurement accuracy and a reduction of wasted time, even if the target changes.
One aspect of the present disclosure provides a distance image capture system, comprising an image acquisition unit which acquires a plurality of first distance images by imaging a target multiple times from the same imaging position and the same imaging posture with respect to the target, and an image composition unit which generates a second distance image by compositing the plurality of first distance images, the system comprising an image count determination unit which estimates a distance measurement error in the second distance image and determines an imaging number of the first distance images so that the estimated distance measurement error becomes equal to or less than a predetermined target error.
According to the aspect of the present disclosure, since the imaging number is automatically adjusted, there can be provided an image compositing technology that achieves stable distance measurement accuracy and a reduction of wasted time, even if the target changes.
The embodiments of the present disclosure will be described in detail below with reference to the attached drawings. In the drawings, identical or similar constituent elements have been assigned the same or similar reference signs. Furthermore, the embodiments described below do not limit the technical scope of the invention described in the claims or the definitions of the terms. Note that the description “distance image” as used herein refers to an image in which distance measurement values from a distance measurement sensor to a target space are stored for each pixel, and the description “light intensity image” refers to an image in which light intensity values of the reflected light reflected in the target space are stored for each pixel.
The image acquisition unit 10 acquires a plurality of first distance images by imaging the target W multiple times from the same imaging position and the same imaging posture with respect to the target W. The image acquisition unit 10 preferably has a function of acquiring, in addition to the first distance images, light intensity images by capturing the target W from the same imaging position and the same imaging posture.
The host computing device 20 comprises an image composition unit 21 which generates a second distance image by compositing the plurality of first distance images acquired by the image acquisition unit 10. Though the image composition unit 21 generates the second distance image by averaging the plurality of first distance images for each corresponding pixel, it may generate the second distance image by applying, to the plurality of first distance images, a time filter such as an IIR filter, a spatial filter such as a median filter or a Gaussian filter, or filter processing combining these. Such a composite distance image reduces distance measurement variation.
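As a minimal sketch of the compositing alternatives mentioned above (assuming the first distance images are numpy arrays of equal shape; all names are illustrative):

```python
import numpy as np

def composite_by_averaging(first_images):
    """Generate the second distance image by averaging corresponding
    pixels across the stack of first distance images."""
    return np.mean(np.stack(first_images), axis=0)

def composite_by_iir(first_images, alpha=0.2):
    """Alternative: an IIR (exponential) time filter applied image by
    image; a smaller alpha averages over a longer effective history."""
    second = first_images[0].astype(float)
    for img in first_images[1:]:
        second = (1.0 - alpha) * second + alpha * img
    return second
```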
The host computing device 20 preferably further comprises an image area designation unit 24 which designates an image area of a composited target. The image area of the composited target may be, for example, a specific area of the target W (for example, a surface of the target W to be suctioned or a surface on which a predetermined operation (spot welding, sealing, fastening, etc.) is applied to the target W). The image area of the composited target may be manually designated by the user, or may be automatically designated by the host computing device 20. In the case of manual designation, for example, an input tool or the like for the user to designate the image area in the acquired distance image or light intensity image is preferably provided. By limiting the image area of the composited target, composition processing of the distance image can be accelerated.
The host computing device 20 may further comprise a target specification unit 25 which automatically specifies an image area in which at least a part of the target W is captured from the distance image or the light intensity image. As the method for specifying the target W, a known method such as matching processing such as pattern matching, blob analysis for analyzing feature amounts of the image, and clustering for classifying similar regions can be used. The specified image area is designated as the image area of the composited target by the image area designation unit 24.
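As one concrete possibility for the matching processing mentioned above (a sketch only; the template, threshold, and function names are assumptions), the target W could be located in a light intensity image by normalized cross-correlation template matching with OpenCV:

```python
import cv2
import numpy as np

def find_target_area(intensity_image: np.ndarray,
                     template: np.ndarray,
                     threshold: float = 0.8):
    """Return the best-matching image area (x, y, w, h) of the target,
    or None if the match score falls below the threshold."""
    result = cv2.matchTemplate(intensity_image, template,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template.shape[:2]
    return (max_loc[0], max_loc[1], w, h)
```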
The distance image capture system 1 can be applied to, for example, a robot system. The distance image capture system 1 further comprises a robot 40 and a robot controller 30 that controls the robot 40, and the robot controller 30 issues a second distance image request command to the host computing device 20, and can correct the motion of the robot 40 based on the second distance image (i.e., at least one of the position and posture of the target W; the same applies below) acquired from the host computing device 20.
In a robot system comprising a plurality of robots 40 and a plurality of robot controllers 30, it is preferable that the host computing device 20 be communicably connected to the robot controllers 30 in a one-to-many manner. According to such a server configuration, the host computing device 20 side is responsible for high-load image processing, and the robot controller 30 side can concentrate its resources on control processing of the robots 40.
Though the robot 40 is an articulated robot, it may be another industrial robot such as a parallel link type robot. The robot 40 preferably further comprises a tool 41 which performs operations on the target W. The tool 41 may be a hand which grips the target W, or may be another tool which performs a predetermined operation (spot welding, sealing, fastening, etc.) on the target W. Though the target W is transported by a conveyance device 50 and arrives in the operation area of the robot 40, a system configuration in which targets W are stacked in bulk on a pallet (not illustrated) or the like may be adopted. The conveyance device 50 may be a conveyor, or may be another conveyance device such as an automated guided vehicle (AGV).
The image acquisition unit 10 is installed on the tip of the robot 40, but may be installed at a fixed point different from the robot 40. The robot controller 30 comprises a motion control unit 31 which controls the motion of the robot 40 and the tool 41 in accordance with a motion program generated in advance by a teaching device (not illustrated). When the target W arrives in the operation area of the robot 40, the motion control unit 31 temporarily stops the conveyance device 50 and issues a second distance image request command to the host computing device 20. However, a second distance image request command may be issued to the host computing device 20 while the tip of the robot 40 follows the motion of the target W.
When the conveyance device 50 is temporarily stopped, the image acquisition unit 10 acquires the plurality of first distance images of the stationary target W from the same imaging position and the same imaging posture. Conversely, when the robot 40 follows the motion of the target W, the image acquisition unit 10 acquires the plurality of first distance images of the moving target W from the same imaging position and the same imaging posture. The motion control unit 31 corrects the motion of at least one of the robot 40 and the tool 41, based on the second distance image acquired from the host computing device 20.
The host computing device 20 is characterized by comprising an image count determination unit 22 which determines the imaging number of the first distance images. Upon receiving a second distance image request command, the image count determination unit 22 issues an imaging command to the image acquisition unit 10 and acquires the plurality of first distance images. The image count determination unit 22 estimates the distance measurement error in the second distance image, and determines the imaging number of the first distance images so that the estimated distance measurement error becomes equal to or less than a predetermined target error. Note that instead of the imaging number, the image count determination unit 22 may determine the number of first distance images that the image composition unit 21 acquires from the image acquisition unit 10, or alternatively, when the image composition unit 21 generates the second distance image using a time filter, it may determine the time constant of the time filter. There are two imaging number determination methods: a function method and a sequential method. These two methods will be described in order below.
According to the function method, the distance measurement error σ1 in the first distance image can be estimated by acquiring the light intensity value s1 from the light intensity image captured in a first imaging, and substituting the acquired light intensity value s1 into, for example, formula 1. Alternatively, the distance measurement error σ1 in the first distance image may be obtained without using such an approximation formula by performing linear interpolation, polynomial interpolation, etc., on a data table storing a plurality of relationships between the light intensity value s and the distance measurement variation σ, acquired experimentally in advance or at the time of factory calibration. Furthermore, since the distance measurement error σ1 in the first distance image follows a generally normal distribution, it is known from the central limit theorem of statistics that the distance measurement variation of the second distance image, obtained by averaging the distances of corresponding pixels of the first distance images captured N times, is reduced by a factor of 1/√N. When this reduced variation σ1/√N is regarded as the distance measurement error in the second distance image, the distance measurement error σ1/√N in the second distance image can be estimated. Then, the imaging number N of the first distance images for which the estimated distance measurement error σ1/√N in the second distance image is equal to or less than the predetermined target error σTG is determined. In other words, when the plurality of first distance images are averaged to generate the second distance image, the imaging number N can be determined from σ1/√N ≤ σTG, that is, N ≥ (σ1/σTG)². It should be noted that a different degree of reduction applies to the distance measurement error of the second distance image when a compositing process other than the illustrated averaging process is adopted.
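A sketch of the function method follows. The error model standing in for formula 1 is a hypothetical shot-noise-like placeholder (σ = a/√s); in practice the calibrated approximation formula or data table described above would be used:

```python
import math

def estimate_sigma1(s1: float, a: float = 50.0) -> float:
    """Placeholder for 'formula 1': estimated distance measurement error
    of a single first distance image as a function of the light
    intensity value s1 (assumed model: sigma = a / sqrt(s))."""
    return a / math.sqrt(s1)

def imaging_number(sigma1: float, sigma_tg: float) -> int:
    """Function method: the smallest N with sigma1 / sqrt(N) <= sigma_tg,
    i.e. N >= (sigma1 / sigma_tg)**2 (averaging composition assumed)."""
    return max(1, math.ceil((sigma1 / sigma_tg) ** 2))

sigma1 = estimate_sigma1(400.0)              # intensity of a target pixel
print(imaging_number(sigma1, sigma_tg=1.0))  # -> 7 for sigma1 = 2.5
```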
Furthermore, when determining the imaging number, the image count determination unit 22 may estimate the distance measurement error in the second distance image in units of pixels in the light intensity image, or in units of pixel regions in the light intensity image. Specifically, the image count determination unit 22 may estimate the distance measurement error in the second distance image based on the light intensity value of, for example, a specific pixel of the target W, or based on the average value or the minimum value of the light intensity values in a specific pixel region (for example, a 3×3 pixel region) of the target W.
Further, when determining the imaging number, a single light intensity image may be acquired, or a plurality of light intensity images may be acquired. When a plurality of images are acquired, the image count determination unit 22 may estimate the distance measurement error in the second distance image based on the average value or the minimum value of the light intensity values of corresponding pixels among the plurality of light intensity images, or based on the average value or the minimum value of the light intensity values of corresponding pixel regions (for example, 3×3 pixel regions) among the plurality of light intensity images. By using the light intensity values of more pixels in this manner, it is possible to estimate the distance measurement error in the second distance image (and thus the imaging number of the first distance images) with higher accuracy, or to ensure with high certainty that the error is equal to or less than the target error.
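A sketch of the region-based estimation (numpy assumed; names illustrative): using the minimum intensity of a region yields a conservative, worst-case error estimate, while the mean yields a typical-case estimate:

```python
import numpy as np

def region_intensity(intensity: np.ndarray, cx: int, cy: int,
                     r: int = 1, use_min: bool = True) -> float:
    """Representative light intensity of a (2r+1) x (2r+1) pixel region
    (3x3 for r=1) centered on (cy, cx), to be fed into the error
    estimate: minimum for a conservative estimate, otherwise the mean."""
    region = intensity[cy - r: cy + r + 1, cx - r: cx + r + 1]
    return float(region.min() if use_min else region.mean())
```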
In addition, when determining the imaging number, the target error σTG may be a predetermined fixed value, or may be a designated value designated by the user. In the case of a designated value, the distance image capture system 1 may further comprise a target error designation unit 23 which designates the target error σTG. For example, it is preferable that the user interface be provided with a numerical input field or the like for the user to designate the target error σTG. By enabling designation of the target error σTG, it is possible to generate the second distance image with the target error in accordance with a user request.
In step S12, the distance measurement error in the second distance image is estimated based on (the image area of) the light intensity image. The estimation uses approximation formula 1, which represents the relationship between the light intensity value s in (the image area of) the light intensity image and the distance measurement variation σ in the first distance image, or linear or polynomial interpolation of a data table of light intensity values s and distance measurement variations σ. At this time, the distance measurement error in the second distance image may be estimated in units of pixels or pixel regions in (the image area of) the light intensity image, or in units of corresponding pixels or corresponding pixel regions between (the image areas of) the plurality of light intensity images.
In step S13, the distance measurement error σ1/√N in the second distance image is estimated based on the estimated distance measurement error σ1 of the first distance images and, for example, the reduction factor 1/√N obtained when the second distance image is generated by averaging the plurality of first distance images, and the imaging number N for which the estimated distance measurement error σ1/√N in the second distance image is equal to or less than the target error σTG is determined. When filter processing other than averaging processing is adopted, a different reduction factor is used to determine the imaging number N.
In step S14, it is determined whether or not the current imaging number n has reached the determined imaging number N. When the current imaging number n has not reached the determined imaging number N in step S14 (NO in step S14), the process proceeds to step S15, a further first distance image is acquired (n=n+1), and in step S16, the process of compositing (the image areas of) the first distance images and generating the second distance image (by performing an averaging process or the like) is repeated. When the current imaging number n has reached the determined imaging number N in step S14 (YES in step S14), the compositing process of the first distance images is complete, and the second distance image at this time becomes the final second distance image.
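In outline, steps S13 to S16 can be sketched as follows (the `acquire_image` callable and the running-average composition are assumptions for illustration):

```python
import math

def function_method_capture(acquire_image, sigma1, sigma_tg):
    """Determine N from the estimated error (step S13), then acquire and
    composite that many first distance images (steps S14 to S16)."""
    n_target = max(1, math.ceil((sigma1 / sigma_tg) ** 2))  # step S13
    second = None
    for n in range(1, n_target + 1):                        # steps S14, S15
        img = acquire_image()
        # Running average, so a composite exists after every acquisition.
        second = img if second is None else second + (img - second) / n
    return second                                           # final image (S16)
```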
Next, the imaging number determination method using the sequential method will be described. The distance measurement variation in the first distance images follows a generally normal distribution, and when the distance measurement error in the first distance images to be estimated is expressed by its standard deviation σn, the distance measurement error of the second distance image, obtained by imaging the first distance image n times and averaging the distances of corresponding pixels, is reduced to σn/√n. Considering that the distance measurement error σn/√n in the second distance image reduced in this manner should be equal to or less than the target error σTG, the following formula is obtained: σn/√n ≤ σTG.
When this formula is transformed, the following determination formula 4, which relates the variance σn² to the imaging number n, is obtained: n ≥ σn²/σTG².
σn² is the value referred to in statistics as the variance, and when the average of the n data x1 to xn is defined as μn, the variance σn² is as indicated in the following formula: σn² = (1/n)·Σ(xi - μn)², where the summation Σ runs over i = 1 to n.
Here, the average μn and variance σn² can be obtained by sequential calculation each time a new distance measurement value xn is obtained, as shown in the following formulas: μn = μ(n-1) + (xn - μ(n-1))/n and σn² = {(n-1)·σ(n-1)² + (xn - μ(n-1))·(xn - μn)}/n.
Thus, every time a distance measurement value is obtained by imaging, by sequentially calculating the average μn and variance σn² and evaluating determination formula 4, which represents the relationship between the variance σn² and the imaging number n, it can be estimated whether the distance measurement error σn/√n of the average μn (i.e., of the second distance image) is equal to or less than the target error σTG, whereby the imaging number n is automatically determined. If a different composition method is used and the degree of reduction of the distance measurement error with respect to the imaging number n differs, the right side of determination formula 4 may be multiplied by the ratio of the degrees of reduction before performing the determination.
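A sketch of the sequential method for a single pixel, assuming averaging composition (names are illustrative): the average and variance are updated online using the sequential formulas above, and acquisition stops once determination formula 4, n ≥ σn²/σTG², holds:

```python
def sequential_imaging_number(measurements, sigma_tg, k_min=3):
    """Consume distance measurements for one pixel, updating the average
    and (population) variance sequentially, and return (n, average) as
    soon as n >= sigma_n^2 / sigma_tg**2 and n >= k_min."""
    mean, m2, n = 0.0, 0.0, 0
    for x in measurements:
        n += 1
        delta = x - mean
        mean += delta / n              # mu_n = mu_(n-1) + (x - mu_(n-1)) / n
        m2 += delta * (x - mean)       # running sum of squared deviations
        var = m2 / n                   # sigma_n^2
        # Determination formula 4, with a minimum-count guard against
        # accidentally similar early values (cf. n >= K, described later).
        if n >= k_min and n >= var / sigma_tg ** 2:
            return n, mean
    return n, mean  # measurements exhausted before the criterion was met

import random
random.seed(0)
stream = (random.gauss(1000.0, 5.0) for _ in range(200))  # simulated pixel
print(sequential_imaging_number(stream, sigma_tg=1.0))
```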
Furthermore, when determining the imaging number, though the image count determination unit 22 sequentially calculates the variance σn² of the distance measurement values in units of corresponding pixels between the plurality of first distance images, when compositing only the image area of the target W having a surface of a certain height when viewed from the distance measurement sensor 10, the variance σn² may be sequentially calculated in units of corresponding pixel regions (for example, 3×3 pixel regions) among the plurality of first distance images. By using the distance measurement values of more pixels in this way, the imaging number can be further reduced and wasted time can be reduced.
Further, when determining the imaging number, the target error σTG may be a predetermined fixed value, or may be a value designated by the user. For example, when the target error σTG is designated as 1 cm, the right-hand side value σn²/σTG² = σn²/1² of determination formula 4 becomes the sequentially calculated variance σn² itself.
In step S22, a further first distance image is acquired (n=n+1), and in step S23, (the image areas of) the plurality of first distance images are composited to generate a second distance image (by performing an averaging process or the like). When the compositing process of the first distance images in step S23 is not an averaging process for averaging the distance for each corresponding pixel, the compositing process may be performed after determining the imaging number n (i.e., after step S25).
In step S24, the variance σn² of the distance measurement values required for estimating the distance measurement error in the second distance image is sequentially calculated. At this time, the variance σn² may be calculated in units of corresponding pixels of (the image areas of) the plurality of first distance images or in units of corresponding pixel regions in (the image areas of) the plurality of first distance images.
In step S25, it is determined whether the imaging number n satisfies determination formula 4, which represents the relationship between the sequentially calculated variance σn² and the imaging number n. That is, by determining whether acquisition of the first distance images should end, the imaging number n of the first distance images is automatically determined.
When the imaging number n does not satisfy determination formula 4 in step S25 (NO in step S25), the process returns to step S22 and a further first distance image is acquired.
When the imaging number n satisfies the determination formula 4 in step S25 (YES in step S25), the acquisition of first distance images is ended, and the second distance image at this time becomes the final second distance image.
When, contrary to the underlying variation of the distance measurement values, the first few distance measurement values happen to be similar, there is a risk that the sequentially calculated variance σn² becomes small and determination formula 4 is satisfied even though the error of the second distance image is not equal to or less than the desired value. In order to eliminate this risk, a determination step of n ≥ K (where K is the minimum imaging number) may be provided before the determination in step S25.
The loop from step S22 to step S25 may be continued until determination formula 4 is established for all pixels of the entire region of the first distance images or of the image area designated in step S21. Alternatively, in consideration of pixel failure, the loop may be exited when determination formula 4 is established for a predetermined ratio of the pixels in the image area, or a maximum imaging number may be designated and the loop exited when the maximum imaging number is exceeded, as sketched below. Thus, the distance image capture system 1 may comprise a minimum imaging number designation unit, an establishment ratio designation unit for designating an establishment ratio of determination formula 4, and a maximum imaging number designation unit. For example, it is preferable that the user interface be provided with a numerical input field or the like for the user to designate these.
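A sketch of this loop-exit logic over an image area (numpy assumed; the defaults and names are illustrative): the loop ends when determination formula 4 is established for a designated ratio of the pixels, or when the maximum imaging number is reached, with the minimum imaging number as a guard:

```python
import numpy as np

def should_stop(n: int, variances: np.ndarray, sigma_tg: float,
                ratio: float = 0.95, n_min: int = 3, n_max: int = 50) -> bool:
    """Exit criterion for the sequential acquisition loop. `variances`
    holds the sequentially calculated sigma_n^2 of every pixel (or pixel
    region) in the designated image area."""
    if n < n_min:
        return False  # guard against accidentally similar early values
    if n >= n_max:
        return True   # designated maximum imaging number reached
    satisfied = n >= variances / sigma_tg ** 2  # formula 4, per pixel
    return satisfied.mean() >= ratio            # established for enough pixels
```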
Next, a modified example of designating the degree of reliability of the distance measurement error in the second distance image will be described. Generally, when the variation of values is normally distributed, though the mean value can be estimated with high accuracy by increasing the number of samples, an error remains with respect to the true mean value. Thus, statistically, the margin of error E of a confidence interval is defined in relation to the number of samples n and the standard deviation σ as E = z·σ/√n, where z is the confidence coefficient.
Thus, in the case of the function method, the imaging number N for achieving the target error σTG with a degree of reliability of 95% can be obtained from the estimated distance measurement error σ1 in the first distance image by the following formula: N ≥ (1.96·σ1/σTG)².
Similarly, in the sequential method, whether or not the imaging number n achieves the target error σTG with a degree of reliability of 95% can be determined by the following formula: n ≥ 1.96²·σn²/σTG².
Here, in the case of a 95% confidence interval, the confidence coefficient is 1.96; in the case of a 90% confidence interval, it is 1.65; and in the case of a 99% confidence interval, it is 2.58. Further, when the confidence coefficient is 1, the degree of reliability is 68.3%. Thus, the imaging number determined by the function method and the sequential method described above is an imaging number for which the estimated distance measurement error is equal to or less than the target error σTG with a degree of reliability of 68.3%.
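A sketch combining the reliability designation with the function method (the table of confidence coefficients follows the values quoted above; names are illustrative):

```python
import math

# Confidence coefficients quoted in the text, keyed by degree of
# reliability in percent.
Z = {68.3: 1.0, 90.0: 1.65, 95.0: 1.96, 99.0: 2.58}

def imaging_number_with_reliability(sigma1: float, sigma_tg: float,
                                    reliability: float = 95.0) -> int:
    """Function method with reliability: N >= (z * sigma1 / sigma_tg)**2."""
    z = Z[reliability]
    return max(1, math.ceil((z * sigma1 / sigma_tg) ** 2))

print(imaging_number_with_reliability(sigma1=2.5, sigma_tg=1.0))  # -> 25
```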
Designating the target error together with a degree of reliability in this manner enables a more intuitive designation with respect to tolerance, whereby a second distance image having a degree of reliability corresponding to the request of the user can be generated.
The programs executed by the processor described above and the programs for executing the flowcharts described above may be recorded and provided on a computer-readable non-transitory recording medium such as a CD-ROM, or may be distributed and provided, in a wired or wireless manner, from a server device on a WAN (wide area network) or LAN (local area network).
According to the embodiment described above, since the imaging number is automatically adjusted, there can be provided an image compositing technology which achieves stable distance measurement accuracy and a reduction of wasted time, even if the target W changes.
Though various embodiments have been described herein, it should be noted that the invention is not limited to the embodiments described above and can be modified within the scope described in the claims.
Priority data: Application No. 2020-043475, filed Mar 2020, JP (national).
Filing document: PCT/JP2021/009022, filed 3/8/2021 (WO).