The present invention relates to a processing device, a processing method, a system, and an article manufacturing method.
A method is known in which a distance is measured by projecting pattern light onto an object using a projection unit such as a projector and specifying a position of the pattern light from image data obtained by imaging the object using an imaging unit such as a camera. A luminance value of the pattern light in the image data can be too high or too low depending on conditions of the object, such as the reflectivity of its surface and its posture, making it difficult to specify the position of the pattern light with high accuracy. In order to specify the position of the pattern light with high accuracy regardless of the conditions of the object, it is necessary to appropriately adjust at least one of the illuminance of the pattern light on the surface of the object and the exposure amount in the imaging unit.
There is a method of measuring a three-dimensional shape in which an optical cutting line is extracted after a difference between a peak luminance value of projected light and a peak luminance value of background light in image data is adjusted by controlling an exposure amount of an imaging unit (Japanese Patent Laid-Open No. 2009-250844). In addition, there is a method of improving the recognizability of details of image data by adjusting, under exposure amount control, a proportion of pixels having luminance equal to or greater than a high luminance threshold value and a proportion of pixels having luminance equal to or less than a low luminance threshold value in the image data (Japanese Patent No. 4304610).
However, since it is difficult to clearly separate projected light from background light in a measurement method based on pattern light projection, the method of Japanese Patent Laid-Open No. 2009-250844 is difficult to apply. In addition, in the method of Japanese Patent No. 4304610, adjustment of luminance values in image data cannot be regarded as sufficient because it does not consider the luminance distribution of medium luminance pixels between the high luminance threshold value and the low luminance threshold value.
The present invention provides, for example, a processing device which obtains a luminance value distribution in image data and is advantageous in terms of measurement accuracy.
A processing device of the present invention includes an imaging unit for obtaining image data by imaging an object and a control unit for controlling the imaging unit, wherein the control unit determines a condition for imaging on the basis of a magnitude of a luminance distribution corresponding to the image data.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereinafter, embodiments for realizing the present invention will be described with reference to the drawings.
The projection unit 20 includes a light source 21, an illumination optical system 22, a display element 23, a projection diaphragm 24, and a projection optical system 25. As the light source 21, various types of light emitting elements such as a halogen lamp and an LED are used. The illumination optical system 22 is an optical system having the functions of making the intensity of light emitted from the light source 21 uniform and guiding it to the display element 23; an optical system such as Koehler illumination optics or a diffuser plate is used.
The display element 23 is an element having a function of spatially controlling the transmittance or reflectivity of light from the illumination optical system 22 in accordance with a predetermined pattern of the light projected onto the object W. For example, a transmissive liquid crystal display (LCD), a reflective liquid crystal on silicon (LCOS) device, or a digital micro-mirror device (DMD) is used. The predetermined pattern is generated by the control unit 40 and output to the display element 23. Alternatively, the pattern may be generated by a device different from the control unit 40, or by a device (not shown) in the projection unit 20.
The projection diaphragm 24 is used to control an F value of the projection optical system 25. As the F value decreases, the amount of light passing through the lens of the projection optical system 25 increases. The projection optical system 25 is an optical system configured to image light guided from the display element 23 at a specific position on the object W.
The imaging unit 30 includes an imaging element 31, an imaging diaphragm 32, and an imaging optical system 33. As the imaging element 31, various types of photoelectric conversion elements such as a CMOS sensor and a CCD sensor are used. An analog signal photoelectrically converted by the imaging element 31 is converted into a digital image signal by a device (not shown) in the imaging unit 30. The device generates an image (captured image) composed of pixels each having a luminance value based on the digital image signal, and outputs the generated captured image to the control unit 40. The imaging diaphragm 32 is used to control an F value of the imaging optical system 33. The imaging optical system 33 is an optical system configured to image a specific position of the object W on the imaging element 31.
The imaging unit 30 captures an image of the object W each time the pattern of the light projected onto the object W from the projection unit 20 is changed, and acquires a captured image for each of a plurality of patterns. The control unit 40 causes the imaging unit 30 and the projection unit 20 to operate in synchronization with each other.
The control unit 40 includes a determination unit 41, an adjustment unit 42, and a position posture calculation unit 43. The determination unit 41 determines measurement conditions (also referred to as imaging conditions), including the exposure amount of the imaging unit 30 at the time of measuring the object W and the illuminance of the light projected by the projection unit 20, on the basis of a spread magnitude of the luminance value distribution of the pixels constituting a captured image acquired by the imaging unit 30. Here, the spread magnitude indicates the extent of the luminance value distribution, expressed for example as its size, breadth, area, or dimension. The adjustment unit 42 adjusts at least one of the projection unit 20 and the imaging unit 30 on the basis of the measurement conditions determined by the determination unit 41 so that the spread magnitude of the luminance value distribution falls within an allowable range.
The exposure amount of the imaging unit 30 is adjusted by adjusting at least one of an exposure time (also referred to as a shutter speed) under control of the imaging element 31 and the F value of the imaging optical system 33. The luminance value of each pixel constituting a captured image increases as the exposure time is extended. In addition, as the F value decreases, the luminance value of each pixel increases. Since the depth of field of the imaging optical system 33 changes under control of the imaging diaphragm 32, the imaging diaphragm 32 is controlled in consideration of the amount of this change.
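For reference, these two adjustments combine in the standard photographic relation (an assumption added here, not an explicit statement of this disclosure): the exposure amount E is approximately proportional to t/N², where t is the exposure time and N is the F value. For example, halving the exposure time while reducing the F value from 4 to 2.8 multiplies E by (1/2)·(4²/2.8²) ≈ 1.02, leaving it nearly unchanged.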
The illuminance of the light projected onto the object W by the projection unit 20 is adjusted by adjusting any one of an emission luminance of the light source 21, a display gradation value of the display element 23, and an F value of the projection optical system 25. If a halogen lamp is used as the light source 21, as an applied voltage increases, the emission luminance increases and the illuminance increases. If an LED is used as the light source 21, as a current flowing in the LED increases, the emission luminance increases and the illuminance increases.
If the transmissive LCD is used as the display element 23, as the display gradation value increases, the transmittance increases and the illuminance increases. If the reflective LCOS is used as the display element 23, as the display gradation value increases, the reflectivity increases and the illuminance increases. If the DMD is used as the display element 23, as the display gradation value increases, the ON time of the mirrors per frame increases and the illuminance increases.
As the F value decreases, the illuminance of the light projected onto the object W increases. Since the depth of field of the projection optical system 25 changes under control of the projection diaphragm 24, the projection diaphragm 24 is controlled in consideration of the amount of this change.
The position posture calculation unit 43 calculates a three-dimensional shape of the object W from a distance image captured by the imaging unit 30. The distance image is obtained by imaging the object W onto which light is projected in a line pattern in which a bright portion formed of bright lines and a dark portion formed of dark lines are alternately and periodically arranged. In addition, the position posture calculation unit 43 calculates a two-dimensional shape of the object W from a grayscale image captured by the imaging unit 30. The grayscale image is obtained by imaging the object W which is uniformly illuminated. The three-dimensional shape is obtained, for example, by calculating a distance from the imaging unit 30 to the object W using a space coding method. The position posture calculation unit 43 obtains a position and a posture of the object W using the three-dimensional shape and a CAD model of the object W.
In the space coding method used in the present embodiment, first, a waveform consisting of the luminance values of pixels constituting a captured image of the object W onto which light is projected in a line pattern (hereinafter referred to as positive pattern light) including a bright portion and a dark portion is obtained. Next, a waveform consisting of the luminance values of pixels constituting a captured image of the object W onto which light is projected in a line pattern (hereinafter referred to as negative pattern light) in which the bright portion and the dark portion of the line pattern light are inverted is obtained. A plurality of intersection positions between the two obtained waveforms are regarded as positions of the line pattern light. In addition, in the positive pattern, a spatial code "1" is given to the bright portion and a spatial code "0" is given to the dark portion. The same processing is performed while the width of the line pattern light is changed. It is possible to determine an emission direction (projection direction) of the line pattern light from the projection unit 20 by combining and decoding the spatial codes obtained with different line widths. A distance from the imaging unit 30 to the object W is calculated on the basis of this emission direction and the position of the line pattern light.
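As an illustration of the intersection calculation described above, the following is a minimal sketch (not taken from this disclosure): it locates sub-pixel crossings between the positive- and negative-pattern luminance waveforms of one image row by linear interpolation. The array contents and function name are assumptions for illustration.

```python
import numpy as np

def intersection_positions(pos, neg):
    """Sub-pixel positions where the positive and negative waveforms cross."""
    d = pos.astype(float) - neg.astype(float)
    # A sign change of the difference between adjacent pixels brackets a crossing.
    idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]
    # Linear interpolation between the two bracketing pixels.
    frac = d[idx] / (d[idx] - d[idx + 1])
    return idx + frac

pos = np.array([10, 60, 200, 220, 90, 20], dtype=float)    # positive pattern row
neg = np.array([230, 180, 50, 40, 150, 210], dtype=float)  # negative pattern row
print(intersection_positions(pos, neg))  # crossings near pixels 1.44 and 3.75
```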
The determination unit 41 determines a measurement condition including illuminance of light and an exposure amount so that luminance values at the plurality of intersection positions between the two waveforms fall within a predetermined range. If a luminance value at an intersection position is calculated based on luminance values in the vicinity of the intersection position, it is desirable to determine a measurement condition so that the highest of the luminance values in the vicinity of the intersection position falls within the predetermined range. If a two-dimensional shape is calculated using a grayscale image, for example, a measurement condition is determined so that luminance values of the entire grayscale image fall within the predetermined range. Cases in which the luminance values fall outside the predetermined range include, for example, a case in which the luminance at intersection positions is too low and blackened and a case in which it is too high and saturated.
In order to improve the accuracy in specifying a position of pattern light, the luminance distribution needs to spread over the entirety of the effective luminance region. Furthermore, if the number of pixels included in the effective luminance region is increased as much as possible, the accuracy can be improved further. As an evaluation criterion of the luminance value distribution, a standard deviation of the luminance value distribution can be used, and other evaluation criteria can also be applied. Here, three other evaluation criteria are exemplified.
The first evaluation criterion is the number of bins whose frequencies are equal to or greater than a predetermined value (for example, more than one fourth of a frequency maximum value) among bins included in the effective luminance region. As the number of bins increases, it is possible to evaluate that the luminance value distribution spreads to the entirety of the effective luminance region (the spread of the luminance value distribution is large, the bias of the luminance value distribution is small, and the uniformity (flatness) of the luminance value histogram is high).
The second evaluation criterion is the number of bins whose frequencies fall within a predetermined range among the bins included in the effective luminance region. In the same manner as the first evaluation criterion, as the number of bins increases, it is possible to evaluate that the spread of the luminance value distribution is large. The third evaluation criterion is an entropy value in the case in which a luminance histogram of a two-dimensional image is regarded as a probability distribution. Entropy is a statistical measure of randomness and is defined by −Σ p·log₂(p). Here, p is a frequency of the luminance value histogram normalized so that the sum is one. As the entropy increases, the spread of the luminance value distribution can be evaluated to be large.
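A hedged sketch of these three criteria follows, assuming a 32-bin histogram over an effective luminance region of [10, 245] and illustrative frequency thresholds; none of these specific values appear in the text above.

```python
import numpy as np

def spread_criteria(luminances, lo=10, hi=245, bins=32):
    hist, _ = np.histogram(luminances, bins=bins, range=(lo, hi))
    # Criterion 1: bins whose frequency exceeds a quarter of the maximum frequency.
    n_active = int(np.sum(hist > hist.max() / 4))
    # Criterion 2: bins whose frequency lies inside a predetermined range (assumed bounds).
    n_in_range = int(np.sum((hist >= 5) & (hist <= 500)))
    # Criterion 3: entropy of the normalized histogram, -sum(p * log2 p).
    p = hist / hist.sum()
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return n_active, n_in_range, entropy
```

For all three values, larger means a wider, flatter luminance distribution.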
The number of pixels included in the effective luminance region can be evaluated, for example, on the basis of the proportion of pixels included in the effective luminance region among all pixels constituting a captured image. That is, the proportion of pixels included in the effective luminance region is obtained by subtracting the number of pixels in the blackened region and the number of pixels in the saturated region from the total number of pixels, and dividing the result by the total number of pixels.
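In code, this proportion is a short calculation; the blackened and saturated thresholds below are assumed values for illustration.

```python
import numpy as np

def effective_pixel_ratio(img, lo=10, hi=245):
    total = img.size
    blackened = int(np.sum(img <= lo))   # pixels in the blackened region
    saturated = int(np.sum(img >= hi))   # pixels in the saturated region
    return (total - blackened - saturated) / total
```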
The curved line L1 may also be obtained by normalizing the number of bins whose frequencies are equal to or greater than a predetermined value among the bins included in the effective luminance region, the number of bins whose frequencies fall within a predetermined range among those bins, the entropy (described above) of the luminance value histogram, or the like. The curved line L2 represents the change, across the plurality of captured images, in the proportion of pixels included in the effective luminance region.
If the optimum measurement condition is obtained from the curved line L1 alone, the exposure amount at which the curved line L1 has its maximum value is the optimum measurement condition. If the curved line L2 is also taken into consideration, the exposure amount at which the curved line L3 (for example, L1×L2) obtained by combining the curved lines L1 and L2 has its maximum value is the optimum measurement condition.
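A minimal sketch of this selection, assuming L1 and L2 have been sampled at the same candidate exposure amounts:

```python
import numpy as np

def optimum_exposure(exposures, l1, l2):
    # L3 = L1 * L2, following the combination exemplified in the text.
    l3 = np.asarray(l1) * np.asarray(l2)
    return exposures[int(np.argmax(l3))]
```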
In a process of S105, the determination unit 41 obtains a plurality of intersections between a luminance value distribution of the pixels constituting the captured image captured in the process of S103 and a luminance value distribution of the pixels constituting the captured image captured in the process of S104. A process of S106 and a process of S107 are flows for obtaining the curved line L1 described above, and a process of S108 to a process of S111 are flows for obtaining the curved line L2 described above. These flows may proceed in parallel or separately in sequence.
In the process of S106, the determination unit 41 classifies the plurality of intersections obtained in the process of S105 into respective sections of the luminance value histogram on the basis of luminance values of the plurality of intersections. In a process of S107, the determination unit 41 calculates a standard deviation of the histogram.
In a process of S108, the determination unit 41 counts the total number of the plurality of intersections obtained in the process of S105. In a process of S109, the determination unit 41 counts the number of intersections included in the blackened region. In a process of S110, the determination unit 41 counts the number of intersections included in the saturated region. In a process of S111, the determination unit 41 calculates a proportion of intersections included in the effective luminance region by subtracting the number of intersections included in the blackened region and the number of intersections included in the saturated region from the total number of intersections, and dividing the result by the total number of intersections.
In a process of S112, the determination unit 41 determines whether imaging for the N types of exposure times determined in the process of S101 has been completed. If it is determined that imaging has not been completed (i&lt;N), one is added to i in a process of S113 and the next exposure time is set in the process of S102. If it is determined that imaging has been completed (i=N), in a process of S114, the determination unit 41 obtains the maximum value of the standard deviations obtained in the process of S107. In a process of S115, the determination unit 41 normalizes the standard deviations obtained in the process of S107 using the maximum value obtained in the process of S114. In a process of S117, the determination unit 41 multiplies the proportion obtained in the process of S111 by the normalized standard deviation obtained in the process of S115, and sets the exposure time at which the result of the multiplication is a maximum as the optimum exposure time.
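Putting the flow S101 to S117 together, a hedged end-to-end sketch follows. Here `capture` is a placeholder for the actual imaging of the positive and negative patterns at a given exposure time, `intersection_positions` is the helper sketched earlier, and the thresholds are the same assumed values as before.

```python
import numpy as np

def select_exposure_time(exposure_times, capture, lo=10, hi=245):
    stds, ratios = [], []
    for t in exposure_times:                        # S102-S104: image at each time
        pos_img, neg_img = capture(t)               # positive / negative patterns
        vals = []
        for row_p, row_n in zip(pos_img, neg_img):  # S105: per-row intersections
            x = intersection_positions(row_p, row_n)
            # Luminance at each intersection, via linear interpolation.
            vals.append(np.interp(x, np.arange(len(row_p)), row_p))
        vals = np.concatenate(vals)
        stds.append(vals.std())                     # S106-S107: spread of histogram
        ok = (vals > lo) & (vals < hi)               # S108-S111: effective proportion
        ratios.append(ok.mean())
    # S114-S117: normalize the spread measure, combine, and take the argmax.
    score = np.asarray(stds) / max(stds) * np.asarray(ratios)
    return exposure_times[int(np.argmax(score))]
```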
As described above, the processing device 1 of the present embodiment can determine an appropriate measurement condition regardless of the reflectivity of the surface of the object W. As a result, according to the present embodiment, it is possible to provide a processing device which obtains a luminance value distribution in image data and is advantageous in terms of measurement accuracy.
In the first embodiment, a case of measuring a position of line pattern light in calculating a three-dimensional shape using a space coding method was mainly described. In the present embodiment, a case of calculating a two-dimensional shape using a grayscale image obtained by projecting uniform light onto the object W will be described. The present embodiment differs from the first embodiment in that the pixels constituting a single captured image, rather than the intersections between the luminance distributions of two captured images, are used to determine a measurement condition.
In the processes of S106 and S107, a standard deviation of a histogram created on the basis of the luminance of intersections was calculated. In contrast, in the present embodiment, a histogram is created on the basis of the luminance of the pixels constituting a captured image obtained in a process of S203 (a process of S204), and a standard deviation is calculated (a process of S205).
In the process of S108 to the process of S111, a proportion of effective pixels was calculated on the basis of the total number of intersections. In the present embodiment, the total number of pixels constituting the captured image obtained in the process of S203 (a process of S206), the number of pixels in a blackened region (a process of S207), and the number of pixels in a saturated region (a process of S208) are each counted, and a proportion of effective pixels is calculated (a process of S209). A process of S210 and subsequent processes are the same as the process of S112 and subsequent processes.
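In this per-pixel variant, the same score is computed directly from the pixels of one grayscale image; a brief sketch, with the same assumed thresholds as before (the per-exposure normalization of the standard deviation happens in the S210 loop, as in the first embodiment):

```python
import numpy as np

def pixel_based_score(img, lo=10, hi=245):
    vals = img.ravel().astype(float)
    std = vals.std()                            # S204-S205: per-pixel histogram spread
    ratio = np.mean((vals > lo) & (vals < hi))  # S206-S209: effective pixel proportion
    return std, ratio                           # combined and normalized in S210+
```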
As described above, even for a grayscale image obtained by projecting uniform light, it is possible to determine an appropriate measurement condition regardless of the reflectivity of the surface of the object W, and the present embodiment thus has the same effect as the first embodiment.
The processing device described above is used in a state in which it is supported by a support member. In the present embodiment, as an example, a control system which is installed in and used with a robot arm 400 (gripping device) will be described.
In the button region 322, buttons for selecting a type of image to be displayed in the image display region 321 are disposed.
If the check box for the saturated region in the button region 322 is activated, the saturated region is highlighted, as indicated by diagonal lines in the drawing.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2017-017601, filed Feb. 2, 2017, which is hereby incorporated by reference herein in its entirety.