The present application claims priority under 35 U.S.C. §119 to Japanese Patent Application No. 2013-004888, filed Jan. 15, 2013. The contents of this application are incorporated herein by reference in their entirety.
1. Field of the Invention
The present invention relates to a recognition program evaluation device and to a method for evaluating a recognition program.
2. Discussion of the Background
Japanese Unexamined Patent Application Publication No. 2011-22133 recites a recognition device that generates an algorithm (recognition program) of a plurality of scripts combined to recognize a workpiece.
According to one aspect of the present embodiment, a recognition program evaluation device includes an imaginary data acquisition portion, an imaginary recognition portion, a recognition evaluation portion, and a result display portion. The imaginary data acquisition portion is configured to generate or acquire imaginary scene data that includes position data of each of a plurality of workpieces and indicates the plurality of workpieces in a randomly stacked state. The imaginary recognition portion is configured to recognize each of the plurality of workpieces indicated in the randomly stacked state using a recognition program including at least one parameter set to adjust recognition of the plurality of workpieces. The recognition evaluation portion is configured to compare the position data of each of the plurality of workpieces with a result of recognition of each of the plurality of workpieces recognized by the imaginary recognition portion so as to evaluate recognition performance of the recognition program. The result display portion is configured to cause a display of a result of evaluation of the recognition performance of the recognition program evaluated by the recognition evaluation portion.
According to another aspect of the present embodiment, a method for evaluating a recognition program includes generating or acquiring imaginary scene data that includes position data of each of a plurality of workpieces and indicates the plurality of workpieces in a randomly stacked state. Each of the plurality of workpieces in the imaginary scene data is recognized using a recognition program including a parameter set to adjust recognition of the plurality of workpieces. The position data of each of the plurality of workpieces is compared with a result of recognition of each of the plurality of workpieces so as to evaluate recognition performance of the recognition program. A result of evaluation of the recognition performance of the recognition program is displayed.
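By way of a non-limiting illustration, the four portions and the corresponding method steps could be arranged as in the following Python sketch. The function names, the pose representation (tuples of X, Y, Z, RX, RY, RZ), the noise model, and the tolerance value are assumptions made for this example only and are not taken from the embodiments.

```python
# Hypothetical skeleton of the evaluation flow; names and values are illustrative only.
import random

def acquire_imaginary_scene(num_workpieces):
    """Imaginary data acquisition: a scene is a list of ground-truth poses (X, Y, Z, RX, RY, RZ)."""
    return [tuple(random.uniform(-100.0, 100.0) for _ in range(6)) for _ in range(num_workpieces)]

def imaginary_recognition(scene, noise=1.0):
    """Imaginary recognition: simulated here as the true poses plus noise, standing in for a recognition program."""
    return [tuple(v + random.gauss(0.0, noise) for v in pose) for pose in scene]

def evaluate_recognition(scene, results, tolerance=3.0):
    """Recognition evaluation: count results that match the position data within an allowable range."""
    hits = sum(all(abs(r - t) <= tolerance for r, t in zip(res, true))
               for true, res in zip(scene, results))
    return hits / len(scene) if scene else 0.0

def display_result(score):
    """Result display: report the evaluation of the recognition performance."""
    print(f"recognition performance: {score:.1%}")

if __name__ == "__main__":
    scene = acquire_imaginary_scene(num_workpieces=20)
    display_result(evaluate_recognition(scene, imaginary_recognition(scene)))
```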
A more complete appreciation of the present disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
The embodiments will now be described with reference to the accompanying drawings, wherein like reference numerals designate corresponding or identical elements throughout the various drawings.
By referring to
As shown in
For the robot 2 to grip randomly stacked workpieces 200 (see
As shown in
The sensor unit 4 uses a measurement unit (not shown) including a camera to pick up an image of the plurality of workpieces 200 randomly stacked in the stocker 5, and acquires a three-dimensional image (distance image) that includes pixels of the picked up image and distance information corresponding to the pixels. Based on the acquired distance image, the sensor unit 4 recognizes three-dimensional positions and postures of the workpieces 200 using a recognition program. The recognition program includes scripts (commands) indicating functions to perform image processing and blob analysis, among other processings. The plurality of scripts are arranged with parameters that are set as conditions under which the scripts are executed. Thus, adjustments are made for a recognition program (algorithm) suitable for the shapes of the workpieces 200. That is, depending on conditions such as the shapes of the workpieces 200, the recognition program needs adjustment of the order of the scripts and adjustment of the parameters so as to accurately recognize the workpieces 200.
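By way of example only, a recognition program of this kind could be modeled as an ordered list of scripts, each carrying the parameters that serve as its execution conditions. The following Python sketch is illustrative; the class names, the stub scripts, and the parameter names (smoothing, min_blob_area) are hypothetical and do not appear in the embodiments.

```python
# Illustrative model of a recognition program as an ordered list of scripts,
# each executed under parameters set as its conditions; names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Script:
    name: str                               # e.g. "image_processing", "blob_analysis"
    run: Callable[[object, Dict], object]   # the processing performed by the script
    parameters: Dict[str, float] = field(default_factory=dict)

@dataclass
class RecognitionProgram:
    scripts: List[Script]

    def recognize(self, distance_image):
        data = distance_image
        for script in self.scripts:         # scripts are applied in their adjusted order
            data = script.run(data, script.parameters)
        return data                         # recognized positions and postures

# Example: two stub scripts standing in for image processing and blob analysis.
program = RecognitionProgram(scripts=[
    Script("image_processing", lambda img, p: img, {"smoothing": 2.0}),
    Script("blob_analysis", lambda img, p: [], {"min_blob_area": 50.0}),
])
```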
Here, in the first embodiment, the model editor portion 111 (the control portion 11) generates imaginary scene data that includes position data of each of the plurality of workpieces 200 and indicates the plurality of workpieces 200 in a randomly stacked state. Specifically, the model editor portion 111 uses three-dimensional data of one workpiece 200a (see
More specifically, the model editor portion 111 (the control portion 11) has its sample image generation portion 111a read three-dimensional CAD data (sample image) of the one workpiece 200a, and builds a random stack of a plurality of the workpieces 200a, thereby generating the imaginary scene data. In this respect, the model editor portion 111 has its dictionary data generation portion 111b acquire the position and posture of each of the plurality of randomly stacked workpieces 200a as position data of each of the plurality of workpieces 200a. Specifically, the dictionary data generation portion 111b acquires three-dimensional coordinates (X coordinate, Y coordinate, and Z coordinate) of each of the workpieces 200a, and acquires three-dimensional postures (rotational elements RX, RY, and RZ) of each of the workpieces 200a. The dictionary data generation portion 111b also has the storage portion 12 store the position data of each of the plurality of workpieces 200a as dictionary data.
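By way of example only, the generation of imaginary scene data and dictionary data could be sketched as follows. Placing copies of the workpiece at uniformly random poses is a simplification assumed for the example; the embodiment's actual random-stack simulation, the stocker dimensions, and the JSON storage format are not specified by the disclosure.

```python
# Simplified sketch: imaginary scene data built by placing copies of a single
# workpiece at random positions (X, Y, Z) and postures (RX, RY, RZ); the
# ground-truth poses are stored as dictionary data.
import json
import random

def generate_imaginary_scene(num_copies, stocker_size=(300.0, 300.0, 150.0)):
    scene = []
    for _ in range(num_copies):
        x = random.uniform(0.0, stocker_size[0])
        y = random.uniform(0.0, stocker_size[1])
        z = random.uniform(0.0, stocker_size[2])
        rx, ry, rz = (random.uniform(-180.0, 180.0) for _ in range(3))
        scene.append({"X": x, "Y": y, "Z": z, "RX": rx, "RY": ry, "RZ": rz})
    return scene

def store_dictionary_data(scenes, path="dictionary_data.json"):
    """Store the position data of every workpiece in every scene as dictionary data."""
    with open(path, "w") as f:
        json.dump(scenes, f, indent=2)

scenes = [generate_imaginary_scene(num_copies=30) for _ in range(5)]  # several patterns of scene data
store_dictionary_data(scenes)
```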
The model editor portion 111 (the control portion 11) also generates a plurality of pieces of imaginary scene data. That is, the model editor portion 111 generates various patterns of imaginary scene data.
Also in the first embodiment, the imaginary evaluation portion 112 (the control portion 11) recognizes the workpieces 200a in the imaginary scene data using a recognition program to recognize the workpieces 200a, and evaluates the result of recognition of each of the workpieces 200a. Specifically, the recognition portion 112a of the imaginary evaluation portion 112 uses a recognition program that includes parameters (see
Also the recognition portion 112a (the control portion 11) compares the position data (dictionary data) of each of the workpieces 200a with the result of recognition of each of the workpieces 200a so as to evaluate the recognition performance of the recognition program. Specifically, the recognition portion 112a uses a recognition program in which a parameter has been set by a user to recognize each individual workpiece 200a in the plurality of pieces of imaginary scene data (see
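By way of example only, the success/failure judgment against the dictionary data could be sketched as follows; the allowable ranges (2 mm for position, 5 degrees for posture) and the matching of each detection against any ground-truth pose in the scene are assumptions made for the illustration.

```python
# Hedged sketch of the success/failure judgment: a recognition result is counted
# as a success when it matches some ground-truth pose in the dictionary data
# within allowable ranges.  The tolerance values below are assumptions.
POSITION_TOLERANCE = 2.0   # mm, allowable range for X, Y, Z
POSTURE_TOLERANCE = 5.0    # degrees, allowable range for RX, RY, RZ

def is_success(result, truth):
    pos_ok = all(abs(result[k] - truth[k]) <= POSITION_TOLERANCE for k in ("X", "Y", "Z"))
    rot_ok = all(abs(result[k] - truth[k]) <= POSTURE_TOLERANCE for k in ("RX", "RY", "RZ"))
    return pos_ok and rot_ok

def judge_detections(results, dictionary_data):
    """Return, for each detection, whether it matches any workpiece in the scene's dictionary data."""
    return [any(is_success(res, truth) for truth in dictionary_data) for res in results]
```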
For example, the recognition portion 112a (the control portion 11) obtains a success ratio or a reproductivity ratio, among other exemplary evaluation values. The success ratio is represented by the equation: success ratio (%)=(the number of successfully detected workpieces 200a among the total of the workpieces 200a detected from all the pieces of scene data)/(the total of the workpieces 200a detected from all the pieces of scene data)×100. That is, the success ratio implies certainty and reliability of the result of detection of the workpieces 200a. Thus, the success ratio is effective when used as an evaluation indicator in production lines where certainty and reliability of the result of detection of the workpieces 200a are critical.
The reproductivity ratio is represented by the equation: reproductivity ratio (%)=(the number of successfully detected workpieces 200a)/(the number of workpieces 200a targeted for recognition in all the pieces of scene data)×100. That is, the reproductivity ratio implies the degree of detectability, indicating how many workpieces 200a existing in the scene data are detected. Thus, the reproductivity ratio is effective when used as an evaluation indicator in cases where as many workpieces 200a as possible are desired to be detected by one scanning (imaging) (that is, the number of scannings is to be decreased for the purpose of shortening the tact time), and where as many candidates as possible are desired to be detected in main processing, followed by post-processing where a selection is made among the candidates.
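By way of example only, the two evaluation values defined above could be computed from per-scene success/failure judgments as in the following sketch; the data layout (a list of Boolean flags per piece of scene data) is an assumption made for the illustration.

```python
# Sketch of the two evaluation values: detections_per_scene holds the per-scene
# success/failure flags for every detection, and targets_per_scene holds the
# number of workpieces targeted for recognition in each piece of scene data.
def success_ratio(detections_per_scene):
    detected = sum(len(flags) for flags in detections_per_scene)    # all detections from all scene data
    successful = sum(sum(flags) for flags in detections_per_scene)  # successfully detected workpieces
    return 100.0 * successful / detected if detected else 0.0

def reproductivity_ratio(detections_per_scene, targets_per_scene):
    successful = sum(sum(flags) for flags in detections_per_scene)
    targeted = sum(targets_per_scene)                               # workpieces targeted in all scene data
    return 100.0 * successful / targeted if targeted else 0.0
```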
Also the recognition portion 112a (the control portion 11) uses a plurality of different evaluation standards to evaluate the recognition performance of the recognition program. For example, as shown in
As shown in
In the example shown in
Interference is used to evaluate whether the detected workpieces 200a are actually grippable by the robot 2. Specifically, an interference ratio is represented by: interference ratio (%)=(the number of interference areas)/(the number of gripping areas)×100. When the detected workpieces 200a are lowest in their average or maximum interference ratio, the detected workpieces 200a are evaluated as being easy to grip. The number of gripping areas indicates the number of gripping areas (see
Accuracy indicates an error (difference and variation) between: the position (Xd, Yd, Zd) and posture (RXd, RYd, RZd) in the result of recognition of each of the workpieces 200a; and the position (Xc, Yc, Zc) and posture (RXc, RYc, RZc) of the position data of each of the workpieces 200a (correct data) in the dictionary data. The error is used to evaluate accuracy. That is, accuracy is an evaluation standard by which to evaluate whether the position and posture of the workpiece 200a are recognized more accurately. Thus, accuracy is effective when used as an evaluation indicator in cases where a more accurate grip is critical, such as in an assembly step.
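By way of example only, the interference ratio and the accuracy error could be computed as in the following sketch; the gripping-area and interference-area counts are assumed to come from a separate grip simulation that is not reproduced here.

```python
# Sketch of the interference and accuracy indicators.
def interference_ratio(num_interference_areas, num_gripping_areas):
    """interference ratio (%) = (number of interference areas) / (number of gripping areas) x 100."""
    return 100.0 * num_interference_areas / num_gripping_areas if num_gripping_areas else 0.0

def accuracy_error(result, truth):
    """Per-axis error between the recognized pose and the correct pose in the dictionary data."""
    return {k: abs(result[k] - truth[k]) for k in ("X", "Y", "Z", "RX", "RY", "RZ")}
```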
The result display portion 112b (the control portion 11) has the display portion 13 display a result of evaluation of the recognition performance of the recognition program evaluated by the recognition portion 112a. For example, as shown in
The script portion 113 (the control portion 11) sets the scripts (processings) (see
As shown in
Next, by referring to
When three-dimensional CAD data of a workpiece 200a targeted for recognition is input by the user's operation, then at step S1 the control portion 11 reads and acquires the three-dimensional CAD data of the single workpiece 200a targeted for recognition (see
At step S3, the control portion 11 calculates the position data (correct position and posture) and status (scattering ratio and loss ratio) of all workpieces 200a targeted for recognition existing in the prepared N pieces of scene data. The control portion 11 also has the storage portion 12 store the calculated position data and status as dictionary data.
At step S4, the control portion 11 accepts designation of the recognition program targeted for evaluation. Specifically, the control portion 11 accepts, by the user's operation, setting of the scripts (processings) of the recognition program and the parameters of the scripts. At step S5, the control portion 11 sets i at i=1. At step S6, the control portion 11 executes the recognition program with respect to i-th scene data to recognize the workpieces 200a and acquire the result of recognition.
At step S7, the control portion 11 determines whether i is smaller than N. That is, the control portion 11 determines whether the recognition program has been executed with respect to all of the N pieces of scene data. When i<N (that is, when the recognition program has not been executed with respect to all of the N pieces of scene data), then at step S8 the control portion 11 sets i at i=i+1 and returns to step S6. When i=N (that is, when the recognition program has been executed with respect to all of the N pieces of scene data), then at step S9 the control portion 11 aggregates (evaluates) the acquired results of recognitions and displays the results on the display portion 13 (see
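By way of example only, the flow of steps S1 to S9 could be condensed into the following sketch. The scene generation, position-data calculation, recognition, and aggregation steps are passed in as callables because the embodiment's own implementations are not reproduced; the trivial stand-ins at the end exist only so the sketch runs.

```python
# Condensed, hypothetical sketch of steps S1 to S9.
def run_evaluation(generate_scene, compute_position_data, recognize, aggregate, display, N):
    scenes = [generate_scene() for _ in range(N)]                    # S1-S2: prepare N pieces of scene data
    dictionary_data = [compute_position_data(s) for s in scenes]     # S3: store correct positions and postures
    results = []
    i = 1                                                            # S5: i = 1
    while True:
        results.append(recognize(scenes[i - 1]))                     # S6: recognize the i-th scene data
        if i >= N:                                                   # S7: executed for all N pieces?
            break
        i += 1                                                       # S8: i = i + 1
    display(aggregate(results, dictionary_data))                     # S9: aggregate and display the results

# Example wiring with trivial stand-ins:
run_evaluation(
    generate_scene=lambda: [],
    compute_position_data=lambda s: [],
    recognize=lambda s: [],
    aggregate=lambda r, d: f"{len(r)} scenes evaluated",
    display=print,
    N=3,
)
```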
In the first embodiment, as described above, the recognition portion 112a recognizes each of a plurality of workpieces 200a in the imaginary scene data that is generated by the model editor portion 111 and that indicates the plurality of workpieces 200a in a randomly stacked state. The recognition portion 112a also compares the position data of each of the workpieces 200a with the result of recognition of each of the workpieces 200a so as to evaluate the recognition performance of the recognition program. This ensures that accurate positions and postures of the plurality of workpieces 200a in the imaginary scene data acquired from the position data of each of the plurality of workpieces 200a are automatically compared with the result of recognition by the recognition portion 112a. This, in turn, reduces burden to the user in evaluating the recognition program to recognize workpieces 200a, as compared with the case of the user having to make a visual comparison between the workpieces 200a actually stacked in a random manner and the result of recognition in an attempt to evaluate the recognition performance of the recognition program. The imaginary scene data is generated without using actual workpieces 200. This ensures evaluation of the recognition program without using actual machines (such as the robot 2, the robot controller 3, and the sensor unit 4), but only by a simulation using the PC 1 alone. This, in turn, ensures adjustment of the recognition program in advance on the PC 1, and shortens the time to adjust the recognition program using the actual machines (the robot 2, the robot controller 3, and the sensor unit 4).
Also in the first embodiment, as described above, the recognition portion 112a (the control portion 11) obtains evaluation values (success ratio and reproductivity ratio) to be used to evaluate the recognition performance of the recognition program, and the result display portion 112b (the control portion 11) displays the evaluation values (success ratio and reproductivity ratio). This ensures that the user is notified of the recognition performance of the recognition program in the form of the evaluation values (success ratio and reproductivity ratio). This facilitates the user's adjustment of the parameters of the recognition program based on the evaluation values.
Also in the first embodiment, as described above, the recognition portion 112a (the control portion 11) evaluates the recognition performance of the recognition program using a plurality of different evaluation standards. The result display portion 112b (the control portion 11) displays results of evaluations that have used the plurality of different evaluation standards. This ensures adjustment of the parameters of the recognition program based on results of evaluations that have used evaluation standards corresponding to different applications of recognition of the workpieces 200a (applications of the robot system 100).
Also in the first embodiment, as described above, the model editor portion 111 (the control portion 11) generates imaginary scene data indicating a plurality of workpieces 200a in a randomly stacked state using three-dimensional data of one workpiece 200a. This facilitates generation of imaginary scene data in accordance with how many pieces of the to-be-recognized workpiece 200a are to be randomly stacked, in accordance with the shape of the to-be-recognized workpiece 200a, or in accordance with other features of the to-be-recognized workpiece 200a. This, in turn, ensures accurate evaluation of the recognition program.
Also in the first embodiment, as described above, the model editor portion 111 (the control portion 11) generates a plurality of pieces of imaginary scene data. The recognition portion 112a (the control portion 11) recognizes the workpieces 200a in the plurality of pieces of imaginary scene data using a recognition program, and compares the position data of each of the workpieces 200a with results of recognitions of the workpieces 200a in the plurality of pieces of imaginary scene data so as to evaluate the recognition performance of the recognition program. This ensures use of various patterns of imaginary scene data of randomly stacked workpieces 200a to evaluate the recognition program. This, in turn, increases the accuracy of evaluation of the recognition program.
Next, by referring to
Here, in the second embodiment, the recognition portion 112a (the control portion 11) recognizes each of the workpieces 200a in the imaginary scene data while changing the parameters of the recognition program. Specifically, as shown in
The recognition portion 112a (the control portion 11) also recognizes each of the workpieces 200a in the imaginary scene data using the recognition program while changing a plurality of parameters (for example, X, Y, Z, RX, RY, RZ). That is, by changing the parameters of the recognition program, the recognition portion 112a estimates a combination of parameters that could realize higher recognition performance. The recognition portion 112a also compares the position data of each of the workpieces 200a with the result of recognition of each of the workpieces 200a so as to evaluate the recognition performance of the recognition program for each of the parameters (see
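By way of example only, once an evaluation value (for example, the success ratio) has been obtained for each parameter set, the combination estimated to give higher recognition performance could be selected as in the following sketch; the parameter sets and scores shown are hypothetical.

```python
# Illustrative only: pick the parameter set with the best evaluation value.
def best_parameter_set(evaluations):
    """evaluations: dict mapping a parameter set (tuple of values) to its evaluation value."""
    return max(evaluations, key=evaluations.get)

example = {(0.5, 10): 72.0, (0.5, 20): 81.5, (1.0, 10): 64.0}   # hypothetical (parameter set -> success ratio)
print(best_parameter_set(example))                               # -> (0.5, 20)
```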
As shown in
Next, by referring to
When three-dimensional CAD data of a workpiece 200a targeted for recognition is input by the user's operation, then at step S11, the control portion 11 reads and acquires the three-dimensional CAD data of the single workpiece 200a targeted for recognition. At step S12, from the three-dimensional CAD data of the workpieces 200a, the control portion 11 prepares Ns pieces of scene data of random stacks.
At step S13, the control portion 11 calculates the position data (correct position and posture) and status (scattering ratio and loss ratio) of all workpieces 200a targeted for recognition existing in the prepared Ns pieces of scene data. The control portion 11 also has the storage portion 12 store the calculated position data and status as dictionary data.
At step S14, the control portion 11 accepts designation of the recognition program targeted for evaluation. Specifically, the control portion 11 accepts, by the user's operation, setting of the scripts (processings) of the recognition program and the parameters of the scripts. At step S15, the control portion 11 accepts, by the user's operation, designation (selection) of parameters of the recognition program targeted for estimation and setting of estimate ranges (upper limit, lower limit, and graded degrees).
At step S16, from the designated parameters and the estimate ranges, the control portion 11 generates all (Np) combinations of the parameters (parameter sets P1 to PNp). At step S17, the control portion 11 sets j at j=1, and at step S18, sets k at k=1. At step S19, the control portion 11 executes the recognition program with respect to k-th scene data at j-th parameter set Pj to recognize the workpieces 200a and acquire the result of recognition.
At step S20, the control portion 11 determines whether k is smaller than Ns. That is, the control portion 11 determines whether the recognition program has been executed with respect to all of the Ns pieces of scene data. When k<Ns (that is, when the recognition program has not been executed with respect to all of the Ns pieces of scene data), then at step S21, the control portion 11 sets k at k=k+1 and returns to step S19. When k=Ns (that is, when the recognition program has been executed with respect to all of the Ns pieces of scene data), then the control portion 11 proceeds to step S22.
At step S22, the control portion 11 determines whether j is smaller than Np. That is, the control portion 11 determines whether the recognition program has been executed with respect to all of the Np combinations (parameter sets) of the parameters. When j<Np (that is, when the recognition program has not been executed with respect to all of the Np parameter sets), then at step S23, the control portion 11 sets j at j=j+1 and returns to step S18. When j=Np (that is, when the recognition program has been executed with respect to all of the Np parameter sets), then at step S24, the control portion 11 aggregates (evaluates) the acquired results of recognitions and displays the results on the display portion 13 (see
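By way of example only, steps S15 to S24 could be sketched as follows: parameter sets P1 to PNp are generated from the designated parameters and their estimate ranges (lower limit, upper limit, and graded degrees), and the recognition program is executed for every parameter set and every piece of scene data. The parameter names, the ranges, and the stand-in recognizer below are assumptions made for the illustration.

```python
# Hypothetical sketch of the parameter sweep of steps S15 to S24.
import itertools

def graded_range(lower, upper, step):
    """Values from the lower limit to the upper limit in the given graded degrees (upper limit included)."""
    values, v = [], lower
    while v <= upper + 1e-9:
        values.append(round(v, 10))
        v += step
    return values

def generate_parameter_sets(ranges):
    """ranges: {parameter name: (lower, upper, step)} -> list of {name: value} combinations P1..PNp."""
    names = list(ranges)
    grids = [graded_range(*ranges[n]) for n in names]
    return [dict(zip(names, combo)) for combo in itertools.product(*grids)]

def sweep(recognize, scenes, ranges):
    results = {}
    for j, params in enumerate(generate_parameter_sets(ranges), start=1):   # S17, S22-S23: loop over Np sets
        results[j] = [recognize(scene, params) for scene in scenes]          # S18-S21: loop over Ns scenes
    return results                                                           # aggregated and displayed at S24

# Example with a trivial stand-in recognizer:
ranges = {"min_blob_area": (50.0, 150.0, 50.0), "smoothing": (1.0, 2.0, 1.0)}
out = sweep(lambda scene, p: len(scene), scenes=[[], [], []], ranges=ranges)
print(len(out), "parameter sets evaluated")   # 3 x 2 = 6 parameter sets
```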
The second embodiment is otherwise similar to the first embodiment.
In the second embodiment, as described above, the recognition portion 112a (the control portion 11) recognizes each of the workpieces 200a in the imaginary scene data while changing the parameters of the recognition program. Also the recognition portion 112a (the control portion 11) compares the position data of each of the workpieces 200a with the result of recognition of each of the workpieces 200a so as to evaluate the recognition performance of the recognition program for each of the parameters. Thus, the recognition portion 112a changes the parameters of the recognition program so that the recognition program is evaluated for each of the parameters. This reduces burden to the user as compared with the case of the user having to manually change the parameters.
Also in the second embodiment, as described above, the recognition portion 112a (the control portion 11) recognizes each of the workpieces 200a in the imaginary scene data while changing a parameter of the recognition program between a lower limit and an upper limit set by the user. Thus, when each of the workpieces 200a in the imaginary scene data is to be recognized for each parameter so as to evaluate the recognition program, the parameter is changed between its lower limit and upper limit set by the user, in evaluation of the recognition program. This shortens the time for processing as compared with changing the parameter over its entire range.
Also in the second embodiment, as described above, the recognition portion 112a (the control portion 11) recognizes each of the workpieces 200a in the imaginary scene data while changing a plurality of parameters of the recognition program. The result display portion 112b (the control portion 11) displays the result of evaluation of the recognition performance of the recognition program for every combination of the plurality of changed parameters. Thus, the user is notified of the result of evaluation of the recognition program for every combination of the parameters. This ensures that based on the results of evaluations of the combinations of the parameters, the user selects a combination of the parameters of the recognition program. This, in turn, facilitates adjustment of the recognition program.
The advantageous effects of the second embodiment are otherwise similar to the advantageous effects of the first embodiment.
In the first and second embodiments, the PC (recognition program evaluation device) has been illustrated as generating imaginary scene data that indicates workpieces in a randomly stacked state from data of a single workpiece. The PC may otherwise acquire previously generated imaginary scene data that indicates workpieces in a randomly stacked state.
Also in the first and second embodiments, the robot arm of the robot has been illustrated as having six degrees of freedom. The robot arm may otherwise have other than six degrees of freedom (such as five degrees of freedom and seven degrees of freedom).
Also in the first and second embodiments, the PC (recognition program evaluation device) has been illustrated as evaluating the recognition program to recognize the positions of randomly stacked workpieces so that the robot grips the randomly stacked workpieces. It is also possible to evaluate other recognition programs than the recognition program associated with the robot's gripping of the workpieces. For example, it is possible to evaluate a recognition program to recognize the state of the workpieces after being subjected to work.
Also in the first and second embodiments, a plurality of evaluation standards are used in the evaluation. It is also possible to use, for example, a single evaluation standard. It is also possible to use evaluation standards other than the success ratio, the reproductivity ratio, robustness, interference, and accuracy in the evaluation.
Also in the first and second embodiments, for the sake of description, the processing by the control portion has been illustrated as using a flow-driven flow, in which the processing is executed in an order of processing flow. The processing operation of the control portion may otherwise be, for example, event-driven processing, which is executed on an event basis. In this case, the processing may be complete event-driven processing or may be a combination of the event-driven processing and the flow-driven processing.
Obviously, numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the present disclosure may be practiced otherwise than as specifically described herein.