The present disclosure relates to a robot system and a robot dialogue method provided with a function of communication between robots.
Conventionally, dialogue robots which hold a dialogue with a human are known. Patent Documents 1 and 2 disclose dialogue robots of this kind.
Patent Document 1 discloses a dialogue system provided with a plurality of dialogue robots. Utterance operation (language operation) and behavior (non-language operation) of each dialogue robot are controlled according to a preset script. While the plurality of robots have a conversation according to the script, a robot occasionally addresses a human participant (asks the human a question or seeks agreement) so as to give the human, who participates in the conversation with the plurality of robots, a conversation feel equivalent to that obtained when having a conversation with other humans.
Moreover, Patent Document 2 discloses a dialogue system provided with a dialogue robot and a life support robot system. The life support robot system autonomously controls appliances (electrical household appliances provided with a communication function) which provide services for supporting human life. The dialogue robot acquires information on a user's living environment and activities, analyzes the user's situation, and decides a service to be provided to the user based on the situation. Then, the dialogue robot addresses the user with a voice message to make the user aware of the service to be provided. Then, when the dialogue robot determines, based on a reply of the user, that the user has been successfully drawn into the conversation with the robot, the robot transmits an execution demand for the service to the life support robot system or the appliance(s).
[Reference Documents of Conventional Art]
[Patent Document 1] JP2016-133557A
[Patent Document 2] WO2005/086051A1
The present inventors have examined a robot system which causes the dialogue robots described above to function as an interface between a work robot and a customer. In this robot system, the dialogue robot receives a request for a work from the customer, and the work robot performs the requested work.
However, in the robot system described above, the conversation feel established between the dialogue robot and the customer when receiving the request (i.e., the customer's feel of participating in the conversation with the dialogue robot) may be spoiled while the work robot performs the work, thereby boring the customer.
The present disclosure is made in view of the above situations, and proposes a robot system and a robot dialogue method which produce in a human a feel of participating in a conversation with a dialogue robot and a work robot, and maintain the feel, even while the work robot performs a work requested by the human.
A robot system according to one aspect of the present disclosure includes a work robot having a robotic arm and an end effector attached to a hand part of the robotic arm, and configured to perform a work using the end effector based on a request of a human, a dialogue robot having a language operation part and a non-language operation part, and configured to perform a language operation and a non-language operation toward the work robot and the human, and a communication device configured to communicate information between the dialogue robot and the work robot. The work robot includes a progress status reporting module configured to transmit to the dialogue robot, during the work, progress status information including operation process identification information that identifies a currently-performed operation process, and a degree of progress of the operation process. The dialogue robot includes an utterance material database storing the operation process identification information and utterance material data corresponding to the operation process so as to be associated with each other, and a language operation controlling module configured to read from the utterance material database the utterance material data corresponding to the received progress status information, generate robot utterance data based on the read utterance material data and the degree of progress, and output the generated robot utterance data to the language operation part.
Moreover, a robot dialogue method according to another aspect of the present disclosure is a robot dialogue method performed by a work robot and a dialogue robot, the work robot including a robotic arm and an end effector attached to a hand part of the robotic arm, and being configured to perform a work using the end effector based on a request of a human, and the dialogue robot having a language operation part and a non-language operation part, and being configured to perform a language operation and a non-language operation toward the work robot and the human. The method includes causing the work robot to transmit to the dialogue robot, during the work, progress status information including operation process identification information that identifies a currently-performed operation process, and a degree of progress of the operation process, causing the dialogue robot to read utterance material data corresponding to the received progress status information from an utterance material database, the utterance material database storing the operation process identification information and the utterance material data corresponding to the operation process so as to be associated with each other, and causing the dialogue robot to generate robot utterance data based on the read utterance material data and the degree of progress, and output the generated robot utterance data from the language operation part.
According to the robot system with the described configuration, while the work robot performs the work requested by the human, the dialogue robot performs the language operation toward the human and the work robot with contents of utterance corresponding to the currently-performed operation process. That is, even while the work robot performs the work, the utterance (language operation) of the dialogue robot continues, and the utterance corresponds to the content and the situation of the work of the work robot. Therefore, during the work of the work robot, the human can feel like participating in the conversation with the dialogue robot and the work robot, and can maintain the feel.
According to the present disclosure, a robot system can be realized which produces in the human the feel of participating in the conversation with the dialogue robot and the work robot, and maintains the feel, even while the work robot performs the work requested by the human.
Next, one embodiment of the present disclosure will be described with reference to the drawings.
The dialogue robot 2 according to this embodiment is a humanoid robot for having a conversation with the human 10. However, the dialogue robot 2 is not limited to a humanoid robot, and may be, for example, a personified animal-type robot; its appearance is not particularly limited.
The dialogue robot 2 includes a torso part 21, a head part 22 provided to an upper part of the torso part 21, left and right arm parts 23L and 23R provided to side parts of the torso part 21, and a traveling unit 24 provided to a lower part of the torso part 21. The head part 22, the arm parts 23L and 23R, and the traveling unit 24 of the dialogue robot 2 function as a "non-language operation part" of the dialogue robot 2. Note that the non-language operation part of the dialogue robot 2 is not limited to the above configuration; for example, in a dialogue robot which can form facial expressions with eyes, a nose, eyelids, etc., these expression-forming elements also correspond to the non-language operation part.
The head part 22 is connected with the torso part 21 through a neck joint so as to be rotatable and bendable. The arm parts 23L and 23R are rotatably connected to the torso part 21 through shoulder joints. Each of the arm parts 23L and 23R has an upper arm, a lower arm, and a hand. The upper arm and the lower arm are connected with each other through an elbow joint, and the lower arm and the hand are connected with each other through a wrist joint. The dialogue robot 2 includes a head actuator 32 for operating the head part 22, an arm actuator 33 for operating the arm parts 23L and 23R, and a traveling actuator 34 for operating the traveling unit 24.
The dialogue robot 2 includes a camera 68, a microphone 67, and a speaker 66 built in the head part 22, and a display unit 69 attached to the torso part 21. The speaker 66 and the display unit 69 function as a "language operation part" of the dialogue robot 2.
The controller 25 which governs the language operation and the non-language operation of the dialogue robot 2 is accommodated in the torso part 21 of the dialogue robot 2. Note that the “language operation” of the dialogue robot 2 means a communication transmitting operation by operation of the language operation part of the dialogue robot 2 (i.e., sound emitted from the speaker 66, or character(s) displayed on the display unit 69). Moreover, the “non-language operation” of the dialogue robot 2 means a communication transmitting operation by operation of the non-language operation part of the dialogue robot 2 (i.e., a change in the appearance of the dialogue robot 2 by operation of the head part 22, the arm parts 23L and 23R, and the traveling unit 24).
The controller 25 functions as an image recognizing module 250, a voice recognizing module 251, a language operation controlling module 252, a non-language operation controlling module 253, and a work robot managing module 254. These functions are realized by the arithmetic processing unit 61 reading and executing software, such as the program stored in the storage device 62. Note that the controller 25 may execute each processing by a centralized control of a sole computer, or may execute each processing by a distributed control of a plurality of collaborating computers. Moreover, the controller 25 may be comprised of a microcontroller, a programmable logic controller (PLC), etc.
The image recognizing module 250 detects the existence of the human 10 by acquiring an image (video) captured by the camera 68 and carrying out image processing. The image recognizing module 250 also analyzes the movement, behavior, expression, etc. of the human 10 from the acquired image, and generates human movement data.
The voice recognizing module 251 picks up, with the microphone 67, voice uttered by the human 10, recognizes the content of the voice data, and generates human utterance data.
The language operation controlling module 252 analyzes a situation of the human 10 based on the script data stored beforehand, the human movement data, the human utterance data, etc., and generates the robot utterance data based on the analyzed situation. The language operation controlling module 252 outputs the generated robot utterance data to the language operation part of the dialogue robot 2 (the speaker 66, or the speaker 66 and the display unit 69). Thus, the dialogue robot 2 performs the language operation.
In the above, when the language operation controlling module 252 analyzes the situation of the human 10, the human movement data and the human utterance data may be associated with a human situation and stored in a human situation database 651 beforehand, so that the situation of the human 10 is analyzed using the information accumulated in the human situation database 651. Moreover, when the language operation controlling module 252 generates the robot utterance data, the script data, the human situation, and the robot utterance data may be stored in a robot utterance database 652 beforehand so as to be associated with each other, and the robot utterance data may be generated using the information accumulated in the robot utterance database 652.
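For illustration, a minimal sketch of how these two lookups might be modeled is shown below; all names, keys, and phrases are hypothetical assumptions, since the disclosure does not specify any data format.

```python
# Minimal sketch of the two lookups described above; all names, keys,
# and phrases are hypothetical, as the disclosure does not fix formats.

# Human situation database 651: (human movement, human utterance kind)
# observations are associated with an analyzed human situation.
human_situation_db = {
    ("seated", "greeting"): "ready_to_listen",
    ("standing", "question"): "curious",
}

# Robot utterance database 652: (script state, human situation) pairs
# are associated with robot utterance data.
robot_utterance_db = {
    ("intro", "ready_to_listen"): "Shall I explain the service we offer?",
    ("intro", "curious"): "Please take a seat, and I will explain.",
}

def generate_robot_utterance(script_state, movement, utterance_kind):
    """Analyze the human situation, then look up a matching utterance."""
    situation = human_situation_db.get((movement, utterance_kind), "unknown")
    return robot_utterance_db.get(
        (script_state, situation), "I'm sorry, could you say that again?")

print(generate_robot_utterance("intro", "seated", "greeting"))
```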
The language operation controlling module 252 also receives progress status information (described later) from the work robot 4, generates the robot utterance data, and outputs the robot utterance data to the language operation part of the dialogue robot 2 (the speaker 66, or the speaker 66 and the display unit 69). Thus, the dialogue robot 2 performs the language operation.
In the above, the progress status information includes the operation process identification information for identifying the operation process which the work robot 4 is currently performing, and the degree of progress of the operation process. When the language operation controlling module 252 generates the robot utterance data, the operation process identification information and the utterance material data corresponding to the operation process are stored in an utterance material database 653 beforehand so as to be associated with each other, and the utterance material data corresponding to the received progress status information is read from the utterance material database 653. Then, the language operation controlling module 252 generates the robot utterance data based on the read utterance material data and the received degree of progress.
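A minimal sketch of this lookup follows; the process identifiers, template strings, and field names are assumptions for illustration only, as the disclosure does not fix a concrete format.

```python
# Hypothetical sketch of the utterance material database 653: operation
# process identification information is associated with utterance material
# data, and the degree of progress is folded into the generated utterance.

utterance_material_db = {
    "S26_extract_film": "Now he is taking out a fresh protection film.",
    "S30_wipe_film": "He is wiping the film. About {percent}% done!",
}

def generate_utterance(progress_status):
    """progress_status: {'process_id': str, 'progress': float in [0, 1]}."""
    material = utterance_material_db.get(progress_status["process_id"])
    if material is None:
        return None  # no material: the robot may choose not to speak
    return material.format(percent=int(progress_status["progress"] * 100))

print(generate_utterance({"process_id": "S30_wipe_film", "progress": 0.8}))
```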
When the dialogue robot 2 performs the language operation, the non-language operation controlling module 253 generates the robot operation data so as to perform the non-language operation corresponding to the language operation. The non-language operation controlling module 253 outputs the generated robot operation data to the drive control device 70, and, thereby, the dialogue robot 2 performs the non-language operation based on the robot operation data.
The non-language operation corresponding to the language operation is behavior of the dialogue robot 2 that matches the content of the language operation of the dialogue robot 2. For example, when the dialogue robot 2 pronounces the name of an object, pointing to the object with the arm part 23L or 23R, or turning the head part 22 toward the object, corresponds to the non-language operation. Moreover, for example, when the dialogue robot 2 expresses gratitude, putting both hands together or bowing the head part 22 corresponds to the non-language operation.
In the above, when the non-language operation controlling module 253 generates the robot operation data, the robot utterance data and the robot operation data for causing the dialogue robot 2 to perform the non-language operation corresponding to the language operation caused by the robot utterance data may be stored beforehand in a robot operation database 654 so as to be associated with each other. The robot operation data corresponding to the generated robot utterance data may then be read from the information accumulated in the robot operation database 654.
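The association might be sketched as follows; the utterance strings and the gesture command names are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of the robot operation database 654: robot utterance
# data is associated with robot operation data, so that a language operation
# is accompanied by a matching non-language operation (gesture).

robot_operation_db = {
    "Alright then, Mr. Robot. Please start.":
        ["turn_head_to_work_robot", "wave_right_hand"],
    "It will be done soon.":
        ["turn_head_to_human", "nod"],
}

def select_robot_operation(robot_utterance):
    """Return gesture commands for the drive control device 70, if any."""
    return robot_operation_db.get(robot_utterance, ["face_human"])

print(select_robot_operation("It will be done soon."))
```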
The work robot managing module 254 transmits a processing start signal to the work robot 4 according to the script data stored beforehand. Moreover, the work robot managing module 254 transmits a progress check signal (described later) to the work robot 4 at an arbitrary timing between the transmission of the processing start signal to the work robot 4 and the reception of a processing end signal from the work robot 4.
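One possible shape of this signaling is sketched below, assuming a hypothetical message transport in place of the communication device 5; the message types and the polling interval are illustrative only.

```python
import queue

class FakeLink:
    """Hypothetical stand-in for the communication device 5."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)
    def recv(self, timeout=None):
        try:
            return self.inbox.get(timeout=timeout)
        except queue.Empty:
            return None

def manage_work_robot(link, check_interval=30.0):
    """Send the processing start signal, then poll with progress check
    signals until the processing end signal arrives."""
    link.send({"type": "processing_start"})
    while True:
        msg = link.recv(timeout=check_interval)
        if msg is None:
            link.send({"type": "progress_check"})  # no news: ask for status
        elif msg.get("type") == "processing_end":
            break
        # A progress_status message would be handed to the language
        # operation controlling module 252 here.

link = FakeLink()
link.inbox.put({"type": "processing_end"})  # simulate the work robot
manage_work_robot(link, check_interval=0.1)
print(link.sent)
```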
[Work Robot 4]
The work robot 4 includes at least one articulated robotic arm 41, an end effector 42 which is attached to a hand part of the robotic arm 41 and performs a work, and a controller 45 which governs operations of the robotic arm 41 and the end effector 42. The work robot 4 according to this embodiment is a dual-arm robot having two robotic arms 41 which collaboratively perform a work. However, the work robot 4 is not limited to this embodiment, and may be a single-arm robot having one robotic arm 41, or a multi-arm robot having three or more robotic arms 41.
The robotic arm 41 is a horizontal articulated robotic arm, and has a plurality of links connected in series through joints. However, the robotic arm 41 is not limited to this embodiment, and may be of a vertical articulated type.
The robotic arm 41 has an arm actuator 44 for operating the robotic arm 41.
The end effector 42 attached to the hand part of the robotic arm 41 may be selected according to the content of the work performed by the work robot 4. Moreover, the work robot 4 may replace the end effector 42 with another one for every process of the work.
The controller 45 functions as an arm controlling module 451, an end effector controlling module 452, a progress status reporting module 453, etc. These functions are realized by the arithmetic processing unit 81 reading and executing software, such as the program stored in the storage device 82, according to the script data stored beforehand. Note that the controller 45 may execute each processing by a centralized control of a sole computer, or may execute each processing by a distributed control of a plurality of collaborating computers. Moreover, the controller 45 may be comprised of a microcontroller, a programmable logic controller (PLC), etc.
The arm controlling module 451 operates the robotic arm 41 based on teaching data stored beforehand. Specifically, the arm controlling module 451 generates a positional command based on the teaching data and detection information from various sensors provided to the arm actuator 44, and outputs the positional command to the driver 90. The driver 90 operates each actuator included in the arm actuator 44 according to the positional command.
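A simplified sketch of this command generation is given below; the values and the correction rule are invented for illustration.

```python
# Simplified sketch of the arm controlling module 451: a positional command
# is generated from stored teaching data and sensor feedback, then output
# to the driver 90. The numbers and the correction rule are illustrative.

teaching_data = [0.0, 0.25, 0.5, 0.75, 1.0]  # taught joint positions

def read_sensor_deviation():
    return 0.01  # placeholder for detection information from the sensors

def driver_90(command):
    print(f"positional command -> {command:.3f}")

for target in teaching_data:
    # Correct the taught target by the detected deviation before output.
    driver_90(target - read_sensor_deviation())
```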
The end effector controlling module 452 operates the end effector 42 based on operation data stored beforehand. The end effector 42 is comprised of, for example, at least one actuator among an electric motor, an air cylinder, an electromagnetic valve, etc., and the end effector controlling module 452 operates the actuator(s) according to the operation of the robotic arm 41.
The progress status reporting module 453 generates the progress status information during the work of the work robot 4, and transmits it to the dialogue robot 2. The progress status information includes at least the operation process identification information for identifying the currently-performed operation process, and the degree of progress of the operation process, such as whether the processing is normal or abnormal and how far it has progressed. Note that the generation and the transmission of the progress status information may be performed at a given timing, such as a timing at which the progress check signal (described later) is acquired from the dialogue robot 2, or a timing of a start and an end of each operation process included in the processing.
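One possible concrete layout of the progress status information is sketched below; the field names are assumptions, since the disclosure only requires the operation process identification information and the degree of progress (including a normal/abnormal status).

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ProgressStatus:
    """Hypothetical layout of the progress status information."""
    process_id: str          # operation process identification information
    progress: float          # degree of progress, 0.0 to 1.0
    status: str = "normal"   # "normal" or "abnormal"

def report_progress(send, status: ProgressStatus):
    """Serialize the progress status and transmit it to the dialogue robot;
    `send` stands in for the communication device 5."""
    send(json.dumps(asdict(status)))

report_progress(print, ProgressStatus("S27_peel_pasteboard", 0.4))
```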
Here, one example of operation of the robot system 1 of the above configuration is described. In this example, the work robot 4 performs a work of pasting a protection film on a liquid crystal display part of a smartphone (a tablet-type communication terminal). However, the content of the work performed by the work robot 4, and the contents of the language operation and the non-language operation of the dialogue robot 2, are not limited to this example.
When the human 10 visits the booth 92, the dialogue robot 2 performs the language operation (utterance) “Welcome. Please sit on the chair.” and the non-language operation (gesture) toward the human 10 to urge the human 10 to sit (Step S12).
When the dialogue robot 2 detects, based on the captured image, that the human 10 has taken the seat (Step S13), it performs the language operation "In this booth, a service for pasting a protection film on your smartphone is provided." and the non-language operation toward the human 10, to explain to the human 10 the content of the work to be performed by the work robot 4 (Step S14).
When the dialogue robot 2 analyzes the voice of the human 10 and the captured image, and detects the intention of the human 10 to request the work (Step S15), it performs the language operation "Alright then, please place your smartphone on the workbench." and the non-language operation toward the human 10, to urge the human 10 to place the smartphone on the workbench 94 (Step S16).
Further, the dialogue robot 2 transmits the processing start signal toward the work robot 4 (Step S17). When the dialogue robot 2 transmits the processing start signal, it performs toward the work robot 4 the language operation "Mr. Robot, please begin the preparation." and the non-language operation in which the dialogue robot 2 turns its face toward the work robot 4, waves its hand(s) to urge the start of the processing, etc. (Step S18).
The work robot 4 which acquired the processing start signal (Step S41) starts the film pasting processing (Step S42).
The work robot 4 positions the smartphone on the workbench 94 (Step S24), wipes a display part of the smartphone (Step S25), extracts a protection film from a film holder (Step S26), peels the pasteboard from the protection film (Step S27), positions the protection film over the display part of the smartphone (Step S28), places the protection film on the display part of the smartphone (Step S29), and wipes the protection film (Step S30).
In the film pasting processing, the work robot 4 performs a series of processes at Steps S21-S30. After the film pasting processing is finished, the work robot 4 transmits the processing end signal to the dialogue robot 2 (Step S43).
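For illustration, the operation process identification information for this example could be as simple as an enumeration of the named steps; only the steps explicitly named in this embodiment are listed, and the identifiers are hypothetical.

```python
from enum import Enum

class FilmPastingProcess(Enum):
    """Hypothetical identifiers for the film pasting processes named above."""
    SELECT_FILM = "S23"
    POSITION_SMARTPHONE = "S24"
    WIPE_DISPLAY = "S25"
    EXTRACT_FILM = "S26"
    PEEL_PASTEBOARD = "S27"
    POSITION_FILM = "S28"
    PLACE_FILM = "S29"
    WIPE_FILM = "S30"

for i, step in enumerate(FilmPastingProcess, 1):
    print(f"{step.value}: process {i} of {len(FilmPastingProcess)}")
```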
The dialogue robot 2 transmits the progress check signal to the work robot 4 at an arbitrary timing while the work robot 4 performs the film pasting processing. For example, the dialogue robot 2 may transmit the progress check signal to the work robot 4 at given time intervals, such as every 30 seconds. The work robot 4 transmits the progress status information to the dialogue robot 2, using the acquisition of the progress check signal as a trigger. Note that the work robot 4 may transmit the progress status information to the dialogue robot 2 at the timing of the start and/or the end of each operation process, regardless of the existence of the progress check signal from the dialogue robot 2.
When the progress status information is received from the work robot 4, the dialogue robot 2 performs the language operation and the non-language operation corresponding to the operation process currently performed by the work robot 4. Note that the dialogue robot 2 may determine not to perform the language operation and the non-language operation based on the content of the progress status information, the timings and intervals of the last language operation and non-language operation, the situation of the human 10, etc.
For example, at the timing when the work robot 4 has finished the selection process of the protection film (Step S23), the dialogue robot 2 performs the language operation "Alright then, Mr. Robot. Please start." and the non-language operation toward the work robot 4.
Moreover, for example, at an arbitrary timing while the work robot 4 performs the processes between the positioning process of the smartphone (Step S24) and the peeling process of the pasteboard (Step S27), the dialogue robot 2 performs the language operation "It will be exciting to see whether he can paste the film nicely." and the non-language operation toward the human 10. Further, when the dialogue robot 2 asks the human 10 a question and the human 10 answers it, the dialogue robot 2 may respond to the utterance of the human 10.
Moreover, for example, at the timing when the work robot 4 performs the protection film wiping process (Step S30), the dialogue robot 2 performs the language operation "It will be done soon." and the non-language operation toward the work robot 4 and/or the human 10.
As described above, while the work robot 4 silently performs the work, the dialogue robot 2 speaks to the work robot 4 or has a conversation with the human 10. Therefore, the human 10 is not bored during the work of the work robot 4. Moreover, since the dialogue robot 2 utters and gestures toward the work robot 4 which performs the work, the work robot 4 joins a conversation whose members were only the dialogue robot 2 and the human 10 when the human 10 first visited.
As described above, the robot system 1 of this embodiment includes the work robot 4 which has the robotic arm 41 and the end effector 42 attached to the hand part of the robotic arm 41, and performs the work using the end effector 42 based on the request of the human 10, the dialogue robot 2 which has the language operation part and the non-language operation part, and performs the language operation and the non-language operation toward the work robot 4 and the human 10, and the communication device 5 which communicates the information between the dialogue robot 2 and the work robot 4. Then, the work robot 4 has the progress status reporting module 453 which transmits to the dialogue robot 2, during the work, the progress status information including the operation process identification information for identifying the currently-performed operation process, and the degree of progress of the operation process. Moreover, the dialogue robot 2 includes the utterance material database 653 which stores the operation process identification information and the utterance material data corresponding to the operation process so as to be associated with each other, and the language operation controlling module 252 which reads the utterance material data corresponding to the received progress status information from the utterance material database 653, generates the robot utterance data based on the read utterance material data and the degree of progress, and outputs the generated robot utterance data to the language operation part.
Moreover, the robot dialogue method of this embodiment is performed by the work robot 4 which includes the robotic arm 41 and the end effector 42 attached to the hand part of the robotic arm 41, and performs the work using the end effector 42 based on the request of the human 10, and the dialogue robot 2 which includes the language operation part and the non-language operation part, and performs the language operation and the non-language operation toward the work robot 4 and the human 10. In this robot dialogue method, the work robot 4 transmits to the dialogue robot 2, during the work, the progress status information including the operation process identification information for identifying the currently-performed operation process, and the degree of progress of the operation process, and the dialogue robot 2 reads the utterance material data corresponding to the received progress status information from the utterance material database 653 which stores the operation process identification information and the utterance material data corresponding to the operation process so as to be associated with each other, generates the robot utterance data based on the read utterance material data and the degree of progress, and outputs the generated robot utterance data from the language operation part.
In the above, the dialogue robot 2 may include the work robot managing module 254 which transmits the progress check signal to the work robot 4 during the work of the work robot 4, and the work robot 4 may transmit the progress status information to the dialogue robot 2, using the reception of the progress check signal as a trigger.
Alternatively, in the above, the work robot 4 may transmit the progress status information to the dialogue robot 2 at the timing of the start and/or the end of the operation process.
According to the robot system 1 and the robot dialogue method described above, while the work robot 4 performs the work requested by the human 10, the dialogue robot 2 performs the language operation toward the human and the work robot with contents of utterance corresponding to the currently-performed operation process. That is, even while the work robot 4 performs the work, the utterance (language operation) of the dialogue robot 2 continues, and the utterance corresponds to the content and the situation of the work of the work robot 4. Therefore, during the work of the work robot 4, the human 10 can feel like participating in the conversation with the dialogue robot 2 and the work robot 4 (conversation feel), and can maintain the feel.
Moreover, in the robot system 1 according to this embodiment, the dialogue robot 2 described above includes the robot operation database 654 which stores the robot utterance data, and the robot operation data for causing the dialogue robot to perform the non-language operation corresponding to the language operation caused by the robot utterance data, so as to be associated with each other, and the non-language operation controlling module 253 which reads the robot operation data corresponding to the generated robot utterance data from the robot operation database 654, and outputs the read robot operation data to the non-language operation part.
Similarly, in the robot dialogue method according to this embodiment, the dialogue robot 2 outputs from the non-language operation part the robot operation data for causing the dialogue robot 2 to perform the non-language operation corresponding to the language operation caused by the generated robot utterance data.
Thus, the dialogue robot 2 performs the non-language operation (i.e., behavior) corresponding to the language operation, in association with the language operation. The human 10 who sees the non-language operation of the dialogue robot 2 can obtain a deeper conversation feel with the robots 2 and 4 than in the case where the dialogue robot 2 performs only the language operation.
Moreover, in the robot system 1 and the robot dialogue method according to this embodiment, the dialogue robot 2 has the conversation with the human 10 according to given script data by performing the language operation and the non-language operation toward the human 10, analyzes the content of the conversation to acquire the request from the human 10, transmits the processing start signal of the work to the work robot 4 based on the request, and performs the language operation and the non-language operation toward the work robot 4.
Thus, since the dialogue robot 2 accepts the request for the work to be performed by the work robot 4, the human 10 can have the feel of participating in the conversation with the dialogue robot 2 from the stage before the work robot 4 performs the work. Moreover, when the dialogue robot 2 transmits the processing start signal to the work robot 4, since the dialogue robot 2 performs the language operation and the non-language operation toward the work robot 4, the human 10 who is looking at these operations can have the feel that the work robot 4 has joined the earlier conversation with the dialogue robot 2.
Although the suitable embodiment of the present disclosure is described above, modifications to the details of the concrete structures and/or functions of the above embodiment may be encompassed by the present disclosure without departing from the spirit of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---
2017-019832 | Feb 2017 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2018/003848 | 2/5/2018 | WO | 00 |