This application is based upon and claims the benefit of priority from Japanese patent application No. 2018-162774, filed on Aug. 31, 2018, the disclosure of which is incorporated herein in its entirety by reference.
The present disclosure relates to a voice interaction system, a voice interaction method, a program, a learning model generation apparatus, and a learning model generation method, and in particular, to a voice interaction system, a voice interaction method, a program, a learning model generation apparatus, and a learning model generation method for having a conversation with a user by using a voice.
A technique for enabling a user to enjoy a daily conversation with a voice interaction robot (voice interaction system) is becoming widespread. A voice interaction robot according to this technique analyzes phonological information of a voice uttered by a user and makes a response according to a result of the analysis. Here, the voice interaction robot determines a response using a learning model.
Regarding the above technique, Japanese Unexamined Patent Application Publication No. 2005-352154 discloses an emotional state reaction operation apparatus which evaluates an emotional state of a user from a voice uttered by the user and executes an appropriate corresponding operation. The emotional state reaction operation apparatus according to Japanese Unexamined Patent Application Publication No. 2005-352154 includes a phoneme feature quantity extraction function for extracting a feature quantity related to a phoneme spectrum of voice information, a state determination function for inputting the phoneme feature quantity and determining an emotional state of the voice information based on a state determination table prepared in advance, and a corresponding action selection function for inputting the emotional state and determining a corresponding action process based on a corresponding action selection table prepared in advance. The emotional state reaction operation apparatus according to Japanese Unexamined Patent Application Publication No. 2005-352154 further includes an emotional state learning table and an emotional state learning function. The emotional state learning function acquires a relation between the phoneme feature quantity and the emotional state using a predetermined machine learning model based on the emotional state learning table and stores a result of the learning in the state determination table. The state determination function determines an emotional state according to the machine learning model based on the state determination table.
The machine learning model may not be appropriate depending on a state of a user (a difference in the user, an emotion of the user, etc.). In this case, for example, a response error such as overlapping between user speech and apparatus speech, or a long silence in which a period between user speech and apparatus speech is long may occur. However, the technique according to Japanese Unexamined Patent Application Publication No. 2005-352154 determines the corresponding action process using only one machine learning model. For this reason, with the technique according to Japanese Unexamined Patent Application Publication No. 2005-352154, it is difficult to appropriately handle a situation to effectively prevent a response error caused by an inappropriate learning model from occurring.
The present disclosure provides a voice interaction system, a voice interaction method, a program, a learning model generation apparatus, and a learning model generation method capable of appropriately handling a situation so as to effectively prevent a response error from occurring.
A voice interaction system according to the present disclosure is a voice interaction system that has a conversation with a user by using a voice, including: a speech acquisition unit configured to acquire user speech given by the user; a feature extraction unit configured to extract at least a feature of the acquired user speech; a response determination unit configured to determine a response in accordance with the extracted feature using any one of a plurality of learning models generated in advance by machine learning; a response execution unit configured to perform control in order to execute the determined response; a user state detection unit configured to detect a user state, which is a state of the user; and a learning model selection unit configured to select a learning model from the plurality of learning models in accordance with the detected user state, in which the response determination unit determines the response using the learning model selected by the learning model selection unit.
Further, a voice interaction method according to the present disclosure is a voice interaction method performed by a voice interaction system that has a conversation with a user by using a voice, the voice interaction method including: acquiring user speech given by the user; extracting at least a feature of the acquired user speech; determining a response in accordance with the extracted feature using any one of a plurality of learning models generated in advance by machine learning; performing control in order to execute the determined response; detecting a user state, which is a state of the user; and selecting a learning model from the plurality of learning models in accordance with the detected user state, in which the response is determined using the selected learning model.
Further, a program according to the present disclosure is a program for executing a voice interaction method performed by a voice interaction system that has a conversation with a user by using a voice, the program causing a computer to execute the steps of: acquiring user speech given by the user; extracting at least a feature of the acquired user speech; determining a response in accordance with the extracted feature using any one of a plurality of learning models generated in advance by machine learning; performing control in order to execute the determined response; detecting a user state, which is a state of the user; selecting a learning model from the plurality of learning models in accordance with the detected user state; and determining the response using the selected learning model.
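By way of a non-limiting illustration, the following Python sketch outlines how the steps recited above might fit together in a single conversational turn. Every name appearing in it (for example, acquire_speech, detect_user_state, and the model objects with a predict method) is a hypothetical placeholder introduced only for explanation and is not part of the present disclosure.

```python
from typing import Any, Callable, Dict, Sequence

def interaction_step(
    acquire_speech: Callable[[], bytes],            # speech acquisition unit
    extract_features: Callable[[bytes], Sequence],  # feature extraction unit
    models: Dict[str, Any],                         # learning model database, keyed by user state
    current_model: Any,                             # currently selected learning model
    execute_response: Callable[[str], None],        # response execution unit
    detect_user_state: Callable[[bytes], str],      # user state detection unit
) -> Any:
    """One conversational turn: determine and execute a response to the user speech,
    then reselect the learning model in accordance with the detected user state."""
    voice = acquire_speech()
    features = extract_features(voice)
    response = current_model.predict([features])[0]   # "silent", "nod", or "speak"
    execute_response(response)
    user_state = detect_user_state(voice)
    # Learning model selection: keep the current model if no model matches the state.
    return models.get(user_state, current_model)
```

In this arrangement, the returned model becomes the current learning model for the next turn, so the model is switched only when the detected user state calls for a different one.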
The cause of a response error is often an inappropriate learning model. The above-described configuration of the present disclosure makes it possible to switch the learning model to an appropriate one in accordance with the user state when the learning model is not appropriate. Therefore, the present disclosure can appropriately handle a situation so as to effectively prevent a response error from occurring.
Further, the user state detection unit detects a degree of activeness of the user in the conversation as the user state, and the learning model selection unit selects the learning model that corresponds to the degree of the activeness of the user.
The above-described configuration of the present disclosure makes it possible to have a conversation using the learning model that is suitable for the degree of the activeness of the user in the conversation, thereby executing a response in accordance with the activeness of the user who has a conversation.
Further, the user state detection unit detects the amount of speech given by the user in a predetermined period or the percentage of the time during which the user has made a speech with respect to the sum of the time during which the voice interaction system has output a voice as a response and the time during which the user has made a speech in the predetermined period, and the learning model selection unit selects the learning model that corresponds to the amount of speech given by the user or the percentage of the time during which the user has made a speech.
The above-described configuration of the present disclosure makes it possible to determine the degree of the activeness of the user more accurately.
Further, the user state detection unit detects identification information on the user as the user state, and the learning model selection unit selects the learning model that corresponds to the identification information on the user.
The above-described configuration of the present disclosure makes it possible to have a conversation using the learning model that is suitable for the user, thereby executing a response in accordance with the user who has a conversation.
Further, the user state detection unit detects emotion of the user as the user state, and the learning model selection unit selects the learning model that corresponds to the emotion of the user.
The above-described configuration of the present disclosure makes it possible to have a conversation using the learning model that is suitable for the degree of user's emotion in the conversation, thereby making a response in accordance with the emotion of the user who has a conversation.
Further, the user state detection unit detects a health condition of the user as the user state, and the learning model selection unit selects the learning model that corresponds to the health condition of the user.
The above-described configuration of the present disclosure makes it possible to have a conversation using a learning model that is suitable for the degree of the user's health condition, thereby executing a response in accordance with the health condition of the user who has a conversation.
Further, the user state detection unit detects a degree of an awakening state of the user as the user state, and the learning model selection unit selects the learning model that corresponds to the degree of the awakening state of the user.
The above-described configuration of the present disclosure makes it possible to have a conversation using the learning model adapted for the degree of the user's awakening state, thereby making a response in accordance with the awakening state of the user who has a conversation.
Further, a learning model generation apparatus according to the present disclosure is a learning model generation apparatus configured to generate a learning model used in a voice interaction system that has a conversation with a user by using a voice, the apparatus including: a speech acquisition unit configured to acquire user speech, which is speech given by at least one desired user, by having a conversation with the desired user; a feature extraction unit configured to extract a feature vector indicating at least a feature of the acquired user speech; a sample data generation unit configured to generate sample data in which a correct label indicating a response to the user speech and the feature vector are associated with each other; a user state acquisition unit configured to acquire a user state, which is a state of the desired user when the user has made a speech, to associate the acquired user state with the sample data that corresponds to the user speech; a sample data classification unit configured to classify the sample data for each of the user states; and a learning model generation unit configured to generate a plurality of learning models by machine learning for each of pieces of the classified sample data.
Further, a learning model generation method according to the present disclosure is a learning model generation method for generating a learning model used in a voice interaction system that has a conversation with a user by using a voice, the method including: acquiring user speech, which is speech given by at least one desired user, by having a conversation with the desired user; extracting a feature vector indicating at least a feature of the acquired user speech; generating sample data in which a correct label indicating a response to the user speech and the feature vector are associated with each other; acquiring a user state, which is a state of the desired user when the user has made a speech, to associate the acquired user state with the sample data that corresponds to the user speech; classifying the sample data for each of the user states; and generating a plurality of learning models by machine learning for each of pieces of the classified sample data.
The present disclosure is able to classify the sample data for each user state and generate a plurality of learning models by machine learning, thereby generating a plurality of learning models that correspond to the user state. Therefore, the voice interaction system is able to select the learning model in accordance with the user state.
According to the present disclosure, it is possible to provide a voice interaction system, a voice interaction method, a program, a learning model generation apparatus, and a learning model generation method capable of appropriately handling a situation so as to effectively prevent a response error from occurring.
The above and other objects, features and advantages of the present disclosure will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus are not to be considered as limiting the present disclosure.
Hereinafter, embodiments according to the present disclosure are explained with reference to the drawings. Note that the same symbols are assigned to the same components throughout the drawings, and repeated explanations are omitted as required.
The voice interaction system 1 includes a microphone 2 that collects surrounding sounds, a speaker 4 that produces a voice, a detection device 6 used to detect a user's state, a manipulator 8 that operates a neck and the like of the robot, and a control device 10. The control device 10 has, for example, a function as a computer. The control device 10 is connected to the microphone 2, the speaker 4, the detection device 6, and the manipulator 8 wirelessly or through a wire. The detection device 6 includes, for example, at least one of a camera and a biometric sensor. The biometric sensor is, for example, a sphygmomanometer, a thermometer, or a pulsimeter.
The control device 10 includes, as main hardware components, a CPU (Central Processing Unit) 12, a ROM (Read Only Memory) 14, a RAM (Random Access Memory) 16, and an interface (IF) unit 18. The CPU 12, the ROM 14, the RAM 16, and the interface unit 18 are connected to each other through a data bus or the like.
The CPU 12 has a function as an arithmetic unit that performs a control process, an arithmetic process, and the like. The ROM 14 has a function of storing a control program, an arithmetic program, and the like executed by the CPU 12. The RAM 16 has a function of temporarily storing processing data and the like. The interface unit 18 inputs and outputs signals to and from the outside wirelessly or through a wire. Further, the interface unit 18 accepts an operation of inputting data by the user and displays information for the user.
The control device 10 analyzes user speech collected by the microphone 2, determines a response to the user according to the user speech, and executes it. Here, in this embodiment, the “response” includes “silent”, “nod”, and “speak”. The “silent” is an action in which the voice interaction system 1 does nothing. The “nod” is an action of vertically swinging the neck part of the robot. The “speak” is an action in which the voice interaction system 1 outputs a voice. When the determined response is “nod”, the control device 10 controls the manipulator 8 to operate the neck part of the robot. When the determined response is “speak”, the control device 10 outputs a voice (system speech) corresponding to the generated response through the speaker 4.
Each of the components shown in
The speech acquisition unit 102 may include the microphone 2. The speech acquisition unit 102 acquires user speech (and speech of the voice interaction system 1). Specifically, the speech acquisition unit 102 collects user speech (and speech of the voice interaction system 1) and converts it into a digital signal. Then, the speech acquisition unit 102 outputs voice data of the user speech (user voice data) to the feature extraction unit 104.
The feature extraction unit 104 extracts at least features of the user speech. Specifically, the feature extraction unit 104 analyzes, for the user speech, non-linguistic information, which is different from linguistic information indicating a specific semantic content of the user speech. Further, the feature extraction unit 104 generates a feature vector, which will be described later, as a non-linguistic information analysis result that is a result of analyzing the non-linguistic information. Then, the feature extraction unit 104 outputs the non-linguistic information analysis result (the feature vector) to the response determination unit 120. The feature extraction unit 104 may extract user's features other than the user speech and generate a feature vector.
Note that the non-linguistic information is information that is different from the linguistic information (the character string) of user speech to be processed and includes at least one of prosodic information on the user speech and response history information. The prosodic information is information indicating features of a voice waveform of user speech such as a fundamental frequency, a sound pressure, a variation in frequency or the like, a band of variations, a maximum amplitude, an average amplitude, and so on. Further, the response history information is information indicating a past history of responses determined (generated) by the response determination unit 120 and executed by the response execution unit 130. The response history storage unit 132 stores (updates) this response history information when a response is executed by the response execution unit 130.
Specifically, the feature extraction unit 104 analyzes prosodic information based on the voice waveform by performing a voice analysis or the like for the user voice data acquired by the speech acquisition unit 102. Then, the feature extraction unit 104 calculates a value indicating a feature quantity indicating the prosodic information. Note that the feature extraction unit 104 may calculate, for the user voice data, a fundamental frequency or the like for each of frames that are obtained by dividing the user voice data, for example, at the interval of 32 msec. Further, the feature extraction unit 104 extracts response history information from the response history storage unit 132 and calculates a feature quantity indicating a feature of the response history.
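A minimal sketch of such prosodic feature extraction is shown below, assuming 16 kHz voice data divided into 32 msec frames and a naive autocorrelation-based fundamental frequency estimate; the disclosure does not limit the analysis to this particular estimator, and the returned quantities merely illustrate the kinds of prosodic features listed above.

```python
import numpy as np

def prosodic_features(samples: np.ndarray, sr: int = 16000, frame_ms: int = 32) -> dict:
    """Divide the user voice data into 32 msec frames and compute simple prosodic
    feature quantities: per-frame RMS amplitude and a naive autocorrelation-based
    fundamental frequency (F0) estimate searched in the 60-400 Hz range."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    rms, f0 = [], []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len].astype(np.float64)
        rms.append(np.sqrt(np.mean(frame ** 2)))
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = int(np.argmax(ac[sr // 400:sr // 60])) + sr // 400
        f0.append(sr / lag)
    return {
        "max_amplitude": float(np.max(np.abs(samples))) if len(samples) else 0.0,
        "average_amplitude": float(np.mean(np.abs(samples))) if len(samples) else 0.0,
        "mean_f0": float(np.mean(f0)) if f0 else 0.0,
        "f0_band": float(np.max(f0) - np.min(f0)) if f0 else 0.0,
        "mean_rms": float(np.mean(rms)) if rms else 0.0,
    }
```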
Note that since the syntactic analysis using the linguistic information of the user speech uses pattern recognition or the like, it often requires a very long time to do this analysis. In contrast to this, the amount of data used for the analysis of the non-linguistic information (i.e., the analysis of the prosodic information and the analysis of the response history information) is smaller than that for the syntactic analysis and its calculation technique is simpler than that for the syntactic analysis. Therefore, the time required for the analysis of the non-linguistic information may be much shorter than the time required for the syntactic analysis.
The selected model storage unit 108 stores a learning model selected by the learning model selection unit 150, which will be described later. Here, in this embodiment, the learning model selection unit 150 selects an appropriate learning model from a plurality of learning models stored in the learning model database 160 by a method described later. When the learning model selection unit 150 has not selected a learning model, such as before a voice interaction starts, the selected model storage unit 108 may store one specified learning model.
The response database 110 stores data necessary for the voice interaction system 1 to make a response. For example, the response database 110 stores in advance a plurality of system voice data pieces indicating system speech when the response is “speak”.
The response determination unit 120 determines which response is to be executed according to the non-linguistic information analysis result (the feature vector). Here, in this embodiment, the response determination unit 120 determines a response according to the extracted feature (the feature vector) using one of the plurality of learning models generated by machine learning in advance such as supervised learning. Details thereof will be described later.
In this embodiment, the response determination unit 120 determines one of “silent”, “nod”, and “speak” as a response. The response determination unit 120 outputs data (response data) indicating the determined response to the response execution unit 130. When the response determination unit 120 determines “speak” as a response, it may sequentially or randomly select the system speech (the system voice data) from the plurality of pieces of system speech stored in the response database 110. The response determination unit 120 outputs the selected system voice data to the response execution unit 130.
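For illustration only, the following sketch uses a support vector machine from scikit-learn as the learning model; the disclosure does not specify a particular machine learning algorithm, and the feature values and training labels shown are hypothetical.

```python
from sklearn.svm import SVC

# Hypothetical training data: each row is a non-linguistic feature vector (prosodic
# quantities plus response history quantities) and each correct label is the response
# that should be made to that user speech.
X_train = [
    [120.0, 0.8, 1.5, 3],   # e.g. mean F0 [Hz], mean amplitude, speech length [s], previous response code
    [180.0, 0.2, 0.4, 1],
    [150.0, 0.5, 0.9, 2],
]
y_train = ["speak", "silent", "nod"]

learning_model = SVC()
learning_model.fit(X_train, y_train)        # supervised learning performed in advance

# Response determination for a newly extracted feature vector.
feature_vector = [160.0, 0.6, 1.1, 2]
response = learning_model.predict([feature_vector])[0]   # one of "silent", "nod", "speak"
print(response)
```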
The response execution unit 130 performs control for executing the response determined by the response determination unit 120. Specifically, when the response data output from the response determination unit 120 indicates “silent (silent response)”, the response execution unit 130 controls the speaker 4 and the manipulator 8 so that they do not operate. When the response data output from the response determination unit 120 indicates “nod (nod response)”, the response execution unit 130 controls the manipulator 8 to operate the neck part of the robot. When the response data output from the response determination unit 120 indicates “speak (speech response)”, the response execution unit 130 controls the speaker 4 to output a voice indicating the system voice data selected by the response determination unit 120.
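A minimal sketch of this dispatch is given below; speaker and manipulator stand in for the device drivers of the speaker 4 and the manipulator 8, and their method names are hypothetical placeholders.

```python
def execute_response(response: str, speaker, manipulator, system_voice=None) -> None:
    """Dispatch the determined response: do nothing for "silent", swing the robot's
    neck part for "nod", and output the selected system voice data for "speak"."""
    if response == "silent":
        return                        # silent response: no operation
    if response == "nod":
        manipulator.swing_neck()      # nod response
    elif response == "speak":
        speaker.play(system_voice)    # speech response
```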
The response history storage unit 132 stores data for identifying the response executed by the response execution unit 130 as response history information. Further, when the response history storage unit 132 includes the time related to a conversation as the response history information, the response history storage unit 132 may measure a period of time during which the conversation takes place and store the measured time as the response history information.
In the example shown in
Further, vim represents a type of an immediately-preceding response. The type of the immediately-preceding response is the type of the response executed by the response execution unit 130 immediately before the i-th user speech (and is one of “silent”, “nod”, and “speak”). Note that for a component whose value (feature quantity) is not a numerical value, such as vim, a numerical value is assigned to each type. For example, for vim, a component value “1” indicates “silent”, a component value “2” indicates “nod”, and a component value “3” indicates “speak”.
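The short sketch below illustrates this encoding; the component order and the surrounding helper name are hypothetical, while the silent/nod/speak codes follow the assignment described above.

```python
# Numeric codes for the non-numeric component vim (type of the immediately-preceding response).
RESPONSE_CODES = {"silent": 1, "nod": 2, "speak": 3}

def history_components(prev_response: str, speech_length_s: float, mean_f0: float) -> list:
    """Build a few components of the feature vector: numeric quantities are used as-is,
    while the type of the immediately-preceding response is mapped to its code."""
    return [mean_f0, speech_length_s, RESPONSE_CODES[prev_response]]

vector_part = history_components("nod", 1.5, 142.3)   # -> [142.3, 1.5, 2]
```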
The user state detection unit 140 (
The learning model selection unit 150 selects a learning model from the plurality of learning models stored in the learning model database 160 according to the user state detected by the user state detection unit 140. Details thereof will be described later. The learning model database 160 stores the plurality of learning models generated in advance by machine learning. A specific example of a method of generating the plurality of learning models will be described later.
The cause of a response error is often an inappropriate learning model. For example, an appropriate learning model for a certain user may not be appropriate for another user. Even for the same user, an appropriate learning model may become inappropriate due to changes in the user's emotion etc. The learning model being inappropriate means that the accuracy of the response to the user speech is low. In other words, the learning model being inappropriate means that the tempo and the rhythm of the conversation are not good. Since the suitable tempo and rhythm of the conversation may vary depending on the user, it is possible that the learning model that is appropriate for one user may not be suitable for another user. Further, even for the same user, the suitable tempo and rhythm of the conversation may vary depending on changes in his/her emotion or the like, and the learning model that was appropriate before may be no longer appropriate. When the accuracy of the response determined by the learning model is low, the robot executes a “speech response” when a “silent response” should be executed for certain user speech, or the robot executes a “silent response” when a “speech response” should be executed for certain user speech.
On the other hand, in this embodiment, it is possible to switch the learning model to an appropriate one in accordance with the user state. Thus, the voice interaction system 1 according to this embodiment can appropriately handle a situation to effectively prevent a response error from occurring. That is, the voice interaction system 1 according to this embodiment makes it possible to improve the response accuracy.
Next, an outline of a method of generating the learning model will be described.
In the example shown in
In the example of
Further, the user state of the user A is detected. The user state may be determined by, for example, an operator or may be automatically detected by the aforementioned function such as the user state detection unit 140. Accordingly, one of user states #1 to #N is associated with the sample data. The symbol N, which is an integer equal to or larger than two, indicates the number of user states. This N may correspond to the number of learning models.
Moreover, since a length of the user speech “I listened to it by myself” is 1.5 seconds, “1.5” is input to a component of the feature vector (vi7 in
The boundaries of the correct labels “A”, “B”, and “C” of the learning models are defined depending on the user state. When, for example, the user state is “activeness”, for a learning model that corresponds to a user state in which the degree of activeness is large, the probability that “speak” is selected may be reduced and the probability that “silent” is selected may be increased. That is, the degree of activeness in a conversation being large means that the user tends to speak actively. In this case, the voice interaction system 1 is adjusted to speak less frequently in order to prevent the speech of the voice interaction system 1 from overlapping that of the user. On the other hand, when the degree of activeness in the conversation is small, this means that the user tends not to speak very actively. In this case, the voice interaction system 1 is adjusted to speak more frequently in order to suppress a long silence.
Further, the learning model corresponds to the user state detected by the aforementioned user state detection unit 140. When, for example, the user state detection unit 140 detects “the degree of activeness” as the user state, a plurality of learning models are provided for each degree of activeness. Further, when the user state detection unit 140 detects “the identification information on the user” as the user state, a plurality of learning models are provided for each of the pieces of identification information on the user (the user A, the user B, . . . , the user N, etc.).
Next, the response determination unit 120 determines a response to the user speech according to the extracted feature vector using the current learning model (the learning model stored in the selected model storage unit 108) (Step S110). The response execution unit 130 executes the response determined in S110 as described above (Step S120).
When the output is the “silent response” (“silent” in S114), the response determination unit 120 determines to execute the silent response (Step S116A). That is, the response determination unit 120 determines not to do anything for the user speech corresponding to this feature vector. When the output is the “nod response” (“nod” in S114), the response determination unit 120 determines to execute the nod response (Step S116B). That is, the response determination unit 120 determines to operate the manipulator 8 in such a way that the neck part of the robot is swung vertically for the user speech corresponding to this feature vector. When the output is the “speech response” (“speak” in S114), the response determination unit 120 determines to execute speech response (Step S116C). That is, the response determination unit 120 determines to operate the speaker 4 to output the system speech for the user speech corresponding to this feature vector.
Next, as described above, the user state detection unit 140 detects the user state (Step S130). The learning model selection unit 150 selects the learning model that corresponds to the user state detected by the processing of S130 (Step S140). To be more specific, when the current learning model is different from the one that corresponds to the detected user state, the learning model selection unit 150 switches the current learning model to the learning model that corresponds to the detected user state. On the other hand, when the current learning model is the one that corresponds to the detected user state, the learning model selection unit 150 does not change the learning model. As described above, the learning model selection unit 150 according to the first embodiment is configured to select a new learning model in accordance with the user state, whereby it becomes possible to select the learning model that can further improve the accuracy of a response.
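A minimal sketch of this selection and switching behavior is shown below; the class name and the keying of the model database by user state labels are assumptions made only for illustration.

```python
class LearningModelSelector:
    """Hold learning models keyed by user state and switch the currently selected
    model only when the detected user state calls for a different one."""

    def __init__(self, model_database: dict, initial_state: str):
        self.models = model_database          # e.g. {"user A": model_a, "user B": model_b}
        self.current_state = initial_state

    def select(self, detected_state: str):
        # Switch only if the detected state differs and a matching model exists;
        # otherwise the current learning model is kept unchanged.
        if detected_state != self.current_state and detected_state in self.models:
            self.current_state = detected_state
        return self.models[self.current_state]
```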
Hereinafter, specific examples of the user state will be explained. The first example is an example in a case in which the user state is the identification information on the user. The second example is an example in a case in which the user state is the degree of user's activeness in a conversation. The third example is an example in a case in which the user state is a degree of the user's emotion. The fourth example is an example in a case in which the user state is a degree of the user's health condition. The fifth example is an example in which the user state is a degree of the user's awakening state.
The user state detection unit 140 performs face recognition processing on the image to detect identification information on the user (Step S134A). To be more specific, the user state detection unit 140 determines, for example, the face area of the user from the image, performs detection of face feature points, and determines feature point positions of the face such as the eyes, the nose, the ends of the mouth and the like. Then the user state detection unit 140 normalizes the position and the size of the face area using the feature point position, and then performs face collation processing with the image of the person that has been registered in advance. Accordingly, the user state detection unit 140 acquires identification information on the collated person.
Next, the learning model selection unit 150 selects the learning model that corresponds to the detected identification information (Step S142A). It is assumed that a plurality of learning models are stored in the learning model database 160 in advance for each of the pieces of identification information on the user. When, for example, the identification information on the “user A” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to the “user A”.
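As one possible, non-limiting realization of the face collation and per-user model selection, the sketch below uses the open-source face_recognition library; the disclosure does not require this library, and the distance threshold of 0.6 is an assumed value.

```python
from typing import Dict, Optional

import face_recognition
import numpy as np

def identify_user(image_path: str, registered: Dict[str, np.ndarray]) -> Optional[str]:
    """Collate the face in a camera image against faces registered in advance and
    return the identification information (user ID) of the matching person, if any."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None                      # no face area was found in the image
    known_ids = list(registered.keys())
    distances = face_recognition.face_distance(list(registered.values()), encodings[0])
    best = int(np.argmin(distances))
    return known_ids[best] if distances[best] < 0.6 else None

# The learning model corresponding to the identified user would then be selected, e.g.:
# model = learning_model_database[identify_user("camera_frame.jpg", registered_users)]
```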
As described above, the voice interaction system 1 according to the first example has a conversation using the learning model that is suitable for the user, whereby this voice interaction system 1 is able to execute a response in accordance with the user who has a conversation with this voice interaction system 1. Therefore, the voice interaction system 1 according to the first example is capable of appropriately handling a situation so as to effectively prevent a response error from occurring. Further, a response error occurring means that the tempo or the rhythm of the current conversation is not suitable for the user. The voice interaction system 1 according to the first example selects the learning model that corresponds to the user, whereby it becomes possible to make the tempo or the rhythm of the conversation suitable for this user.
Further, in the first example, when the learning model is generated, the identification information on the user is associated as the user state. In other words, a plurality of learning models are generated for each of the pieces of identification information on the user. When the learning model is generated, the operator inputs, for example, the identification information on the user, whereby the sample data and the identification information on the user are associated with each other. Accordingly, the sample data is classified for each of the pieces of identification information on the user, and a plurality of learning models are generated by machine learning using the sample data that has been classified. Accordingly, for example, the learning model that corresponds to the user A, the learning model that corresponds to the user B, and the learning model that corresponds to the user C are generated.
While the user is identified by face recognition processing using images in the aforementioned examples, the method of identifying the user is not limited to this method. The user who made a speech may be identified by performing speaker recognition processing (voice recognition processing) on the user speech. Further, the user may be identified by inputting the identification information (ID) on the user.
The user state detection unit 140 detects the degree of activeness that corresponds to the user speech percentage Rs (Step S134B). To be more specific, the user state detection unit 140 stores a table illustrated in
Next, the learning model selection unit 150 selects the learning model that corresponds to the degree of activeness that has been detected (Step S142B). It is assumed that the plurality of learning models are stored in the learning model database 160 in advance for each degree of the user's activeness. When, for example, “the degree of activeness #1” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of activeness #1”. Further, when “the degree of activeness #4” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of activeness #4”.
While the degree of the user's activeness is determined in accordance with the user speech percentage in the aforementioned description, the degree of the user's activeness may be determined in accordance with the amount of speech given by the user. To be more specific, the user state detection unit 140 acquires the amount of user speech [minutes] in the past T minutes (S132B). The user state detection unit 140 detects the degree of activeness that corresponds to the amount of user speech (S134B). In this case, in a way similar to that in the table illustrated in
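The sketch below illustrates the computation of the user speech percentage Rs and a threshold-table lookup of the degree of activeness; the boundary values in ACTIVENESS_TABLE are hypothetical and would in practice be taken from the stored table.

```python
def user_speech_percentage(user_speech_s: float, system_speech_s: float) -> float:
    """Rs: the time during which the user has spoken divided by the sum of the
    user speech time and the system speech time in the most recent T minutes."""
    total = user_speech_s + system_speech_s
    return user_speech_s / total if total > 0 else 0.0

# Hypothetical boundary values; the actual values would come from the stored table.
ACTIVENESS_TABLE = [(0.25, "degree of activeness #1"),
                    (0.50, "degree of activeness #2"),
                    (0.75, "degree of activeness #3"),
                    (1.01, "degree of activeness #4")]

def degree_of_activeness(rs: float) -> str:
    """Look up the degree of activeness that corresponds to the user speech percentage."""
    for upper_bound, degree in ACTIVENESS_TABLE:
        if rs < upper_bound:
            return degree
    return ACTIVENESS_TABLE[-1][1]

# Example: the user spoke for 3 minutes while the system spoke for 2 minutes.
print(degree_of_activeness(user_speech_percentage(3.0, 2.0)))  # -> degree of activeness #3
```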
As described above, the voice interaction system 1 according to the second example has a conversation using the learning model that is suitable for the degree of user's activeness in the conversation, whereby this voice interaction system is able to execute a response in accordance with the activeness of the user who has a conversation with this voice interaction system. Accordingly, the voice interaction system 1 according to the second example is capable of appropriately handling a situation so as to effectively prevent a response error from occurring. Further, a response error occurring means that the tempo or the rhythm of the current conversation is not suitable for the degree of the user's activeness. The voice interaction system 1 according to the second example is able to make the tempo or the rhythm of the conversation suitable for the degree of the user's activeness by selecting the learning model that corresponds to the degree of the user's activeness. Further, the degree of user's activeness in the conversation may vary depending on the topic or the like of the conversation. The voice interaction system 1 according to the second example is able to change the learning model in accordance with the change in the degree of activeness.
Further, in the second example, when the learning model is generated, the degree of the user's activeness is associated as the user state. In other words, a plurality of learning models are generated for each degree of activeness. When the learning models are generated, the operator inputs, for example, the degree of the user's activeness during the conversation, whereby the sample data and the degree of the user's activeness are associated with each other. Further, also when the learning models are generated, as shown in
Accordingly, the sample data is classified for each degree of the user's activeness, and a plurality of learning models are generated by machine learning using the sample data that has been classified. Accordingly, for example, the learning model that corresponds to the degree of activeness #1, the learning model that corresponds to the degree of activeness #2, the learning model that corresponds to the degree of activeness #3, and the learning model that corresponds to the degree of activeness #4 are generated.
As described above, the boundaries of the correct labels “A (silent)”, “B (nod)”, and “C (speak)” of the learning models are defined depending on the user state. When the user state is “activeness”, for a learning model that corresponds to a user state in which the degree of activeness is large, the probability that “speak” is selected may be reduced and the probability that “silent” is selected may be increased. That is, the learning model is generated in such a way that the probability that “A (silent)” is selected in the learning model #4 (degree #4) becomes higher than the probability that “A (silent)” is selected in the learning model #1 (degree #1). Accordingly, during a conversation with the user whose degree of activeness is large, the voice interaction system 1 may be adjusted to speak less frequently in order to prevent overlapping of speeches. Further, during a conversation with the user whose degree of activeness is small, the voice interaction system 1 may be adjusted to speak more frequently in order to prevent a long silence.
While the degree of user's activeness in the conversation is detected using the user speech percentage or the amount of user speech in the aforementioned examples, the method of detecting the degree of the user's activeness is not limited to this method. The user state detection unit 140 may detect the degree of activeness by acquiring, for example, an image of the user. To be more specific, the user state detection unit 140 may determine the user's activeness by analyzing the user's expression and the line of sight indicated in a face image of the user, and digitize the degree of activeness. Further, the user state detection unit 140 may detect the degree of activeness by acquiring, for example, user speech. To be more specific, the user state detection unit 140 may analyze the prosody of the user speech, determine the user's activeness, and digitize the degree of activeness. As described above, however, by determining the degree of activeness using the user speech percentage or the amount of user speech, it is possible to determine the degree of the user's activeness more accurately. Accordingly, by using the user speech percentage or the amount of user speech, the voice interaction system 1 according to the second example is able to handle a situation more appropriately in order to prevent a response error from occurring.
The user state detection unit 140 acquires the face image of the user from the detection device 6, which is a camera (Step S132C). The user state detection unit 140 analyzes the face image and detects the degree of the user's emotion (joy) from his/her expression, the line of sight and the like (Step S134C). For example, the user state detection unit 140 may digitize the user's emotion (joy) using artificial intelligence such as “Affdex” or “Emotion API”. Then the user state detection unit 140 may detect the degree of emotion using a table in which the numerical value indicating the emotion and the degree of emotion are associated with each other as illustrated in
Next, the learning model selection unit 150 selects the learning model that corresponds to the degree of the emotion (joy) that has been detected (Step S142C). It is assumed that a plurality of learning models are stored in the learning model database 160 in advance for each degree of the user's emotion. When, for example, “the degree of emotion (joy) #1” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of emotion (joy) #1”. Further, when “the degree of emotion (joy) #4” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of emotion (joy) #4”.
As described above, the voice interaction system 1 according to the third example has a conversation using the learning model that is suitable for the degree of user's emotion in the conversation, whereby this voice interaction system 1 is able to make a response in accordance with the emotion of the user who has a conversation with this voice interaction system 1. Accordingly, the voice interaction system 1 according to the third example is capable of appropriately handling a situation so as to effectively prevent a response error from occurring. Further, a response error occurring means that the tempo or the rhythm of the current conversation is not suitable for the degree of the user's emotion. The voice interaction system 1 according to the third example is able to make the tempo or the rhythm of the conversation suitable for the degree of the user's emotion by selecting the learning model that corresponds to the degree of the user's emotion. Further, the degree of user's emotion in the conversation may vary depending on the topic or the like of the conversation. The voice interaction system 1 according to the third example is able to change the learning model depending on the change in the degree of emotion.
Further, in the third example, when the learning model is generated, the degree of the user's emotion is associated as the user state. In other words, a plurality of learning models are, for example, generated for each degree of emotion. When the learning model is generated, the operator inputs the degree of the user's emotion during the conversation, whereby the sample data and the degree of the user's emotion are associated with each other. Further, also when the learning model is generated, the degree of the user's emotion during the conversation may be determined using the face image of the user.
Accordingly, the sample data is classified for each degree of the user's emotion, and a plurality of learning models are generated by machine learning using the sample data that has been classified. Accordingly, for example, the learning model that corresponds to the degree of emotion #1, the learning model that corresponds to the degree of emotion #2, the learning model that corresponds to the degree of emotion #3, and the learning model that corresponds to the degree of emotion #4 are generated.
While the degree of the user's emotion is detected using the face image of the user in the aforementioned examples, the method of detecting the degree of the user's emotion is not limited to this method. The user state detection unit 140 may detect the degree of emotion by, for example, acquiring the user speech. To be more specific, the user state detection unit 140 may determine the user's emotion by analyzing the prosody of the user speech and digitize the degree of emotion.
Further, in the aforementioned examples, the user state detection unit 140 detects the degree of emotion. The user state detection unit 140 may detect, however, the type of the emotion, that is, joy, sorrow, anger, surprise and the like. To be more specific, the user state detection unit 140 detects the numerical values indicating joy, sorrow, anger, and surprise, respectively. Then the user state detection unit 140 may detect the emotion (e.g., “anger”) that corresponds to the largest value among these numerical values as the user's emotion. In this case, a plurality of learning models are provided for each type of emotion. Then the learning model selection unit 150 may select the learning model in accordance with the type of the emotion that has been detected (e.g., “anger”).
Further, the user state detection unit 140 may detect the degree for each type of emotion. In this case, the learning model database 160 may store, for example, a learning model in a case in which the degree of anger is X1 and the degree of surprise is Y1, a learning model in a case in which the degree of anger is X1 and the degree of surprise is Y2, a learning model in a case in which the degree of anger is X2 and the degree of surprise is Y1, and a learning model in a case in which the degree of anger is X2 and the degree of surprise is Y2. Then the learning model selection unit 150 may select the learning model that corresponds to the degree of anger and the degree of surprise that have been detected.
The user state detection unit 140 acquires biological system parameters of the user from the detection device 6 which is a biometric sensor (Step S132D). The biological system parameters are, for example, heart rate. The user state detection unit 140 detects the degree of the user's health condition from the biological system parameters (Step S134D). The user state detection unit 140 may detect the degree of the health condition by using, for example, a table in which the numerical value (heart rate) indicating the health condition and the degree of the health condition are associated with each other, as illustrated in
Next, the learning model selection unit 150 selects the learning model that corresponds to the degree of the health condition (heart rate) that has been detected (Step S142D). It is assumed that the plurality of learning models are stored in the learning model database 160 in advance for each degree of the user's health condition. When, for example, “the degree of the health condition (heart rate) #1” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of the health condition (heart rate) #1”. Further, when “the degree of the health condition (heart rate) #4” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of the health condition (heart rate) #4”.
As described above, the voice interaction system 1 according to the fourth example has a conversation with the user using a learning model that is suitable for the degree of the user's health condition, whereby this voice interaction system 1 is able to execute a response in accordance with the health condition of the user who has a conversation. Accordingly, the voice interaction system 1 according to the fourth example is capable of appropriately handling a situation so as to effectively prevent a response error from occurring. Further, a response error occurring means that the tempo or the rhythm of the current conversation is not suitable for the degree of the user's health condition. The voice interaction system 1 according to the fourth example selects the learning model that corresponds to the degree of the user's health condition, whereby it becomes possible to make the tempo or the rhythm of the conversation suitable for the degree of the user's health condition. Further, the degree of heart rate or the like of the user may vary depending on the topic or the like of the conversation. The voice interaction system 1 according to the fourth example is able to change the learning model depending on the change in the degree of the health condition such as heart rate.
Further, in the fourth example, when the learning model is generated, the degree of the user's health condition is associated as the user state. In other words, a plurality of learning models are generated for each degree of the health condition. When the learning model is generated, the degree of the user's health condition during the conversation is input using, for example, a biometric sensor, whereby the sample data and the degree of the user's health condition are associated with each other.
Accordingly, the sample data is classified for each degree of the user's health condition, and a plurality of learning models are generated by machine learning using the sample data that has been classified. Accordingly, for example, the learning model that corresponds to the degree of the health condition #1, the learning model that corresponds to the degree of the health condition #2, the learning model that corresponds to the degree of the health condition #3, and the learning model that corresponds to the degree of the health condition #4 are generated.
While the degree of the user's health condition is detected using the biometric sensor in the aforementioned examples, the method of detecting the degree of the user's health condition is not limited to this method. The user state detection unit 140 may detect the degree of the user's health condition by acquiring, for example, the face image of the user from the detection device 6, which is a camera. In this case, the user state detection unit 140 may detect the face color (red, blue, white, yellow, or black) of the user by analyzing the face image. Then the user state detection unit 140 may detect the health condition depending on which color of red, blue, white, yellow, and black the color of the face of the user is close to. In this case, a plurality of learning models are stored for each color of the face of the user.
Further, the user state detection unit 140 may determine whether the user's health condition is good or poor, or the degree of fatigue of the user from the plurality of biological system parameters (heart rate, blood pressure, and body temperature). Further, the user state detection unit 140 may determine whether each of the heart rate, the blood pressure, and the body temperature is within a predetermined normal range, and determine the degree of the health condition (whether the health condition is good or poor) depending on the number of biological system parameters that have fallen outside the normal range.
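A minimal sketch of this out-of-range counting is shown below; the normal ranges and the use of the count itself as the degree are assumed values used only for illustration.

```python
# Hypothetical normal ranges; the actual ranges would be set as appropriate.
NORMAL_RANGES = {
    "heart_rate": (60, 100),            # beats per minute
    "blood_pressure": (90, 140),        # systolic, mmHg
    "body_temperature": (35.5, 37.5),   # degrees Celsius
}

def health_condition_degree(params: dict) -> int:
    """Count how many biological system parameters fall outside their normal range;
    a larger count indicates a poorer health condition (0 means all are normal)."""
    out_of_range = 0
    for name, value in params.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            out_of_range += 1
    return out_of_range

degree = health_condition_degree(
    {"heart_rate": 88, "blood_pressure": 150, "body_temperature": 36.6})   # -> 1
```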
The user state detection unit 140 acquires the biological system parameters of the user from the detection device 6, which is a camera or a biometric sensor (Step S132E). The biological system parameters are, for example, at least one of an eye blink, heartbeat, and brain waves. The eye blink may be acquired by analyzing the face image of the user acquired from a camera. The heartbeat and the brain waves may be acquired by respectively using a heart rate meter and an electroencephalograph, which are biometric sensors.
The user state detection unit 140 detects the degree of the user's awakening state from the biological system parameters (Step S134E). The user state detection unit 140 calculates, for example, the awakening degree from the aforementioned biological system parameters. The user state detection unit 140 may calculate, for example, the awakening degree from the intervals between eye blinks, the time during which the user opens his/her eyes during eyeblink, the opening degree of the eyes or the like. Then the user state detection unit 140 may detect the degree of the awakening state using a table in which the awakening degree and the degree of the awakening state are associated with each other as illustrated in
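The following sketch shows one hypothetical way to combine such quantities into an awakening degree; the weighting and normalization are assumptions introduced only for illustration, since the disclosure merely states that these quantities may be used.

```python
def awakening_degree(blink_intervals_s: list, eye_opening_ratio: float) -> float:
    """Rough illustrative score in 0..1: longer intervals between eye blinks and more
    widely opened eyes are treated as signs of a higher awakening state. The weighting
    and the 10-second normalization are assumed values, not part of the disclosure."""
    if not blink_intervals_s:
        return eye_opening_ratio
    mean_interval = sum(blink_intervals_s) / len(blink_intervals_s)
    interval_score = min(mean_interval / 10.0, 1.0)   # 10 s or longer counts as fully awake
    return 0.5 * interval_score + 0.5 * eye_opening_ratio

# A table similar to those of the other examples would then map this score to
# "the degree of the awakening state #1" ... "#4".
```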
Next, the learning model selection unit 150 selects the learning model that corresponds to the degree of the awakening state that has been detected (Step S142E). It is assumed that a plurality of learning models are stored in advance in the learning model database 160 for each degree of the user's awakening state. When, for example, “the degree of the awakening state #1” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of the awakening state #1”. Further, when “the degree of the awakening state #4” has been detected by the user state detection unit 140, the learning model selection unit 150 selects the learning model that corresponds to “the degree of the awakening state #4”.
As described above, the voice interaction system 1 according to the fifth example has a conversation using the learning model adapted for the degree of the user's awakening state, whereby the above voice interaction system 1 is able to make a response in accordance with the awakening state of the user who has a conversation. Accordingly, the voice interaction system 1 according to the fifth example is capable of appropriately handling a situation so as to effectively prevent a response error from occurring. Further, a response error occurring means that the tempo or the rhythm of the current conversation is not suitable for the degree of the user's awakening state. The voice interaction system 1 according to the fifth example is able to make the tempo or the rhythm of the conversation suitable for the degree of the user's awakening state by selecting the learning model that corresponds to the degree of the user's awakening state. Further, the awakening degree of the user may vary depending on the topic or the like of the conversation. The voice interaction system 1 according to the fifth example is able to change the learning model in accordance with the change in the awakening degree.
Further, in the fifth example, when the learning model is generated, the degree of the user's awakening state is associated as the user state. In other words, a plurality of learning models are generated for each degree of the awakening state. When the learning model is generated, the degree of the user's awakening state during a conversation is input using, for example, a camera or a biometric sensor, whereby the sample data and the degree of the user's awakening state are associated with each other.
Accordingly, the sample data is classified for each degree of the user's awakening state, and a plurality of learning models are generated by machine learning using the sample data that has been classified. Accordingly, for example, the learning model that corresponds to the degree of the awakening state #1, the learning model that corresponds to the degree of the awakening state #2, the learning model that corresponds to the degree of the awakening state #3, and the learning model that corresponds to the degree of the awakening state #4 are generated.
While the degree of the user's awakening state is detected using a camera or a biometric sensor in the aforementioned examples, the method of detecting the degree of the user's awakening state is not limited to this method. The user state detection unit 140 may detect the degree of the awakening state by acquiring the user speech. To be more specific, the user state detection unit 140 may analyze the prosody of the user speech, determine the user's awakening state, and digitize the degree of the awakening state.
Next, a second embodiment will be explained. The second embodiment is different from the first embodiment in that the voice interaction system 1 generates a plurality of learning models. Since the hardware configuration of the voice interaction system 1 according to the second embodiment is substantially similar to the hardware configuration of the voice interaction system 1 according to the first embodiment shown in
The learning model generation apparatus 200 generates a plurality of learning models by a method described later. The plurality of learning models generated by the learning model generation apparatus 200 are stored in the learning model database 160. The learning models may either be automatically stored by the learning model generation apparatus 200 or be manually stored by a worker such as an operator.
The learning model generation apparatus 200 does not necessarily have to be physically integrated with the other components. That is, an apparatus (such as a robot) in which the other components are provided and an apparatus (such as a computer) in which the learning model generation apparatus 200 is provided may not be the same. A specific configuration of the learning model generation apparatus 200 will be explained below. The processing of the learning model generation apparatus 200 (processing shown in
The speech acquisition unit 212 acquires user speech, which is speech given by at least one desired user, by having a conversation with the desired user, in a way similar to that of the processing of S102 in
Next, the sample data generation unit 216 generates sample data in which a correct label indicating a response to the user speech and a feature vector are associated with each other (Step S206). To be more specific, as described above with reference to
The user state acquisition unit 218 acquires the user state, which is a desired user's state when the user has made a speech, and associates the acquired user state with the sample data that corresponds to the user speech (Step S210). To be more specific, the user state acquisition unit 218 may acquire the user state of the desired user using the image of the user, the user speech, the biological system parameters or the like, as described with reference to
The learning model generation apparatus 200 determines whether to end acquisition of the user speech (Step S212). When the acquisition of the user speech should be continued (NO in S212), the learning model generation apparatus 200 repeats the processing of S202-S210. On the other hand, when the acquisition of the user speech is ended as a sufficient amount of sample data has been acquired (YES in S212), the sample data classification unit 220 classifies the sample data for each user state, as described with reference to
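A minimal sketch of this classification and per-state model generation is shown below, again using a scikit-learn support vector machine purely as an example of supervised machine learning; the sample tuple layout is an assumption made for illustration.

```python
from collections import defaultdict
from sklearn.svm import SVC

def generate_learning_models(samples):
    """`samples` is a list of (feature_vector, correct_label, user_state) tuples
    collected through conversations with one or more desired users. The sample
    data are classified for each user state and one learning model is generated
    per state by supervised machine learning (an SVM is used only as an example).
    Each classified group is assumed to contain more than one kind of label."""
    classified = defaultdict(list)
    for feature_vector, correct_label, user_state in samples:
        classified[user_state].append((feature_vector, correct_label))

    models = {}
    for user_state, pairs in classified.items():
        X = [fv for fv, _ in pairs]
        y = [label for _, label in pairs]
        model = SVC()
        model.fit(X, y)              # learn the feature-to-response mapping for this state
        models[user_state] = model   # e.g. models["degree of activeness #1"]
    return models
```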
As described above, the learning model generation apparatus 200 according to the second embodiment classifies the sample data for each user state and generates a plurality of learning models by machine learning, whereby this learning model generation apparatus is able to generate a plurality of learning models that correspond to the user state. Accordingly, the voice interaction system 1 is able to select the learning model in accordance with the user state, as described above.
The present disclosure is not limited to the aforementioned embodiments and may be changed as appropriate without departing from the spirit of the present disclosure. For example, in the aforementioned flowcharts, the order of the plurality of processes may be changed as appropriate. Further, in the aforementioned flowcharts, one of the plurality of processes may be omitted. For example, the processing of S130 in
Further, the first to fifth examples of the user state described with reference to
While the feature vector (
Further, while the example in which the voice interaction system 1 is installed in a robot is illustrated in the aforementioned embodiments, this configuration is merely an example. The voice interaction system 1 may be installed in an information terminal such as a smartphone or a tablet terminal. In this case, when the “nod response” is performed, a video image in which a person, an animal, a robot or the like nods may be, for example, displayed on the display screen of the information terminal instead of operating the manipulator 8.
In the above examples, the program can be stored and provided to a computer using any type of non-transitory computer readable media. Non-transitory computer readable media include any type of tangible storage media. Examples of non-transitory computer readable media include magnetic storage media (such as flexible disks, magnetic tapes, hard disk drives, etc.), optical magnetic storage media (e.g. magneto-optical disks), CD-ROM, CD-R, CD-R/W, and semiconductor memories (such as mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM, etc.). The program may be provided to a computer using any type of transitory computer readable media. Examples of transitory computer readable media include electric signals, optical signals, and electromagnetic waves. Transitory computer readable media can provide the program to a computer via a wired communication line (e.g. electric wires, and optical fibers) or a wireless communication line.
From the disclosure thus described, it will be obvious that the embodiments of the disclosure may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims.