The present invention relates to a user interface system and a user interface control device capable of voice operation.
In a device having a user interface capable of voice operation, a single button for the voice operation is usually provided. When the button for the voice operation is pressed, a guidance such as “Please speak when you hear the beep” is played, and the user then speaks (voice input). When speaking, the user must utter a predetermined keyword according to predetermined procedures. Voice guidance is played from the device, and the target function is executed only after the interaction with the device has been repeated several times. Such a device has the problem that a user who cannot memorize the keywords or the procedures cannot perform the voice operation. A further problem is that a plurality of interactions with the device are needed, so that it takes time to complete the operation.
Accordingly, a user interface has been proposed in which a plurality of buttons are each associated with voice recognition related to the function of the button, so that a target function can be executed with one utterance and without memorizing procedures (Patent Literature 1).
Patent Literature 1: WO 2013/015364
However, the number of entrances to the voice operation is limited to the number of buttons displayed on the screen, and hence a problem arises in that many entrances to the voice operation cannot be arranged. Conversely, if many entrances to the voice operation are arranged, the number of buttons becomes extremely large, so that it becomes difficult to find the target button.
The present invention has been made in order to solve the above problems, and an object thereof is to reduce an operational load of a user who performs a voice input.
A user interface system according to the invention includes: an estimator that estimates an intention of a voice operation of a user, based on information related to a current situation; a candidate selector that allows the user to select one candidate from among a plurality of candidates for the voice operation estimated by the estimator; a guidance output processor that outputs a guidance to request a voice input of the user concerning the candidate selected by the user; and a function executor that executes a function corresponding to the voice input of the user to the guidance.
A user interface control device according to the invention includes: an estimator that estimates an intention of a voice operation of a user, based on information related to a current situation; a guidance generator that generates a guidance to request a voice input of the user concerning one candidate that is determined based on a selection by the user from among a plurality of candidates for the voice operation estimated by the estimator; a voice recognizer that recognizes the voice input of the user to the guidance; and a function determinator that outputs instruction information such that a function corresponding to the recognized voice input is executed.
A user interface control method according to the invention includes the steps of: estimating a voice operation intended by a user, based on information related to a current situation; generating a guidance to request a voice input of the user concerning one candidate that is determined based on a selection by the user from among a plurality of candidates for the voice operation estimated in the estimating step; recognizing the voice input of the user to the guidance; and outputting instruction information such that a function corresponding to the recognized voice input is executed.
A user interface control program according to the invention causes a computer to execute: estimation processing that estimates an intention of a voice operation of a user, based on information related to a current situation; guidance generation processing that generates a guidance to request a voice input of the user concerning one candidate that is determined based on a selection by the user from among a plurality of candidates for the voice operation estimated by the estimation processing; voice recognition processing that recognizes the voice input of the user to the guidance; and processing that outputs instruction information such that a function corresponding to the recognized voice input is executed.
According to the present invention, since an entrance to the voice operation that meets the intention of the user is provided in accordance with the situation, it is possible to reduce an operational load of the user who performs the voice input.
The estimation section 3 receives information related to a current situation and estimates candidates for the voice operation that the user will perform at the present time, that is, candidates for the voice operation that meet the intention of the user. Examples of the information related to the current situation include external environment information and history information; the estimation section 3 may use both of them or only one of them. The external environment information includes vehicle information, such as the current speed of the own vehicle and the brake condition, and information such as temperature, current time, and current position. The vehicle information is acquired through a CAN (Controller Area Network) or the like, the temperature is acquired with a temperature sensor or the like, and the current position is acquired by using a GPS signal transmitted from a GPS (Global Positioning System) satellite. The history information includes, for example, setting information such as facilities previously set as destinations by the user, operations previously performed by the user on equipment such as a car navigation device, audio, air conditioner, and telephone, contents previously selected by the user in the candidate selection section 5 described later, contents previously input by voice by the user, and functions previously executed in the function execution section 10 described later; each item of history information is stored together with its date and time of occurrence, position information, and so on. The estimation section 3 thus uses, from the history information, the information related to the current time and the current position for the estimation. In this way, even past information is included in the information related to the current situation insofar as it influences the current situation. The history information may be stored in a storage section in the user interface control device or in a storage section of a server.
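Purely as an illustration (the specification does not prescribe any particular data layout), the external environment information and one entry of the history information handled by the estimation section 3 might be structured as in the following sketch; all class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

@dataclass
class HistoryEntry:
    """One record of past user behavior, stored with its date and
    time of occurrence and position information."""
    operation: str                 # e.g. "call" or "set a destination"
    detail: str                    # e.g. the callee or the facility set
    timestamp: datetime            # date and time of occurrence
    position: Tuple[float, float]  # latitude/longitude at that time

@dataclass
class ExternalEnvironment:
    """External environment information referred to by the estimation section."""
    speed_kmh: float               # current speed of the own vehicle (CAN)
    brake_on: bool                 # brake condition (CAN)
    temperature_c: float           # from a temperature sensor
    now: datetime                  # current time
    position: Tuple[float, float]  # current position from GPS
```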
From among the plurality of candidates for the voice operation estimated by the estimation section 3, the candidate determination section 4 extracts as many candidates as can be presented by the candidate selection section 5, and outputs the extracted candidates to the candidate selection section 5. Note that the estimation section 3 may assign, to each of the functions, a probability of matching the intention of the user. In this case, the candidate determination section 4 may extract, in descending order of the probabilities, as many candidates as can be presented by the candidate selection section 5. Alternatively, the estimation section 3 may output the candidates to be presented directly to the candidate selection section 5. The candidate selection section 5 presents the candidates for the voice operation received from the candidate determination section 4 to the user such that the user can select the desired target of the voice operation. That is, the candidate selection section 5 functions as an entrance to the voice operation. Hereinbelow, the description will be given on the assumption that the candidate selection section 5 is a touch panel display. For example, in the case where the maximum number of candidates that can be displayed on the candidate selection section 5 is three, the three candidates estimated by the estimation section 3 are displayed in descending order of their likelihoods. When the number of candidates estimated by the estimation section 3 is one, that one candidate is displayed on the candidate selection section 5.
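The extraction in descending order of probability amounts to a simple top-N selection. A minimal sketch, assuming the estimation section supplies (candidate, probability) pairs (names hypothetical):

```python
from typing import List, Tuple

def extract_candidates(estimates: List[Tuple[str, float]],
                       max_presentable: int = 3) -> List[str]:
    """Extract as many candidates as the candidate selection section can
    present, in descending order of the assigned probability."""
    ranked = sorted(estimates, key=lambda e: e[1], reverse=True)
    return [name for name, _prob in ranked[:max_presentable]]

# Example: with a three-slot display, the three most probable candidates are shown.
print(extract_candidates([("set a destination", 0.3), ("call", 0.6),
                          ("listen to the radio", 0.1)]))
# -> ['call', 'set a destination', 'listen to the radio']
```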
The user selects the candidate that he or she desires to input by voice from among the displayed candidates. As a selection method, the user may touch and select the candidate displayed on the touch panel display. When a candidate for the voice operation is selected by the user, the candidate selection section 5 transmits the selected coordinate position on the touch panel display to the candidate determination section 4, and the candidate determination section 4 associates the coordinate position with the candidate for the voice operation and determines the target on which the voice operation is to be performed. Note that the determination of the target of the voice operation may instead be performed in the candidate selection section 5, and the information on the selected candidate for the voice operation may be output directly to the guidance generation section 6. The determined target of the voice operation is accumulated as history information together with the time information, position information, and the like, and is used for future estimation of candidates for the voice operation.
The guidance generation section 6 generates a guidance that requests a voice input from the user in accordance with the target of the voice operation determined in the candidate selection section 5. The guidance is preferably provided in the form of a question, so that the user can perform the voice input by answering it. When the guidance is generated, a guidance dictionary is used that stores a voice guidance, a display guidance, or a sound effect predetermined for each candidate for the voice operation displayed on the candidate selection section 5. The guidance dictionary may be stored in the storage section in the user interface control device or in the storage section of the server.
The guidance output section 7 outputs the guidance generated in the guidance generation section 6. The guidance output section 7 may be a speaker that outputs the guidance by voice, or a display section that outputs the guidance as text. Alternatively, the guidance may be output by using both the speaker and the display section. In the case where the guidance is output as text, the touch panel display serving as the candidate selection section 5 may also be used as the guidance output section 7.
The voice recognition section 8 performs voice recognition of the content of the user's utterance in response to the guidance of the guidance output section 7. At this point, the voice recognition section 8 performs the voice recognition by using a voice recognition dictionary. A single voice recognition dictionary may be used, or the dictionary may be switched according to the target of the voice operation determined in the candidate determination section 4. Switching or narrowing the dictionary improves the voice recognition rate. In the case where the dictionary is switched or narrowed, the information related to the target of the voice operation determined in the candidate determination section 4 is input not only to the guidance generation section 6 but also to the voice recognition section 8. The voice recognition dictionary may be stored in the storage section in the user interface control device or in the storage section of the server.
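A minimal sketch of the dictionary switching, assuming narrowed dictionaries keyed by the determined voice-operation target (the mapping, its contents, and the function name are all hypothetical):

```python
from typing import Dict, List

# Hypothetical narrowed dictionaries, keyed by the determined target of
# the voice operation; the actual dictionary contents are not prescribed.
NARROWED_DICTIONARIES: Dict[str, List[str]] = {
    "call": ["Yamada", "Yamada Taro", "Yamada Kyoko"],
    "set a destination": ["Japanese recreation park", "zoo", "aquarium"],
}
GENERAL_DICTIONARY: List[str] = []  # fallback vocabulary (contents omitted)

def pick_dictionary(target: str) -> List[str]:
    """Switch to the dictionary narrowed to the selected voice-operation
    target; fall back to the general dictionary otherwise. Narrowing the
    vocabulary in this way is what improves the recognition rate."""
    return NARROWED_DICTIONARIES.get(target, GENERAL_DICTIONARY)
```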
The function determination section 9 determines the function corresponding to the voice input recognized in the voice recognition section 8, and transmits, to the function execution section 10, instruction information to the effect that the function is to be executed. The function execution section 10 includes equipment in the automobile such as the car navigation device, audio, air conditioner, and telephone, and the functions are those executed by these pieces of equipment. For example, in the case where the voice recognition section 8 has recognized the user's voice input of “Yamada”, the function determination section 9 transmits, to the telephone included in the function execution section 10, instruction information to the effect that the function “call Yamada” is to be executed. The executed function is accumulated as history information together with the time information, position information, and the like, and is used for future estimation of candidates for the voice operation.
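As a sketch of how the recognized input could be turned into instruction information (the dictionary format and function name below are purely illustrative assumptions, not the claimed interface):

```python
from typing import Dict

def determine_function(target: str, recognized: str) -> Dict[str, str]:
    """Build the instruction information sent to the function execution
    section; the instruction format here is purely illustrative."""
    if target == "call":
        # e.g. recognized == "Yamada" -> the telephone executes "call Yamada"
        return {"equipment": "telephone", "action": "call", "argument": recognized}
    if target == "set a destination":
        return {"equipment": "car navigation", "action": "route to", "argument": recognized}
    raise ValueError(f"no function corresponds to target: {target}")
```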
The estimation section 3 estimates, by using the information related to the current situation (the external environment information, operation history, and the like), candidates for the voice operation that the user will perform, that is, the voice operation that the user will desire to perform (ST101). In the case where the user interface system is used as, for example, a vehicle-mounted device, the estimation may be started when the engine is started, and may be performed periodically, for example, every few seconds, or at a timing when the external environment changes. Examples of the voice operation to be estimated include the following. For a user who often makes a telephone call from the parking area of his company before going home after work, in a situation in which the current position is the “company parking area” and the current time is “night”, the voice operation of “call” is estimated. The estimation section 3 may estimate a plurality of candidates for the voice operation. For example, for a user who often makes a telephone call, sets a destination, and listens to the radio when going home, the estimation section 3 estimates the functions of “call”, “set a destination”, and “listen to the radio” in descending order of the probabilities.
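The specification does not prescribe a particular estimation algorithm; a deliberately simple frequency heuristic consistent with the example above might look as follows (all names are hypothetical; `history` holds (operation, timestamp, position) tuples with datetime timestamps, and `same_area` compares two positions, e.g. against the company parking area):

```python
from collections import Counter

def estimate_voice_operations(history, now, here, same_area):
    """Rank voice operations by how often the user performed them in a
    similar situation (same area, similar hour) in the past."""
    counts = Counter(
        op for op, ts, pos in history
        if same_area(pos, here) and abs(ts.hour - now.hour) <= 1
    )
    total = sum(counts.values()) or 1
    # Descending probabilities, e.g. [("call", 0.6), ("set a destination", 0.3), ...]
    return [(op, n / total) for op, n in counts.most_common()]
```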
The candidate selection section 5 acquires information on the candidates for the voice operation to be presented from the candidate determination section 4 or the estimation section 3, and presents the candidates (ST102). Specifically, the candidates are displayed on, for example, the touch panel display.
Next, the candidate determination section 4 or the candidate selection section 5 identifies which candidate the user has selected from among the displayed candidates for the voice operation, and determines the target of the voice operation (ST103).
Next, the guidance generation section 6 generates the guidance that requests the voice input to the user in accordance with the target of the voice operation determined by the candidate determination section 4. Subsequently, the guidance output section 7 outputs the guidance generated in the guidance generation section 6 (ST104).
The voice recognition section 8 performs the voice recognition by using the voice recognition dictionary (ST105). At this point, the voice recognition dictionary to be used may be switched to a dictionary related to the voice operation determined in ST103. For example, in the case where the voice operation of “call” is selected, the dictionary to be used may be switched to a dictionary that stores words related to “telephone”, such as the family names of persons and the names of facilities whose telephone numbers are registered.
The function determination section 9 determines the function corresponding to the recognized voice, and transmits, to the function execution section 10, instruction information to the effect that the function is to be executed. Subsequently, the function execution section 10 executes the function based on the instruction information (ST106). For example, when the voice of “Yamada” is recognized, the function of calling Yamada is executed.
In the above description, it is assumed that the candidate selection section 5 is the touch panel display, in which the presentation section that notifies the user of the estimated candidates for the voice operation and the input section that allows the user to select one candidate are integrated with each other. However, the configuration of the candidate selection section 5 is not limited thereto. As described below, the presentation section that notifies the user of the estimated candidates for the voice operation and the input section that allows the user to select one candidate may also be configured separately. For example, the candidate displayed on the display may be selected by a cursor operation with a joystick or the like; in this case, the display as the presentation section and the joystick or the like as the input section constitute the candidate selection section 5. In addition, hard buttons corresponding to the candidates displayed on the display may be provided on a steering wheel or the like, and a candidate may be selected by pressing the corresponding hard button; in this case, the display as the presentation section and the hard buttons as the input section constitute the candidate selection section 5. Further, the displayed candidate may also be selected by a gesture operation; in this case, a camera or the like that detects the gesture operation is included in the candidate selection section 5 as the input section. Furthermore, the estimated candidates for the voice operation may be output by voice from a speaker, and a candidate may be selected by the user through a button operation, joystick operation, or voice operation; in this case, the speaker as the presentation section and the hard button, the joystick, or a microphone as the input section constitute the candidate selection section 5. When the guidance output section 7 is a speaker, that speaker can also be used as the presentation section of the candidate selection section 5.
In the case where the user notices an erroneous operation after a candidate for the voice operation has been selected, the candidate can be re-selected from among the plurality of presented candidates. For example, in the case where three candidates are displayed, the user can select a different one of the displayed candidates to redo the selection.
As described above, according to the user interface system and the user interface control device in Embodiment 1, it is possible to provide candidates for the voice operation that meet the intention of the user in accordance with the situation, that is, entrances to the voice operation, so that the operational load of the user who performs the voice input is reduced. In addition, it is possible to prepare many candidates for the voice operation corresponding to subdivided purposes, and hence it is possible to widely cope with the various purposes of the user.
In Embodiment 1 described above, the example has been described in which the function desired by the user is executed by one voice input of the user in response to the guidance output from the guidance output section 7. In Embodiment 2, a description will be given of a user interface control device and a user interface system capable of executing the function with a simple operation even in the case where the function to be executed cannot be determined from one voice input of the user, for example, in the case where a plurality of recognition results are obtained by the voice recognition section 8, or in the case where a plurality of functions correspond to the recognized voice.
In the present embodiment, the points different from those in Embodiment 1 will be described. The recognition judgment section 11 judges whether or not the voice input recognized as the result of the voice recognition corresponds to exactly one function executed by the function execution section 10, that is, whether or not a plurality of functions corresponding to the recognized voice input are present. Specifically, the recognition judgment section 11 judges whether the number of recognized voice inputs is one or more than one, and in the case where it is one, further judges whether the number of functions corresponding to that voice input is one or more than one.
In the case where the number of recognized voice inputs is one and the number of functions corresponding to the voice input is one, the result of the recognition judgment is output to the function determination section 9, and the function determination section 9 determines the function corresponding to the recognized voice input. The operation in this case is the same as that in Embodiment 1.
On the other hand, in the case where a plurality of voice recognition results are present, the recognition judgment section 11 outputs the recognition results to the function candidate selection section 12. Likewise, even when the number of voice recognition results is one, in the case where a plurality of functions corresponding to the recognized voice input are present, the judgment result (a candidate corresponding to each function) is transmitted to the function candidate selection section 12. The function candidate selection section 12 displays the plurality of candidates judged in the recognition judgment section 11. When the user selects one from among the displayed candidates, the selected candidate is transmitted to the function determination section 9. As a selection method, the candidate displayed on the touch panel display may be touched and selected. In this case, the candidate selection section 5 has the function of an entrance to the voice operation that receives a voice input when the displayed candidate is touched by the user, while the function candidate selection section 12 has the function of a manual operation input section in which the touch operation of the user directly leads to the execution of the function. The function determination section 9 determines the function corresponding to the candidate selected by the user, and transmits, to the function execution section 10, instruction information to the effect that the function is to be executed.
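A sketch of this judgment, under the assumption that the recognition results arrive as a list of strings and that a helper maps one recognized input to its candidate functions (both names hypothetical):

```python
from typing import Callable, List, Tuple

def judge_recognition(results: List[str],
                      functions_for: Callable[[str], List[str]]
                      ) -> Tuple[str, object]:
    """Judge whether the recognized voice input corresponds to exactly
    one executable function (the Embodiment 1 path) or whether candidates
    must be presented for manual selection."""
    if len(results) == 1:
        functions = functions_for(results[0])
        if len(functions) == 1:
            return ("execute", functions[0])   # one input, one function
        return ("present", functions)          # one input, several functions
    # several recognition results: present every corresponding candidate
    return ("present", [f for r in results for f in functions_for(r)])
```

For example, a single recognized input of “Yamada” that matches several registered contacts yields ("present", [...]), and those candidates are then displayed by the function candidate selection section 12.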
When one candidate is selected from among the plurality of candidates displayed on the function candidate selection section 12 by the user's manual operation, the function determination section 9 determines the function corresponding to the selected candidate, and instructs the function execution section 10 to execute the function. Note that the determination of the function to be executed may be performed in the function candidate selection section 12, and the instruction information may be output directly to the function execution section 10 from the function candidate selection section 12. For example, when “Yamada Taro” is selected, Yamada Taro is called.
In ST205, the voice recognition section 8 performs the voice recognition by using the voice recognition dictionary. The recognition judgment section 11 judges whether or not the recognized voice input corresponds to one function executed by the function execution section 10 (ST206). In the case where the number of the recognized voice inputs is one and the number of the functions corresponding to the voice input is one, the recognition judgment section 11 transmits the result of the recognition judgment to the function determination section 9, and the function determination section 9 determines the function corresponding to the recognized voice input. The function execution section 10 executes the function based on the function determined in the function determination section 9 (ST207).
In the case where the recognition judgment section 11 judges that a plurality of recognition results of the voice input in the voice recognition section 8 are present, or that a plurality of functions corresponding to one recognized voice input are present, the candidates corresponding to the plurality of functions are presented by the function candidate selection section 12 (ST208). Specifically, the candidates are displayed on the touch panel display. When one candidate is selected by the user's manual operation from among the candidates displayed on the function candidate selection section 12, the function determination section 9 determines the function to be executed (ST209), and the function execution section 10 executes the function based on the instruction from the function determination section 9 (ST207). Note that, as described above, the determination of the function to be executed may be performed in the function candidate selection section 12, and the instruction information may be output directly from the function candidate selection section 12 to the function execution section 10. When the voice operation and the manual operation are used in combination in this way, the target function can be executed more quickly and reliably than when a voice-only interaction between the user and the equipment is repeated.
In the above description, it is assumed that the function candidate selection section 12 is the touch panel display, and that the presentation section that notifies the user of the candidate for the function and the input section for the user to select one candidate are integrated with each other. But the configuration of the function candidate selection section 12 is not limited thereto. Similarly to the candidate selection section 5, the presentation section that notifies the user of the candidate for the function, and the input section that allows the user to select one candidate may be configured separately. For example, the presentation section is not limited to the display and may be the speaker, and the input section may be a joystick, hard button, or microphone.
Further, the guidance output section may be the speaker, and the candidate selection section 5 and the function candidate selection section 12 may be configured by one display section (touch panel display). Furthermore, the candidate selection section 5 and the function candidate selection section 12 may be configured by one presentation section and one input section. In this case, the candidate for the voice operation and the candidate for the function to be executed are presented by the one presentation section, and the user selects the candidate for the voice operation and selects the function to be executed by using the one input section.
In addition, although the function candidate selection section 12 has been described as being configured such that a candidate for the function is selected by the user's manual operation, it may also be configured such that the function desired by the user is selected by a voice operation from among the displayed candidates for the function or the candidates for the function output by voice. For example, in the case where the candidates “Yamada Taro”, “Yamada Kyoko”, and “Yamada Atsushi” are presented, “Yamada Taro” may be selected by a voice input of “Yamada Taro”, or, when the candidates are respectively associated with numbers such as “1”, “2”, and “3”, “Yamada Taro” may be selected by a voice input of “1”.
As described above, according to the user interface system and the user interface control device in Embodiment 2, even in the case where the target function cannot be specified from one voice input, the user can make a selection from among the presented candidates for the function, so that the target function can be executed with a simple operation.
When a keyword uttered by a user has a broad meaning, there are cases where the function cannot be specified and thus cannot be executed, or where so many function candidates are presented that it takes time to select one. For example, in the case where the user utters “amusement park” in response to the question “Where do you go?”, a large number of facilities belong to “amusement park”, so the amusement park cannot be specified. In addition, when a large number of facility names of amusement parks are displayed as candidates, it takes time for the user to make a selection. Therefore, a feature of the present embodiment is as follows: in the case where the keyword uttered by the user is a word having a broad meaning, candidates for the voice operation that the user will desire to perform are estimated by using an intention estimation technique, the estimated result is presented concretely as a candidate for the voice operation, that is, as an entrance to the voice operation, and the target function can then be executed at the next utterance.
In the present embodiment, a point different from those in Embodiment 2 described above will be mainly described.
The recognition judgment section 11 judges whether the keyword recognized in the voice recognition section 8 is a keyword of an upper level or a keyword of a lower level by using the keyword knowledge 14. In the keyword knowledge 14, for example, keywords of an upper level are stored in association with the keywords of the lower level that belong to them, as in a table in which the upper-level keyword “theme park” is associated with lower-level keywords such as “recreation park”, “zoo”, “aquarium”, and “museum”.
For example, in the case where the first voice input is recognized as “theme park”, since “theme park” is a word of the upper level, words such as “recreation park”, “zoo”, “aquarium”, and “museum”, which are the keywords of the lower level corresponding to “theme park”, are sent to the estimation section 3. The estimation section 3 estimates, by using the external environment information and the history information, the word corresponding to the function that the user will desire to execute from among the words such as “recreation park”, “zoo”, “aquarium”, and “museum” received from the recognition judgment section 11. The candidate words obtained by the estimation are displayed on the candidate selection section 15.
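To make the structure concrete, a minimal sketch of the keyword knowledge 14 and the level judgment, using the example words from the text (the dictionary shape and function name are assumptions):

```python
from typing import Dict, List, Optional

# Hypothetical contents of the keyword knowledge 14: each upper-level
# keyword is stored together with its lower-level keywords.
KEYWORD_KNOWLEDGE: Dict[str, List[str]] = {
    "theme park": ["recreation park", "zoo", "aquarium", "museum"],
}

def lower_level_keywords(keyword: str) -> Optional[List[str]]:
    """Return the lower-level keywords for an upper-level keyword (they
    are then sent to the estimation section 3), or None when the keyword
    is already at the lower level and leads directly to a function."""
    return KEYWORD_KNOWLEDGE.get(keyword)

# lower_level_keywords("theme park")      -> ["recreation park", "zoo", ...]
# lower_level_keywords("recreation park") -> None (executable as-is)
```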
On the other hand, in the case where the recognition judgment section 11 judges that the keyword recognized in the voice recognition section 8 is a word of the lower level leading to the final execution function, the word is sent to the function determination section 9, and the function corresponding to the word is executed by the function execution section 10.
For example, it is assumed that the voice recognition section 8 has recognized the voice as “theme park”. Since “theme park” is a keyword of the upper level, the recognition judgment section 11 sends the corresponding keywords of the lower level to the estimation section 3, and the estimation section 3 estimates, from among them, candidates for the voice operation that meet the intention of the user.
The candidate selection section 15 presents the estimated candidates for the voice operation (ST309). For example, the estimated candidates are displayed on the touch panel display.
When the recognition result of the voice recognition section 8 is the executable keyword of the lower level, the function corresponding to the keyword is executed (ST307). For example, in the case where the user has uttered “Japanese recreation park” in response to the guidance of “which recreation park do you go?”, the function of, for example, retrieving a route to “Japanese recreation park” is executed by the car navigation device as the function execution section 10.
The target of the voice operation determined by the candidate determination section 4 in ST309 and the function executed by the function execution section 10 in ST307 are accumulated in a database (not shown) as the history information together with time information, position information and the like, and are used for future estimation of the candidate for the voice operation.
In addition, in the above description, it is assumed that the candidate selection section 15 is the touch panel display, and that the presentation section that notifies the user of the estimated candidate for the voice operation and the input section for the user to select one candidate are integrated with each other, but the configuration of the candidate selection section 15 is not limited thereto. As described in Embodiment 1, the presentation section that notifies the user of the estimated candidate for the voice operation and the input section for the user to select one candidate may be configured separately. For example, the presentation section is not limited to the display but may also be the speaker, and the input section may also be a joystick, hard button, or microphone.
In addition, in the above description, it is assumed that the keyword knowledge 14 is stored in the user interface control device, but it may also be stored in the storage section of the server.
As described above, according to the user interface system and the user interface control device in Embodiment 3, even when the keyword input by voice by the user has a broad meaning, the candidates for the voice operation that meet the intention of the user are re-estimated and thus narrowed, and the narrowed candidates are presented to the user, so that the operational load of the user who performs the voice input can be reduced.
In each Embodiment described above, the candidates for the voice operation estimated by the estimation section 3 are presented to the user. However, in the case where the likelihood of each of the candidates estimated by the estimation section 3 is low, candidates that each have only a low probability of matching the intention of the user would be presented. Therefore, in Embodiment 4, in the case where the likelihood of each of the candidates determined by the estimation section 3 is low, the candidates are converted to a superordinate concept and then presented.
In the present embodiment, a point different from those in Embodiment 1 described above will be mainly described.
The estimation section 3 receives the information related to the current situation, such as the external environment information and the history information, and estimates candidates for the voice operation that the user will perform at the present time. In the case where the likelihood of each of the candidates extracted by the estimation is low but the likelihood of an upper-level voice operation encompassing them is high, the estimation section 3 transmits the upper-level candidate for the voice operation to the candidate determination section 4.
The estimation section 3 estimates candidates for the voice operation that the user will perform by using the information related to the current situation (the external environment information, history information, and the like) (ST401). Next, the estimation section 3 calculates the likelihood of each estimated candidate (ST402). When the likelihood of each candidate is high (ST403), the flow proceeds to ST404, where the candidate determination section 4 identifies which candidate the user has selected from among the candidates for the voice operation presented in the candidate selection section 5, and determines the target of the voice operation. Additionally, the determination of the target of the voice operation may be performed in the candidate selection section 5, and the information on the selected candidate for the voice operation may be output directly to the guidance generation section 6. The guidance output section 7 outputs the guidance that requests a voice input from the user in accordance with the determined target of the voice operation (ST405). The voice recognition section 8 recognizes the voice input by the user in response to the guidance (ST406), and the function execution section 10 executes the function corresponding to the recognized voice (ST407).
On the other hand, in the case where the estimation section 3 determines in ST403 that the likelihood of each estimated candidate is low, the flow proceeds to ST408. An example of such a case is one in which a plurality of candidates are estimated but each of them has only a low likelihood.
Therefore, in Embodiment 4, the likelihood of the upper-level voice operation of the estimated candidates is calculated. As a calculation method, for example, the likelihoods of the candidates of the lower level that belong to the same voice operation of the upper level are added together. When the likelihood of the upper-level voice operation obtained in this way is high, the candidate for the voice operation of the upper level is presented instead of the individual lower-level candidates.
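A minimal sketch of this aggregation, reusing the keyword-knowledge shape from Embodiment 3; the numbers in the example are illustrative only, not taken from the specification:

```python
from typing import Dict, List

def upper_level_likelihoods(lower: Dict[str, float],
                            knowledge: Dict[str, List[str]]
                            ) -> Dict[str, float]:
    """Add together the likelihoods of lower-level candidates that
    belong to the same upper-level voice operation."""
    return {upper: sum(lower.get(word, 0.0) for word in words)
            for upper, words in knowledge.items()}

# Illustrative numbers only: three weak lower-level candidates add up
# to one stronger upper-level candidate, which is presented instead.
print(upper_level_likelihoods(
    {"recreation park": 0.20, "zoo": 0.15, "aquarium": 0.10},
    {"theme park": ["recreation park", "zoo", "aquarium", "museum"]},
))  # -> {'theme park': 0.45} (up to floating-point rounding)
```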
Note that, in the above description, it is assumed that the keyword knowledge 14 is stored in the user interface control device, but it may also be stored in the storage section of the server.
As described above, according to the user interface system and the user interface control device in Embodiment 4, a candidate for the voice operation of the superordinate concept that has a high probability of matching the intention of the user is presented, and hence the voice input can be performed more reliably.
The storage device 20 is, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), or an HDD (Hard Disk Drive). The storage section of the server and the storage section of the user interface control device 2 can be implemented by the storage device 20. In the storage device 20, a program 21 and a file 22 are stored. The program 21 includes programs that execute the processing of the individual sections. The file 22 includes the data, information, signals, and the like that are input, output, and operated on by the individual sections. The keyword knowledge 14 is also included in the file 22. Further, the history information, the guidance dictionary, and the voice recognition dictionary may be included in the file 22.
The processing device 30 is, for example, a CPU (Central Processing Unit). The processing device 30 reads the program 21 from the storage device 20, and executes the program 21. The operations of the individual sections of the user interface control device 2 can be implemented by the processing device 30.
The input device 40 is used for inputs (receptions) of data, information, signals and the like by the individual sections of the user interface control device 2. In addition, the output device 50 is used for outputs (transmissions) of the data, information, signals and the like by the individual sections of the user interface control device 2.