This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2016-058958, filed on Mar. 23, 2016, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to a technology for supporting voice input.
In recent years, an augmented reality (AR) technology has been proposed in which an object is superimposed on a captured image displayed by a display device, such as a head mounted display or the like. For the case where a head mounted display is used, a command input using voice recognition has been proposed as an input method. It has also been proposed, in order to manage moving image data, to store representative image data of moving images, a voice-recognized keyword, and the moving image data in association with one another and thus manage indexes.
For example, Japanese Laid-open Patent Publication No. 08-212328, Japanese Laid-open Patent Publication No. 2010-034893, and Japanese Laid-open Patent Publication No. 2006-301757 discuss related art.
According to an aspect of the invention, an information processing system includes circuitry configured to acquire information identifying a plurality of voice commands associated with each of a plurality of screens to be displayed by a display, identify a first plurality of voice commands of the plurality of voice commands corresponding to a first screen, of the plurality of screens, currently displayed by the display, acquire first sound information captured by a microphone, compare the first sound information to first voice patterns associated with the first plurality of voice commands, and output a first result based on a first comparison of the first sound information to the first voice patterns.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
In one aspect, the present disclosure provides a voice input support program, a head mounted display, a voice input support method, and a voice input support device that are capable of increasing voice recognition accuracy.
Embodiments of a voice input support program, a head mounted display, a voice input support method, and a voice input support device disclosed herein will be described in detail below with reference to the accompanying drawings. Note that the technology disclosed herein is not limited to the specific embodiments illustrated herein. Also, embodiments described below may be combined, as appropriate, to the extent that there is no contradiction.
The HMD 10 and the terminal device 100 are coupled to one another, for example, via a wireless local area network (LAN), such as Wi-Fi Direct (registered trademark), so as to be mutually communicable. Also, the terminal device 100 and the server 200 are coupled to one another via a network N so as to be mutually communicable. As the network N, a communication network of an arbitrary type, such as the Internet, a LAN, or a virtual private network (VPN), may be employed, whether the network N is a wired or wireless network.
A user wears the HMD 10 with the terminal device 100, and the HMD 10 displays a display screen transmitted from the terminal device 100. For example, a monocular transmission-type HMD may be used as the HMD 10. Note that, for example, each of various types of HMDs, such as a binocular HMD, an immersive HMD, or the like, may be used as the HMD 10. Also, the HMD 10 includes a microphone as an example of an input section in order to receive a voice input made by the user.
When the HMD 10 acquires sound information collected by the microphone, the HMD 10 refers to the storage section that stores a plurality of voice patterns in association with image information and acquires a voice pattern associated with image information displayed on a screen of a terminal. The HMD 10 compares the acquired sound information and the acquired voice pattern to one another and outputs a comparison result. When the output comparison result indicates that the sound information and the voice pattern match, the HMD 10 transmits a voice command ID (identifier) to the terminal device 100. Thus, the HMD 10 may increase voice recognition accuracy.
The terminal device 100 is an information processing device that the user wears and operates; for example, a mobile communication terminal, such as a tablet terminal, a smartphone, or the like, may be used as the terminal device 100. The terminal device 100 executes, for example, an AR middleware (which will be hereinafter also referred to as an “AR middle”) that operates in cooperation with the HMD 10 and a web application (which will be hereinafter also referred to as a “web app”). The AR middle provides basic functions, such as display of AR contents, screen transition in a display screen, an operation menu, and the like, to the web app. The web app provides, for example, an operation screen related to equipment inspection or the like to the user. Note that, in the following description, the AR middle and the web app together are also referred to as an AR app. Also, when the AR middle and the web app are distinguished from one another, they are described as an “AR middle 100a” and a “web app 100b”.
The server 200 includes, for example, a database that manages the AR contents used for equipment inspection in a certain plant and a database that stores filtering information in each screen of a web app. Note that the filtering information is information in which a voice command ID is associated with a screen, that is, information in which a plurality of voice patterns is associated with image information. In response to a request from the terminal device 100, the server 200 transmits the AR contents to the terminal device 100 via the network N. Also, in response to a request from the terminal device 100, the server 200 transmits the filtering information to the terminal device 100.
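For illustration, the filtering information could be pictured as a mapping from each screen to the voice commands usable on it. The following is a minimal sketch under that assumption; the structure, field names, and values are hypothetical and are not taken from the actual databases.

```python
# Hypothetical sketch of filtering information: each screen ID is associated
# with the voice command IDs (and their voice patterns) that are valid while
# that screen is displayed. All IDs and patterns here are illustrative.
filtering_information = {
    "screen_001": {
        "cmd_001": "menu display",      # voice pattern for the command
        "cmd_002": "select number 1",
    },
    "screen_002": {
        "cmd_003": "go back",
    },
}
```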
Input of a voice command using voice recognition according to the present disclosure will now be compared to input of a voice command using known voice recognition. With known voice recognition, voice recognition is performed even when no processing is associated with the recognition result and, for example, a recognition sound is made to notify the user that voice recognition has been performed. In such a case, however, there is no voice command that corresponds to the recognition result and no processing is performed, so that the user is not able to tell the voice recognition result or the processing result from the recognition sound alone. In contrast, with voice recognition according to the present disclosure, filtering information is used and, when no processing is associated with the recognition result, the result is filtered out and, for example, no recognition sound is made. Therefore, with voice recognition according to the present disclosure, the user knows that the voice command that was input through voice input cannot be used on the current screen.
Notification of filtering information according to the present disclosure will be described.
The HMD 10 performs voice command recognition on sound information input by the user and compares the sound information to the voice patterns included in the filtering information. When, as a result of the comparison, the sound information matches a voice pattern included in the filtering information, the HMD 10 transmits the voice command ID of the matching voice command to the AR middle 100a (Step S3).
The AR middle 100a executes processing of the voice command that corresponds to the received voice command ID (Step S4). Also, when the received voice command ID is the voice command ID of a voice command for executing processing in the web app 100b, the AR middle 100a outputs the voice command ID or the corresponding voice command to the web app 100b (Step S5). Also, when a screen transition occurs in the web app 100b, the AR middle 100a transmits the screen ID of a screen after the transition to the HMD 10 (Step S6). When the HMD 10 receives the screen ID, the HMD 10 starts filtering in voice recognition, based on the filtering information that corresponds to the screen ID.
Next, a configuration of the HMD 10 will be described. As illustrated in
The communication section 11 is realized by, for example, a communication module such as a wireless LAN module. The communication section 11 is a communication interface that is wirelessly coupled to the terminal device 100, for example, via Wi-Fi Direct (registered trademark), and conducts communication of information with the terminal device 100. The communication section 11 receives filtering information, end information, a display screen, and a screen ID from the terminal device 100. The communication section 11 outputs the filtering information, the end information, the display screen, and the screen ID that have been received to the control section 16. Also, the communication section 11 transmits the voice command ID that has been input from the control section 16 to the terminal device 100.
The input section 12 is, for example, a microphone, and collects voice uttered by the user. As the input section 12, any of various types of microphones, such as, for example, an electret condenser microphone, may be used. The input section 12 outputs sound information representing the collected voice to the control section 16.
The display section 13 is a display device used for displaying various types of information. The display section 13 corresponds to, for example, a display element of a transmission-type HMD in which a video image is projected on a half mirror and through which the user sees an external scene together with the video image. Note that the display section 13 may be a display element that corresponds to an immersive HMD, a video see-through HMD, a retina projection HMD, or the like.
The storage section 14 is realized by, for example, a storage device such as a semiconductor memory device, for example, random access memory (RAM) or flash memory. The storage section 14 includes a filtering information storage section 15. Also, the storage section 14 stores information used for processing in the control section 16.
The filtering information storage section 15 stores the filtering information received from the terminal device 100. Note that the filtering information storage section 15 is an example of a voice command dictionary.
“SCREEN ID” is an identifier that identifies a screen that is displayed on the HMD 10. “FILTERING ID” is an identifier that identifies a set of voice commands in a screen that is displayed. Note that the screen ID management table 15a may use, instead of “SCREEN ID”, for example, “APP ID” that identifies the type of the web app 100b. In this case, “FILTERING ID” is an identifier that identifies a set of voice commands in the web app 100b.
The voice command ID management table 15b stores a filtering ID and a voice command ID in association with one another. That is, the voice command ID management table 15b includes items, such as “FILTERING ID” and “VOICE COMMAND ID”.
“FILTERING ID” is an identifier that identifies a set of voice commands in a screen that is displayed. “VOICE COMMAND ID” is an identifier that identifies a voice command. Also, a voice pattern (not illustrated) is associated with “VOICE COMMAND ID” and thus stored.
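A minimal sketch of the two-step lookup through the screen ID management table 15a and the voice command ID management table 15b might look as follows; the concrete IDs and voice patterns are assumptions made for illustration.

```python
# Table 15a (sketch): screen ID -> filtering ID.
screen_id_table_15a = {"screen_001": "filter_A", "screen_002": "filter_B"}

# Table 15b (sketch): filtering ID -> {voice command ID: voice pattern}.
voice_command_table_15b = {
    "filter_A": {"cmd_001": "menu display", "cmd_002": "select number 1"},
    "filter_B": {"cmd_003": "go back"},
}

def commands_for_screen(screen_id: str) -> dict:
    """Resolve the voice command IDs and voice patterns usable on a screen."""
    filtering_id = screen_id_table_15a[screen_id]
    return voice_command_table_15b[filtering_id]

print(commands_for_screen("screen_001"))
# -> {'cmd_001': 'menu display', 'cmd_002': 'select number 1'}
```

Using “APP ID” instead of “SCREEN ID”, as noted above, would only change the key of the first table.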
Returning to the description of
For example, when power is turned on by the user and reception of a display screen is started, the display control section 17 outputs a startup instruction for starting up a voice recognition engine to the acquisition section 18. Also, the display control section 17 receives the filtering information, the display screen, and the screen ID from the terminal device 100 via the communication section 11. The display control section 17 stores the received filtering information in the filtering information storage section 15. Also, when the display control section 17 receives the display screen with which the screen ID is associated from the terminal device 100 via the communication section 11, the display control section 17 outputs the screen ID to the acquisition section 18 and also causes the display section 13 to display the display screen.
Furthermore, when a screen transition occurs for the display screen with which the screen ID is associated, the display control section 17 similarly outputs the screen ID after the transition to the acquisition section 18 and causes the display section 13 to display the display screen after the transition. Note that, when the display control section 17 receives a display screen with which no screen ID is associated, that is, for example, a display screen in a state where the web app 100b has not started up, the display control section 17 causes the display section 13 to display the received display screen.
When, during display of the display screen with which the screen ID is associated, the display screen is updated to a display screen including a voice command recognized in the display screen, the display control section 17 causes the display section 13 to display the updated display screen. That is, the display control section 17 displays, among the plurality of voice commands, a voice command that is associated with the acquired voice pattern on the display screen. Also, the display control section 17 determines whether or not the end information has been received from the terminal device 100 via the communication section 11. If the end information has not been received, the display control section 17 stands by for acquiring the sound information. If the end information has been received, the display control section 17 outputs an end instruction to the acquisition section 18.
When the startup instruction is input to the acquisition section 18 from the display control section 17, the acquisition section 18 starts up the voice recognition engine and starts acquiring sound information collected by the input section 12. The acquisition section 18 converts the acquired sound information to sound information that may be compared to the voice patterns stored in the filtering information storage section 15, using the voice recognition engine. That is, the acquisition section 18 recognizes the voice command. When the screen ID is input to the acquisition section 18 from the display control section 17, the acquisition section 18 refers to the filtering information storage section 15 and acquires one or more voice command IDs and voice patterns associated with the screen ID. The acquisition section 18 outputs the sound information after the conversion, the voice command ID, and the voice pattern to the comparison section 19. That is, the acquisition section 18 starts filtering of the acquired sound information using the filtering information. Also, when the end instruction is input to the acquisition section 18 from the display control section 17, the acquisition section 18 stops the voice recognition engine.
When the sound information after the conversion, the voice command ID, and the voice pattern are input to the comparison section 19 from the acquisition section 18, the comparison section 19 compares the sound information after the conversion and the voice pattern to one another. If the sound information after the conversion matches one of the one or more voice patterns, the comparison section 19 generates a comparison result including the voice command ID that corresponds to the matching voice pattern and indicating that the sound information after the conversion matches the voice pattern. If the sound information after the conversion does not match any of the one or more voice patterns, the comparison section 19 generates a comparison result indicating that the sound information after the conversion does not match the voice pattern. The comparison section 19 outputs the generated comparison result. That is, the comparison section 19 also serves as an output control section and transmits the generated comparison result to the terminal device 100 via the communication section 11.
In other words, the comparison section 19 determines whether or not the sound information after the conversion matches the filtering information. If the sound information after the conversion matches the filtering information, the comparison section 19 generates a comparison result including the voice command ID that corresponds to the matching voice pattern and indicating that the sound information after the conversion matches the filtering information, and transmits the generated comparison result to the terminal device 100. If the sound information after the conversion does not match the filtering information, the comparison section 19 generates a comparison result indicating that the sound information after the conversion does not match the filtering information and transmits the generated comparison result to the terminal device 100.
Also, if the generated comparison result is a comparison result indicating that the sound information after the conversion matches the filtering information, the comparison section 19 outputs, for example, a recognition sound to an earphone or the like (not illustrated). Furthermore, if the generated comparison result is a comparison result indicating that the sound information after the conversion does not match the filtering information, the comparison section 19 outputs, for example, voice saying “UNABLE TO RECOGNIZE” or the like to the earphone or the like (not illustrated). Note that the comparison section 19 may be configured so as not to output, if the generated comparison result is a comparison result indicating that the sound information after the conversion does not match the filtering information, a recognition sound or voice.
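Assuming the sound information after the conversion and the voice patterns can be compared as plain strings, the behavior of the comparison section 19 sketched above might be expressed as follows; the two feedback helpers are hypothetical stand-ins for the earphone output.

```python
def play_recognition_sound() -> None:
    print("(recognition sound)")   # stand-in for the earphone feedback

def play_unable_to_recognize() -> None:
    print("UNABLE TO RECOGNIZE")   # stand-in; may also be kept silent

def compare_and_output(converted_sound: str, patterns: dict) -> dict:
    """Compare the converted sound information to the voice patterns for the
    current screen and build the comparison result sent to the terminal."""
    for command_id, pattern in patterns.items():
        if converted_sound == pattern:
            play_recognition_sound()
            return {"match": True, "voice_command_id": command_id}
    play_unable_to_recognize()
    return {"match": False}

result = compare_and_output("menu display", {"cmd_001": "menu display"})
# -> {'match': True, 'voice_command_id': 'cmd_001'}
```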
With reference to
As illustrated in
Subsequently, a configuration of the terminal device 100 will be described. As illustrated in
The first communication section 110 is realized by, for example, a communication module such as a wireless LAN module. The first communication section 110 is a communication interface that is wirelessly coupled to the HMD 10 via, for example, Wi-Fi Direct (registered trademark) and conducts communication of information with the HMD 10. The first communication section 110 receives a comparison result from the HMD 10. The first communication section 110 outputs the received comparison result to the control section 130. Also, the first communication section 110 transmits the filtering information, the end information, the display screen, and the screen ID that have been input from the control section 130 to the HMD 10.
The second communication section 111 is realized by, for example, a communication module for a mobile phone line, such as a third generation mobile communication system or Long Term Evolution (LTE), a wireless LAN, or the like. The second communication section 111 is a communication interface that is wirelessly coupled to the server 200 via the network N and conducts communication of information with the server 200. The second communication section 111 transmits a data acquisition instruction and a filtering information acquisition instruction that have been input from the control section 130 to the server 200 via the network N. Also, the second communication section 111 receives the AR contents that correspond to the data acquisition instruction and the filtering information that corresponds to the filtering information acquisition instruction from the server 200 via the network N. The second communication section 111 outputs the AR contents and the filtering information that have been received to the control section 130.
The display operation section 112 serves as a display device that displays various types of information and also as an input device that receives various types of operations from a user. For example, the display operation section 112 is realized as the display device by a liquid crystal display or the like. Also, for example, the display operation section 112 is realized as the input device by a touch panel or the like. That is, the display operation section 112 is an integration of the display device and the input device. The display operation section 112 outputs an operation input by the user as operation information to the control section 130. Note that the display operation section 112 may be configured to display a screen similar to the display screen that is displayed on the HMD 10, or a screen different from it.
The storage section 120 is realized by, for example, a storage device such as a semiconductor memory device (for example, RAM or flash memory), a hard disk drive, or an optical disk. The storage section 120 includes a filtering information storage section 121 and a voice command storage section 122. Also, the storage section 120 stores information that is used for processing in the control section 130.
The filtering information storage section 121 stores the filtering information acquired from the server 200. Note that the filtering information storage section 121 has a similar configuration to that of the filtering information storage section 15 of the HMD 10 and the description thereof will be omitted.
The voice command storage section 122 stores a voice command ID and a voice command in association with one another.
“VOICE COMMAND ID” is an identifier that identifies the voice command. “VOICE COMMAND” is information that indicates a command, such as, for example, “MENU DISPLAY”, “SELECT NUMBER 1”, or the like.
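In other words, the voice command storage section 122 can be pictured as a simple map from voice command IDs to commands; the entries below are illustrative assumptions.

```python
# Hypothetical contents of the voice command storage section 122.
voice_command_storage_122 = {
    "cmd_001": "MENU DISPLAY",
    "cmd_002": "SELECT NUMBER 1",
}

print(voice_command_storage_122.get("cmd_001"))  # -> MENU DISPLAY
```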
Returning to the description of
The execution section 131 executes an AR app, that is, the AR middle 100a and the web app 100b. For example, when the power of the terminal device 100 is turned on, the execution section 131 starts transmitting a display screen to the HMD 10. The AR middle 100a instructs a startup of the web app 100b, for example, based on the operation information input by the user from the display operation section 112. When the filtering information is input to the AR middle 100a from the web app 100b, the AR middle 100a transmits the input filtering information to the HMD 10 via the first communication section 110. Also, the AR middle 100a transmits the display screen and the screen ID to the HMD 10 via the first communication section 110.
When the AR middle 100a receives a comparison result from the HMD 10 via the first communication section 110, the AR middle 100a executes processing in accordance with the comparison result. If the AR middle 100a receives a comparison result including the voice command ID and indicating that the sound information matches the voice pattern, the AR middle 100a refers to the voice command storage section 122 and determines whether or not the voice command that corresponds to the voice command ID is to be processed by the AR middle 100a. If the voice command is to be processed by the AR middle 100a, the AR middle 100a executes processing that corresponds to the voice command.
If the voice command is not to be processed by the AR middle 100a, the AR middle 100a outputs the voice command to the web app 100b. Note that the AR middle 100a may be configured, if the AR middle 100a receives a comparison result indicating that the sound information does not match any voice pattern, to cause a message indicating that it is unable to recognize voice to be displayed on the display screen and also not to perform any processing.
The AR middle 100a determines whether or not there is a screen transition for processing that corresponds to the voice command. If there is such a screen transition, the AR middle 100a transmits the screen ID of the display screen after the transition to the HMD 10 via the first communication section 110. If there is not such a screen transition, the AR middle 100a determines whether or not the web app 100b has ended.
If the web app 100b has not ended, the AR middle 100a stands by for receiving a comparison result from the HMD 10. If the web app 100b has ended, the AR middle 100a transmits end information to the HMD 10 via the first communication section 110.
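Taken together, the AR middle's handling of a received comparison result could be sketched as below. The handler callables and the screen transition map are hypothetical; they merely stand in for the processing and screens of an actual web app.

```python
# Hypothetical map: voice command ID -> screen ID after the transition.
transitions = {"cmd_001": "screen_002"}

def dispatch_comparison_result(result: dict, middleware_commands: dict,
                               run_web_app_command, send_screen_id) -> None:
    """Sketch of the AR middle's dispatch: execute the command itself if it
    owns the command, otherwise forward it to the web app; on a screen
    transition, notify the HMD of the new screen ID."""
    if not result.get("match"):
        return  # already filtered out on the HMD side; nothing to process
    command_id = result["voice_command_id"]
    if command_id in middleware_commands:
        print(f"AR middle executes: {middleware_commands[command_id]}")
    else:
        run_web_app_command(command_id)
    new_screen_id = transitions.get(command_id)
    if new_screen_id is not None:
        send_screen_id(new_screen_id)  # HMD restarts filtering for this screen

dispatch_comparison_result(
    {"match": True, "voice_command_id": "cmd_001"},
    {"cmd_001": "MENU DISPLAY"},
    run_web_app_command=lambda cid: print(f"web app handles {cid}"),
    send_screen_id=lambda sid: print(f"screen ID {sid} sent to HMD"),
)
```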
The web app 100b starts up in accordance with a startup instruction from the AR middle 100a. When the web app 100b starts up, the web app 100b transmits a data acquisition instruction and a filtering information acquisition instruction to the server 200 via the second communication section 111 and the network N. The web app 100b acquires the AR contents that correspond to the data acquisition instruction and the filtering information that corresponds to the filtering information acquisition instruction from the server 200 via the second communication section 111 and the network N.
The web app 100b generates a display screen including the AR contents in cooperation with the AR middle 100a and transmits the generated display screen to the HMD 10 via the first communication section 110 to cause the HMD 10 to display the generated display screen. Also, the web app 100b outputs the acquired filtering information to the AR middle 100a. If the voice command is input to the web app 100b from the AR middle 100a, the web app 100b executes processing that corresponds to the voice command.
Next, an operation of the voice input support system 1 according to an embodiment will be described. Each of
For example, when power is turned on by the user and reception of a display screen is started, the display control section 17 of the HMD 10 outputs a startup instruction for starting up the voice recognition engine to the acquisition section 18. When the startup instruction is input to the acquisition section 18 from the display control section 17, the acquisition section 18 starts up the voice recognition engine and starts acquiring sound information collected by the input section 12 (Step S11).
For example, when the power of the terminal device 100 is turned on, the execution section 131 of the terminal device 100 starts transmitting the display screen to the HMD 10. The AR middle 100a that is executed by the execution section 131 instructs a startup of the web app 100b, for example, based on the operation information that has been input by the user from the display operation section 112 (Step S12). The web app 100b starts up in accordance with the startup instruction from the AR middle 100a (Step S13). When the web app 100b starts up, the web app 100b transmits a data acquisition instruction and a filtering information acquisition instruction to the server 200. The web app 100b acquires the AR contents that correspond to the data acquisition instruction and the filtering information that corresponds to the filtering information acquisition instruction from the server 200 (Step S14).
If the filtering information is input to the AR middle 100a from the web app 100b, the AR middle 100a transmits the input filtering information to the HMD 10 (Step S15). When the display control section 17 of the HMD 10 receives the filtering information, the display control section 17 stores the received filtering information in the filtering information storage section 15 (Step S16).
Also, the AR middle 100a of the terminal device 100 transmits the display screen and the screen ID to the HMD 10 (Step S17). The display control section 17 of the HMD 10 receives the display screen and the screen ID from the terminal device 100 (Step S18). When the display control section 17 receives the display screen and the screen ID, the display control section 17 outputs the screen ID to the acquisition section 18 and also causes the display section 13 to display the display screen. The acquisition section 18 refers to the filtering information storage section 15 and starts filtering the acquired sound information using the filtering information (Step S19). The acquisition section 18 determines whether or not the sound information has been acquired (Step S20). If the sound information has been acquired (YES in Step S20), the acquisition section 18 converts the acquired sound information to sound information that may be compared to voice patterns stored in the filtering information storage section 15, using the voice recognition engine. That is, the acquisition section 18 recognizes the voice command (Step S21). If the sound information has not been acquired (NO in Step S20), the acquisition section 18 causes the process to proceed to Step S32.
When the screen ID is input to the acquisition section 18 from the display control section 17, the acquisition section 18 refers to the filtering information storage section 15 and acquires one or more voice command IDs and voice patterns associated with the screen ID. The acquisition section 18 outputs the sound information after the conversion, the voice command ID, and the voice pattern to the comparison section 19. When the sound information after the conversion, the voice command ID, and the voice pattern are input to the comparison section 19 from the acquisition section 18, the comparison section 19 determines whether or not the sound information after the conversion matches the voice pattern, that is, the filtering information (Step S22).
If the sound information after the conversion matches the filtering information (YES in Step S22), the comparison section 19 transmits a comparison result including the voice command ID that corresponds to the matching voice pattern and indicating that the sound information after the conversion matches the filtering information to the terminal device 100 (Step S23). If the sound information after the conversion does not match the filtering information (NO in Step S22), the comparison section 19 transmits a comparison result indicating that the sound information after the conversion does not match the filtering information to the terminal device 100 and causes the process to proceed to Step S32.
The AR middle 100a of the terminal device 100 receives the comparison result including the voice command ID and indicating that the sound information after the conversion matches the filtering information from the HMD 10 (Step S24). When the AR middle 100a receives the comparison result including the voice command ID and indicating that the sound information after the conversion matches the filtering information, the AR middle 100a refers to the voice command storage section 122 and determines whether or not the voice command that corresponds to the voice command ID is to be processed by the AR middle 100a (Step S25). If the voice command that corresponds to the voice command ID is to be processed by the AR middle 100a (YES in Step S25), the AR middle 100a executes processing that corresponds to the voice command (Step S26).
If the voice command that corresponds to the voice command ID is not to be processed by the AR middle 100a (NO in Step S25), the AR middle 100a outputs the voice command to the web app 100b (Step S27). When the voice command is input to the web app 100b from the AR middle 100a, the web app 100b executes processing that corresponds to the voice command (Step S28).
The AR middle 100a determines whether or not there is a screen transition for processing that corresponds to the voice command (Step S29). If there is a screen transition (YES in Step S29), the AR middle 100a causes the process to return to Step S17 and transmits the screen ID of the display screen after the transition to the HMD 10. If there is not a screen transition (NO in Step S29), the AR middle 100a determines whether or not the web app 100b has ended (Step S30).
If the web app 100b has not ended (NO in Step S30), the AR middle 100a causes the process to return to Step S24 and stands by for receiving a comparison result from the HMD 10. If the web app 100b has ended (YES in Step S30), the AR middle 100a transmits the end information to the HMD 10 (Step S31).
The display control section 17 of the HMD 10 determines whether or not the HMD 10 has received the end information from the terminal device 100 (Step S32). If the HMD 10 has not received the end information (NO in Step S32), the display control section 17 causes the process to return to Step S20. If the HMD 10 has received the end information (YES in Step S32), the display control section 17 outputs an end instruction to the acquisition section 18. When the end instruction is input to the acquisition section 18 from the display control section 17, the acquisition section 18 stops the voice recognition engine to end voice input processing. Thus, the HMD 10 and the terminal device 100 may increase voice recognition accuracy.
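Viewed end to end, Steps S20 through S32 on the HMD side amount to a loop of acquiring sound, recognizing it, comparing it against the filtering information, and transmitting the result. The compressed sketch below makes every dependency a hypothetical callable; it illustrates the flow rather than the actual implementation.

```python
def hmd_main_loop(acquire_sound, recognize, patterns_for, send_result,
                  end_received, screen_id):
    """Compressed sketch of Steps S20-S32: loop until end information arrives,
    filtering each recognized utterance through the current screen's patterns."""
    while not end_received():                    # Step S32
        sound = acquire_sound()                  # Step S20
        if sound is None:
            continue
        converted = recognize(sound)             # Step S21
        for command_id, pattern in patterns_for(screen_id).items():
            if converted == pattern:             # Step S22
                send_result({"match": True,
                             "voice_command_id": command_id})  # Step S23
                break
        else:
            send_result({"match": False})

# One-pass demo with stub callables (the loop ends on the second check).
ends = iter([False, True])
hmd_main_loop(
    acquire_sound=lambda: b"raw-audio",
    recognize=lambda s: "menu display",
    patterns_for=lambda sid: {"cmd_001": "menu display"},
    send_result=print,
    end_received=lambda: next(ends),
    screen_id="screen_001",
)
# -> {'match': True, 'voice_command_id': 'cmd_001'}
```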
Note that in the above-described embodiments, in the filtering information storage section 15, the screen ID management table 15a in which the screen ID and the filtering ID are associated with one another is used, but the filtering information storage section 15 is not limited thereto. For example, an app ID management table using, instead of “SCREEN ID”, “APP ID” that identifies the type of the web app 100b may be used.
Thus, when the HMD 10 acquires the sound information collected by the microphone, the HMD 10 refers to the storage section 14 that stores a plurality of voice patterns in association with image information and acquires a voice pattern associated with the image information displayed on the screen of a terminal. Also, the HMD 10 compares the acquired sound information and the acquired voice pattern to one another and outputs a comparison result. As a result, voice recognition accuracy may be increased.
Also, when the HMD 10 acquires the sound information collected by the microphone, the HMD 10 refers to the storage section 14 that stores each of the plurality of voice patterns in association with the corresponding app type and acquires the voice pattern associated with the app type displayed on the screen of the terminal. Also, the HMD 10 compares the acquired sound information and the acquired voice pattern to one another and outputs a comparison result. As a result, voice recognition accuracy may be increased.
The HMD 10 and the terminal device 100 further refer to the storage section 120 that stores each of the plurality of voice commands in association with the corresponding voice pattern and display a voice command, among the plurality of voice commands, which is associated with the acquired voice pattern, on the screen of the terminal. As a result, the user is able to check the input voice command.
The HMD 10 acquires the plurality of voice patterns and the image information or the plurality of voice patterns and the app type from the terminal device 100 and stores the plurality of voice patterns and the image information or the plurality of voice patterns and the app type in the storage section 14. As a result, a result of voice recognition may be filtered in accordance with the image information or the app type.
The HMD 10 includes a microphone, a display, and a storage section 14 that stores a voice pattern in association with each piece of image information displayed on the display. Also, the HMD 10 includes a control section that, when sound information collected by the microphone is acquired, refers to the storage section 14, acquires the voice pattern associated with the image information displayed on the display, and outputs a result of comparison between the acquired sound information and the acquired voice pattern. As a result, voice recognition accuracy may be increased.
Note that, in the above-described embodiments, the terminal device 100 and the HMD 10 have been described as a terminal device and an HMD that are worn by a user, but are not limited thereto. For example, voice recognition may be performed by the terminal device 100, which is, for example, a smartphone, without using the HMD 10.
Each component element of each section illustrated in the drawings may not be physically configured as illustrated in the drawings. That is, specific forms of distribution and integration of each section are not limited to those illustrated in the drawings, and all or some of the sections may be functionally or physically distributed or integrated in arbitrary units in accordance with various loads, use conditions, and the like. For example, the acquisition section 18 and the comparison section 19 may be integrated. Also, the order of the respective steps illustrated in the drawings is not limited to the above-described order and, to the extent that there is no contradiction, the respective steps may be performed simultaneously or in a different order.
Furthermore, the whole or a part of each processing function performed by each unit may be executed on a CPU (or a microcomputer, such as an MPU, a micro control unit (MCU), or the like). Needless to say, the whole or a part of each processing function may be realized by a program that is analyzed and executed by a CPU (or a microcomputer, such as an MPU, an MCU, or the like), or as hardware using wired logic.
Incidentally, various types of processing described in the above-described embodiments may be realized by causing a computer to execute a program prepared in advance. Therefore, an example of a computer that executes a program having similar functions to those described in the above-described embodiments will be described below.
As illustrated in
A voice input support program having a similar function to that of each of the processing sections of the display control section 17, the acquisition section 18, and the comparison section 19 illustrated in
The CPU 301 reads each of the programs stored in the flash memory 308, expands the program in the RAM 307, and then executes the program, thereby performing various types of processing. The programs may cause the computer 300 to function as the display control section 17, the acquisition section 18, and the comparison section 19 illustrated in
Note that the above-described voice input support program is not necessarily stored in the flash memory 308. For example, the computer 300 may read and execute a program stored in a storage medium readable by the computer 300. The storage medium readable by the computer 300 corresponds to, for example, a portable recording medium, such as a CD-ROM, a DVD disk, or universal serial bus (USB) memory, a semiconductor memory such as flash memory, a hard disk drive, or the like. As another option, the voice input support program may be stored in advance in a unit coupled to a public line, the Internet, a LAN, or the like, and the computer 300 may read the voice input support program from the unit and execute it.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.