This application is based on and claims priority under 35 USC 119 from Japanese Patent Application No. 2011-283452 filed Dec. 26, 2011.
The present invention relates to a voice analyzer.
According to an aspect of the invention, there is provided a voice analyzer including: an apparatus body; a strap that is connected to the apparatus body and is used to hang the apparatus body from a neck of a user; a first voice acquisition unit that is provided in the strap or the apparatus body in order to acquire a voice; a second voice acquisition unit that acquires a voice and is provided at a position where a distance of a sound wave propagation path from a mouth of the user to the second voice acquisition unit is smaller than a distance of a sound wave propagation path from the mouth of the user to the first voice acquisition unit when the strap is hung on the neck of the user; and an identification unit that identifies, on the basis of a result of comparison between first sound pressure that is sound pressure of a voice acquired by the first voice acquisition unit and second sound pressure that is sound pressure of a voice acquired by the second voice acquisition unit, a sound in which the first sound pressure is larger than the second sound pressure by a predetermined value or more.
Exemplary embodiments of the present invention will be described in detail based on the following figures, wherein:
Hereinafter, an exemplary embodiment of the invention will be described in detail with reference to the accompanying drawings.
Example of the System Configuration
As shown in
The terminal apparatus 10 includes, as voice acquisition units, at least a pair of microphones (first and second microphones 11 and 12) and a pair of amplifiers (first and second amplifiers 13 and 14). In addition, the terminal apparatus 10 includes a voice analysis unit 15 that analyzes an acquired voice and a data transmission unit 16 that transmits an analysis result to the host apparatus 20, and further includes a power supply unit 17.
The first and second microphones 11 and 12 are disposed at different positions, at which the distances of the sound wave propagation paths from the mouth (speaking portion) of the wearer (hereinafter, simply referred to as “distances”) differ. Here, it is assumed that the first microphone 11 is disposed at a position far from the mouth (speaking portion) of the wearer (for example, about 35 cm away) and the second microphone 12 is disposed at a position near the mouth (speaking portion) of the wearer (for example, about 10 cm away). Various types of known microphones, such as dynamic microphones and condenser microphones, may be used as the first and second microphones 11 and 12 in the present exemplary embodiment. In particular, it is preferable to use omnidirectional MEMS (Micro Electro Mechanical Systems) microphones.
The first and second amplifiers 13 and 14 amplify electric signals (voice signals) that the first and second microphones 11 and 12 output according to the acquired voice. Known operational amplifiers or the like may be used as the first and second amplifiers 13 and 14 in the present exemplary embodiment.
The voice analysis unit 15 analyzes the voice signals output from the first and second amplifiers 13 and 14. In addition, the voice analysis unit 15 determines whether the voice acquired by the first and second microphones 11 and 12 is a voice from the wearer, who wears the terminal apparatus 10, or voices from others. That is, the voice analysis unit 15 functions as an identification unit that identifies a speaker of the voice on the basis of voices acquired by the first and second microphones 11 and 12. Details of specific processing for identification of a speaker will be described later.
The data transmission unit 16 transmits the acquired data, including the analysis result of the voice analysis unit 15 and the ID of the terminal apparatus 10, to the host apparatus 20 through the wireless communication line. Depending on the processing performed in the host apparatus 20, the transmitted information may include, in addition to the analysis result, information such as the voice acquisition times and the sound pressures of the voices acquired by the first and second microphones 11 and 12. In addition, a data storage unit that stores the analysis result of the voice analysis unit 15 may be provided in the terminal apparatus 10, and data stored for a certain period of time may be transmitted collectively. The data may also be transmitted through a cable line.
The power supply unit 17 supplies electric power to the first and second microphones 11 and 12, the first and second amplifiers 13 and 14, the voice analysis unit 15, and the data transmission unit 16. As a power supply, it is possible to use known power supplies, such as a dry battery and a rechargeable battery, for example. In addition, the power supply unit 17 includes known circuits, such as a voltage conversion circuit and a charging control circuit, when necessary.
The host apparatus 20 includes a data receiving unit 21 that receives the data transmitted from the terminal apparatus 10, a data storage unit 22 that stores the received data, a data analysis unit 23 that analyzes the stored data, and an output unit 24 that outputs an analysis result. The host apparatus 20 is realized by an information processing apparatus, such as a personal computer, for example. Moreover, in the present exemplary embodiment, the plural terminal apparatuses 10 are used as described above, and the host apparatus 20 receives the data from each of the plural terminal apparatuses 10.
The data receiving unit 21 corresponds to the wireless communication line described above, receives the data from each terminal apparatus 10, and passes it to the data storage unit 22. The data storage unit 22 is realized by a storage device, such as a magnetic disk of a personal computer, for example, and stores the received data acquired from the data receiving unit 21 for each speaker. Here, a speaker is identified by collating the terminal ID transmitted from the terminal apparatus 10 against the speaker names and terminal IDs registered in the host apparatus 20 in advance. In addition, instead of the terminal ID, the wearer's name may be transmitted from the terminal apparatus 10.
The data analysis unit 23 is realized by a program-controlled CPU of a personal computer, for example, and analyzes the data stored in the data storage unit 22. As the specific analysis content and analysis method, various kinds of content and methods may be adopted depending on the purpose or aspect of use of the system according to the present exemplary embodiment. For example, the frequency of conversation between wearers of the terminal apparatuses 10 or the tendency of each wearer's conversation partners may be analyzed, or the relationship of the speakers in a conversation may be estimated from information regarding the length or sound pressure of each voice in the conversation.
The output unit 24 outputs an analysis result of the data analysis unit 23 or performs output based on the analysis result. As the output unit, various kinds of units including display of a display device, printout using a printer, and voice output may be adopted according to the purpose or aspect of use of the system, the content or format of an analysis result, and the like.
Example of the Configuration of a Terminal Apparatus
As described above, the terminal apparatus 10 is used in a state worn by each user. The terminal apparatus 10 in the present exemplary embodiment is configured to include an apparatus body 30 and a strap 40 connected to the apparatus body 30 so that the user can wear the terminal apparatus 10, as shown in
The apparatus body 30 is configured such that at least the circuits realizing the first and second amplifiers 13 and 14, the voice analysis unit 15, the data transmission unit 16, and the power supply unit 17, together with the power supply (battery) of the power supply unit 17, are housed in a thin rectangular parallelepiped case 31 formed of metal, resin, or the like. A pocket into which an ID card displaying ID information such as the name or team of the wearer is inserted may be provided in the case 31. In addition, such ID information may be printed on the surface of the case 31, or a sticker on which the ID information is described may be attached to the surface of the case 31.
The first and second microphones 11 and 12 are provided in the strap 40 (hereinafter, referred to as the microphones 11 and 12 when they are not distinguished from each other). The microphones 11 and 12 are connected to the first and second amplifiers 13 and 14 housed in the apparatus body 30 by cables (electric wires or the like) passing through the inside of the strap 40. As materials of the strap 40, it is possible to use various known materials, such as leather, synthetic leather, cotton and other natural fibers, synthetic fibers using resin, and metal. In addition, coating processing using silicone resin, fluorine resin, or the like may be performed.
The strap 40 has a cylindrical structure, and the microphones 11 and 12 are housed inside the strap 40. Providing the microphones 11 and 12 inside the strap 40 prevents damage to or contamination of the microphones 11 and 12 and makes participants in a dialogue less aware of their existence. In addition, the first microphone 11, which is disposed at the position far from the mouth (speaking portion) of the wearer, may be provided in the apparatus body 30 so as to be housed in the case 31. In the present exemplary embodiment, a case where the first microphone 11 is provided in the strap 40 will be described as an example.
Referring to
The second microphone 12, which is an example of the second voice acquisition unit, is provided at a position distant from the end of the strap 40 connected to the apparatus body 30 (for example, about 25 cm to 35 cm from the center of the apparatus body 30). Accordingly, in a state where the wearer wears the strap 40 on the neck so that the apparatus body 30 is hung from the neck, the second microphone 12 is located on the neck of the wearer (for example, at a position near the collarbone) and is disposed at a position distant from the mouth (speaking portion) of the wearer by about 10 cm to 20 cm.
In addition, the terminal apparatus 10 in the present exemplary embodiment is not limited to the configuration shown in
In addition, the configuration of the apparatus body 30 is not limited to the configuration shown in
In addition, the microphones 11 and 12 and the apparatus body 30 (or the voice analysis unit 15) may be wirelessly connected to each other instead of being connected using a cable. Although the first and second amplifiers 13 and 14, the voice analysis unit 15, the data transmission unit 16, and the power supply unit 17 are housed in the single case 31 in the above example of the configuration, they may be grouped into plural parts. For example, the power supply unit 17 may be connected to an external power supply without being housed in the case 31.
Identification of a Speaker (Wearer and Others) Based on Non-Linguistic Information of the Acquired Voice
Next, a method of identifying a speaker in the present exemplary embodiment will be described.
The system according to the present exemplary embodiment distinguishes between the voice of the wearer of the terminal apparatus 10 and the voices of others using the voice information acquired by the two microphones 11 and 12 provided in the terminal apparatus 10. In other words, in the present exemplary embodiment, it is determined whether the speaker of the acquired voice is the wearer or another person. In addition, in the present exemplary embodiment, speaker identification is performed on the basis of non-linguistic information, such as sound pressure (the volume input to the microphones 11 and 12), rather than linguistic information obtained through morphological analysis or dictionary lookup of the acquired voice. That is, the speaker of the voice is identified from the speaking situation specified by the non-linguistic information instead of the content of speaking specified by the linguistic information.
As described with reference to
On the other hand, assuming that the mouth (speaking portion) of a person other than the wearer (another person) is the sound source, the distance between the first microphone 11 and the sound source and the distance between the second microphone 12 and the sound source do not differ greatly, since the other person is separated from the wearer. Although there may be some difference between the two distances depending on the position of the other person with respect to the wearer, the distance between the first microphone 11 and the sound source is not several times the distance between the second microphone 12 and the sound source, as it is when the mouth (speaking portion) of the wearer is the sound source. Therefore, for the voice of another person, the sound pressure of the acquired voice in the first microphone 11 does not differ greatly from the sound pressure of the acquired voice in the second microphone 12, unlike the case of the voice of the wearer.
In the relationship shown in
La1 > La2 (La1 ≅ 1.5×La2 to 4×La2)
Lb1 ≅ Lb2
As described above, the sound pressure attenuates as the distance between each of the microphones 11 and 12 and the sound source increases. In
As described with reference to
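The first-threshold discrimination described above can be sketched as follows. This is a minimal illustration rather than the claimed implementation; the threshold value of 2.0 is a hypothetical choice lying between the ratio of about 1 expected for another person's voice and the several-fold ratio expected for the wearer's voice.

```python
# Hypothetical first threshold for illustration only: any value between
# the ratio of ~1 observed for another person's voice and the ratio of
# roughly 2 to 4 observed for the wearer's voice would serve.
FIRST_THRESHOLD = 2.0

def is_wearer_voice(pressure_mic2, pressure_mic1):
    """Judge the speaker from the sound pressure ratio Ga2/Ga1.

    pressure_mic2 is the sound pressure at the second microphone
    (near the mouth); pressure_mic1 is that at the first microphone
    (far from the mouth). For the wearer the ratio is large because
    the near microphone is several times closer to the mouth; for
    another person both distances are comparable and the ratio is
    close to 1.
    """
    return (pressure_mic2 / pressure_mic1) > FIRST_THRESHOLD
```

Because the decision depends only on the ratio, it is insensitive to the absolute loudness of the speaker.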
Identification of the Acquired Voice Including the Collision Sound
As described above, the user of the terminal apparatus 10 wears the strap 40 on the neck so that the apparatus body 30 is hung from the neck. When the user moves while wearing the terminal apparatus 10 on the neck, the terminal apparatus 10 shakes and, accordingly, the apparatus body 30 of the terminal apparatus 10 may collide with other members. When the apparatus body 30 collides with another member, a collision sound is generated. For example, a collision sound is generated when the apparatus body 30 is hit by a part of the body of the user, by a desk, or by an ID card or mobile phone other than the terminal apparatus 10 hung from the neck of the user. This collision sound, together with the voice of the wearer or the voices of others, is acquired by the microphones 11 and 12.
When the microphones 11 and 12 acquire the collision sound generated when the apparatus body 30 collides with another member, the voice of the wearer in the acquired voices may be erroneously recognized as a voice of another person.
Hereinafter, the relationship between the acquisition of the collision sound and the erroneous recognition of the speaking of the wearer as the speaking of another person will be described.
In the terminal apparatus 10 of the present exemplary embodiment, the sound pressure of the collision sound acquired by the first microphone 11 is larger than that of the collision sound acquired by the second microphone 12. In addition, the collision sound lasts for a short time (for example, about 0.3 ms) compared with a voice.
For example, in
Moreover, in
Now, a case where the collision sound acquired by the first microphone 11 becomes larger than the collision sound acquired by the second microphone 12 will be described in more detail.
In the relationship shown in
Ls1 < Ls2 (Ls2 ≅ 2.5×Ls1 to 3.5×Ls1)
In addition, when the first microphone 11 is provided in the apparatus body 30, the distance Ls1 is further reduced.
As described above, the sound pressure attenuates as the distance between each of the microphones 11 and 12 and the sound source increases. In
As shown in
In addition, since the wearer often gestures while speaking, a collision sound caused by the apparatus body 30 is generated more easily. Accordingly, in this case, the frequency with which a section in which the wearer speaks is erroneously determined to be a section in which another person speaks increases.
In the present exemplary embodiment, therefore, it is determined whether or not the acquired voice includes collision sound by adopting the following configuration, so that the influence of the collision sound on distinction between the voice of the wearer and the voices of others is suppressed. Specifically, in the present exemplary embodiment, a threshold value (second threshold value) of the ratio between the sound pressure of the second microphone 12 and the sound pressure of the first microphone 11 is set.
This uses the fact that the ratio between the sound pressure of the second microphone 12 and the sound pressure of the first microphone 11 tends to be different between the acquired voice including the collision sound and the acquired voice not including the collision sound.
More specifically, as described with reference to
Therefore, an appropriate value between the sound pressure ratio for the voices of others and the sound pressure ratio for the acquired voice when the collision sound is generated is set as the second threshold value. A voice with a sound pressure ratio smaller than the second threshold value is determined to be an acquired voice including the collision sound, and a voice with a sound pressure ratio larger than the second threshold value is determined to be an acquired voice not including the collision sound. In addition, in the present exemplary embodiment, when an acquired voice is determined to include the collision sound, distinction between the voice of the wearer and the voices of others is not performed.
In the example shown in
In addition, the first and second threshold values are just examples, and may be changed according to the environment where the system of the present exemplary embodiment is used.
Incidentally, not only the voice and the collision sound but also the sound of the environment in which the terminal apparatus 10 is used (environmental sound), such as the operating sound of air conditioning and footsteps associated with the walking of the wearer, is included in the voices acquired by the microphones 11 and 12. The relationship of the distance between the sound source of this environmental sound and each of the microphones 11 and 12 is similar to that in the case of the voices of others. That is, according to the example shown in
Example of an Operation of a Terminal Apparatus
As shown in
The voice analysis unit 15 performs filtering processing on the signal amplified by each of the first and second amplifiers 13 and 14 to remove environmental sound components from the signal (step 1003). Then, for the signal from which the noise components have been removed, the voice analysis unit 15 calculates the average sound pressure of the voice acquired by each of the microphones 11 and 12 every fixed time unit (for example, every few hundredths of a second to a few tenths of a second) (step 1004).
When the average sound pressure calculated in step 1004 shows a gain in each of the microphones 11 and 12 (Yes in step 1005), the voice analysis unit 15 determines that there is a voice (speaking has occurred). Then, the voice analysis unit 15 calculates the ratio (sound pressure ratio) between the average sound pressure in the first microphone 11 and the average sound pressure in the second microphone 12 (step 1006).
Then, when the sound pressure ratio calculated in step 1006 is larger than the first threshold value (Yes in step 1007), the voice analysis unit 15 determines that the voice is from the wearer (step 1008). When the sound pressure ratio calculated in step 1006 is smaller than the first threshold value (No in step 1007) but larger than the second threshold value (Yes in step 1009), the voice analysis unit 15 determines that the voice is from another person (step 1010). When the sound pressure ratio calculated in step 1006 is smaller than the first threshold value (No in step 1007) and also smaller than the second threshold value (No in step 1009), the voice analysis unit 15 determines that the acquired sound includes a collision sound. That is, the voice analysis unit 15 recognizes the acquired sound including the collision sound as noise. In addition, in the present exemplary embodiment, when the acquired sound is determined to include the collision sound, the voice analysis unit 15 does not distinguish between the voice of the wearer and the voices of others, as described above.
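The flow of steps 1004 to 1011 can be summarized in the following sketch. It is a minimal illustration rather than the implementation of the voice analysis unit 15: the root mean square of each frame is used as a stand-in for the average sound pressure, and the threshold values and the gain floor are hypothetical choices.

```python
import math

# Hypothetical threshold values for illustration; the document notes that
# the thresholds are set according to the environment of use.
FIRST_THRESHOLD = 2.0    # wearer vs. others
SECOND_THRESHOLD = 0.4   # others vs. collision sound (ratio below -> collision)

def classify_frame(samples_mic1, samples_mic2, gain_floor=1e-3):
    """Classify one fixed-time unit of filtered signal.

    samples_mic1 / samples_mic2 are the filtered samples from the first
    (far-from-mouth) and second (near-mouth) microphones. Returns one of
    'silence', 'wearer', 'other', 'collision'.
    """
    # Step 1004: average sound pressure per frame (RMS as a stand-in).
    p1 = math.sqrt(sum(s * s for s in samples_mic1) / len(samples_mic1))
    p2 = math.sqrt(sum(s * s for s in samples_mic2) / len(samples_mic2))

    # Step 1005: no gain in either microphone -> no voice (step 1011).
    if p1 < gain_floor or p2 < gain_floor:
        return 'silence'

    # Step 1006: sound pressure ratio of the second mic to the first mic.
    ratio = p2 / p1

    # Steps 1007-1010.
    if ratio > FIRST_THRESHOLD:
        return 'wearer'
    if ratio > SECOND_THRESHOLD:
        return 'other'
    # Ratio below the second threshold: the frame includes a collision
    # sound, so the speaker is not identified and it is treated as noise.
    return 'collision'
```

A usage example: a frame whose near-mouth pressure is five times the far-mouth pressure is classified as the wearer, while a frame dominated by the collision sound at the first microphone falls below the second threshold and is rejected.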
In addition, when the average sound pressure calculated in step 1004 shows no gain in either of the microphones 11 and 12 (No in step 1005), the voice analysis unit 15 determines that there is no voice (speaking has not occurred) (step 1011).
Then, the voice analysis unit (identification unit) 15 transmits the information (information regarding whether or not there is a voice and information regarding a speaker) obtained by the processing in steps 1004 to 1011, as an analysis result, to the host apparatus 20 through the data transmission unit 16 (step 1012). The length of speaking time of each speaker (wearer or another person), the value of the gain of average sound pressure, and other additional information items may be transmitted to the host apparatus 20 together with the analysis result. In this case, when No is determined in step 1009, that is, when it is determined that the acquired voice includes collision sound, the voice analysis unit 15 transmits the analysis result without identifying the speaker.
In the present exemplary embodiment, determination regarding whether a voice is from the wearer or from another person is performed by comparing the sound pressure at the first microphone 11 with the sound pressure at the second microphone 12. However, speaker identification is not limited to the comparison of sound pressure; any method may be used as long as it is performed on the basis of non-linguistic information extracted from the voice signals themselves acquired by the microphones 11 and 12.
For example, it is also possible to compare the voice acquisition time (output time of a voice signal) in the first microphone 11 with the voice acquisition time in the second microphone 12.
In this case, since there is a large difference between the distance from the mouth (speaking portion) of the wearer to the first microphone 11 and the distance from the mouth (speaking portion) of the wearer to the second microphone 12, a certain difference in voice acquisition time occurs for the voice of the wearer. On the other hand, since there is a small difference between the distance from the mouth (speaking portion) of another person to the first microphone 11 and the distance from the mouth (speaking portion) of another person to the second microphone 12, the difference in voice acquisition time for the voice of another person is even smaller than in the case of the voice of the wearer. Therefore, it is possible to set a first threshold value for the difference in voice acquisition time, and to determine that the voice is from the wearer when the time difference is larger than the first threshold value and that the voice is from another person when the time difference is smaller than the first threshold value.
Moreover, when the voice acquisition time in the first microphone 11 is compared with the voice acquisition time in the second microphone 12, a certain difference (time difference) in voice acquisition time occurs for an acquired voice including the collision sound, because the difference between the distance from the apparatus body 30, which generates the collision sound, to the first microphone 11 and the distance from the apparatus body 30 to the second microphone 12 is large. More specifically, the voice acquisition time of the first microphone 11 is earlier than that of the second microphone 12. On the other hand, in the case of the voice of the wearer or the voices of others not including the collision sound, the voice acquisition time of the first microphone 11 is later than, or almost the same as, that of the second microphone 12. Therefore, it is possible to set a second threshold value for the difference in voice acquisition time, and to determine that a voice whose time difference is smaller than the second threshold value is an acquired voice including the collision sound and that a voice whose time difference is larger than the second threshold value is an acquired voice not including the collision sound.
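The arrival-time variant described above can be sketched as follows, under the assumption that a positive time difference means the second (near-mouth) microphone received the sound first. The numeric thresholds are hypothetical values derived from the distances given in the text, not values specified by the document.

```python
SPEED_OF_SOUND = 343.0  # m/s, at room temperature

# Hypothetical thresholds (in seconds) for illustration: the wearer's
# mouth is roughly 15-25 cm closer to the second microphone, so the first
# microphone hears the wearer's voice up to ~0.7 ms later; another
# person's voice arrives at both microphones at nearly the same time; a
# collision sound from the apparatus body reaches the first microphone
# first (a negative difference).
FIRST_TIME_THRESHOLD = 0.25 / SPEED_OF_SOUND / 2   # ~0.36 ms
SECOND_TIME_THRESHOLD = 0.0

def classify_by_arrival_time(t_mic1, t_mic2):
    """Classify by the difference in voice acquisition time.

    t_mic1 / t_mic2 are the times at which the first / second
    microphones acquired the sound; dt > 0 means the second
    (near-mouth) microphone heard the sound first.
    """
    dt = t_mic1 - t_mic2
    if dt < SECOND_TIME_THRESHOLD:
        return 'collision'   # the first microphone heard the sound first
    if dt > FIRST_TIME_THRESHOLD:
        return 'wearer'      # large positive lag: source near the mouth
    return 'other'           # near-simultaneous arrival: distant source
```

Unlike the sound pressure comparison, this variant requires timing resolution on the order of a tenth of a millisecond, which is a design trade-off rather than something the text prescribes.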
Operation Example of the Voice Analysis Unit 15 That Acquires the Voice Including the Collision Sound
Here, an operation example of the voice analysis unit 15 when the voice including the collision sound is acquired will be described.
First, for comparison with the system of the present exemplary embodiment, the case where the voice analysis unit 15 does not determine whether or not the acquired voice includes the collision sound will be described. In this case, when the voice analysis unit 15 analyzes the voice acquired when the collision sound is generated in a section in which the wearer speaks, the analysis result is shown in
On the other hand, when the voice analysis unit 15 of the present exemplary embodiment determines whether or not the acquired voice includes the collision sound, the analysis result is shown in
Application Example of a System and Functions of a Host Apparatus
In the system according to the present exemplary embodiment, the information regarding speaking (hereinafter, referred to as speaking information) obtained as described above by the plural terminal apparatuses 10 is collected in the host apparatus 20. Using the information acquired from the plural terminal apparatuses 10, the host apparatus 20 performs various analyses according to the purpose or aspect of use of the system. Hereinafter, an example will be described in which the present exemplary embodiment is used as a system that acquires information regarding the communication of plural wearers.
As shown in
The speaking information is separately transmitted from the terminal apparatuses 10A and 10B to the host apparatus 20. In this case, the identification results of the speaker (wearer and another person) in the speaking information acquired from the terminal apparatus 10A and in the speaking information acquired from the terminal apparatus 10B are opposite, as shown in
In this application example, the host apparatus 20 includes a conversation information detecting section 201 that detects, from among the speaking information items acquired from the terminal apparatuses 10, the speaking information (hereinafter, referred to as conversation information) from the terminal apparatuses 10 of the wearers in conversation, and a conversation information analysis section 202 that analyzes the detected conversation information. The conversation information detecting section 201 and the conversation information analysis section 202 are realized as functions of the data analysis unit 23.
Also from the terminal apparatus 10 other than the terminal apparatuses 10A and 10B, the speaking information is transmitted to the host apparatus 20. The speaking information from each terminal apparatus 10 which is received by the data receiving unit 21 is stored in the data storage unit 22. In addition, the conversation information detecting section 201 of the data analysis unit 23 reads the speaking information of each terminal apparatus 10 stored in the data storage unit 22 and detects the conversation information which is the speaking information related to a specific conversation.
As shown in
In addition, the conditions required when the conversation information detecting section 201 detects the conversation information related to a specific conversation from the speaking information of the plural terminal apparatuses 10 are not limited to the relationship shown in
In addition, although the case where two wearers each wearing the terminal apparatus 10 have a conversation is shown in the above example, the number of persons participating in a conversation is not limited to two. When three or more wearers have a conversation, the terminal apparatus 10 worn by each wearer recognizes the voice of its own wearer as the voice of the wearer and distinguishes it from the voices of the others (two or more persons). However, the information showing the speaking situation, such as speaking time or speaker change timing, is similar among the information acquired by each terminal apparatus 10. Therefore, as in the case where two persons have a conversation, the conversation information detecting section 201 detects the speaking information acquired from the terminal apparatuses 10 of the wearers participating in the same conversation and distinguishes it from the speaking information acquired from the terminal apparatuses 10 of wearers not participating in the conversation.
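One possible sketch of the detection performed by the conversation information detecting section 201 follows. The text specifies only that speaking situations such as speaking time and speaker change timing are compared across terminals; the frame-by-frame label matching and the `min_agreement` parameter here are hypothetical illustrations of that idea.

```python
def in_same_conversation(labels_a, labels_b, min_agreement=0.8):
    """Heuristic sketch: two terminals share a conversation when their
    per-frame speaker labels are (approximately) opposite.

    labels_a / labels_b are equal-length sequences over the same time
    frames with values 'wearer', 'other', or 'silence'. When terminal A
    hears its wearer speak, terminal B should hear another person speak,
    and silent frames should coincide. min_agreement is a hypothetical
    tunable parameter.
    """
    opposite = {'wearer': 'other', 'other': 'wearer', 'silence': 'silence'}
    matches = sum(1 for a, b in zip(labels_a, labels_b) if opposite[a] == b)
    return matches / len(labels_a) >= min_agreement
```

The mirrored-label idea generalizes to three or more participants: each terminal labels its own wearer as 'wearer', but the overall pattern of speaking and silence remains similar across the terminals of the same conversation.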
Then, the conversation information analysis section 202 analyzes the conversation information detected by the conversation information detecting section 201 and extracts the features of the conversation. In the present exemplary embodiment, as a specific example, the features of the conversation are extracted on the basis of three criteria for evaluation: the degree of interaction, the degree of listening, and the degree of conversation activity. Here, the degree of interaction is assumed to indicate the balance of the speaking frequencies of the conversation participants, the degree of listening is assumed to indicate the degree to which each conversation participant listens to others, and the degree of conversation activity is assumed to indicate the density of speaking in the entire conversation.
The degree of interaction is specified by the number of speaker changes during the conversation and the variation in the time taken until a speaker change occurs (the time for which one speaker speaks continuously). This may be obtained from the number of speaker changes and the times at which the speaker changes occur in the conversation information over a fixed period. In addition, it is assumed that the value (level) of the degree of interaction increases as the number of speaker changes increases, that is, as the variation in the continuous speaking time of each speaker decreases. This criterion for evaluation is common to all conversation information items (the speaking information of each terminal apparatus 10) related to the same conversation.
The degree of listening is specified by the ratio of speaking time of each conversation participant and speaking time of others in conversation information. For example, in the following expression, it is assumed that the value (level) of the degree of listening increases as the value of speaking time of others increases.
Degree of listening = (speaking time of others) / (speaking time of wearer)
This criterion for evaluation differs with the speaking information acquired from the terminal apparatus 10 of each conversation participant even in the conversation information related to the same conversation.
The degree of conversation activity is an index showing the so-called excitement of the conversation, and is specified by the ratio of silence time (time for which no conversation participant speaks) to the total conversation time. It is assumed that the value (level) of the degree of conversation activity increases as the total silence time becomes shorter (which means that some conversation participant is speaking for most of the conversation). This criterion for evaluation is common to all conversation information items (the speaking information of each terminal apparatus 10) related to the same conversation.
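The three criteria for evaluation can be sketched as follows. The degree-of-listening function follows the expression given above; the formulas for the degree of interaction and the degree of conversation activity are hypothetical combinations of the quantities the text names, since no exact expressions are given.

```python
import statistics

def degree_of_listening(wearer_time, others_time):
    """Degree of listening = (speaking time of others) / (speaking time
    of the wearer), per the expression in the text."""
    return others_time / wearer_time

def degree_of_conversation_activity(silence_time, total_time):
    """Hypothetical formula: higher when the total silence time is a
    smaller fraction of the total conversation time."""
    return 1.0 - silence_time / total_time

def degree_of_interaction(continuous_speaking_times):
    """Hypothetical formula combining the quantities named in the text:
    more speaker changes and smaller variation in continuous speaking
    time yield a higher degree.

    continuous_speaking_times lists the lengths of each uninterrupted
    speaking run; each boundary between runs is one speaker change.
    """
    n_changes = len(continuous_speaking_times) - 1
    spread = statistics.pstdev(continuous_speaking_times)
    return n_changes / (1.0 + spread)
```

Note that the degree of listening differs per participant's terminal, while the other two degrees are common to all terminals in the same conversation, matching the criteria described above.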
As described above, the features of a conversation related to the conversation information are extracted by the conversation information analysis performed by the conversation information analysis section 202. In addition, the way in which each participant takes part in the conversation is specified by performing the analysis as described above. The above criteria for evaluation are merely examples showing the features of a conversation, and criteria for evaluation suited to the purpose or aspect of use of the system of the present exemplary embodiment may be set by adopting other evaluation criteria or by giving a weight to each item.
The tendency of communication in a group to which the wearer of the terminal apparatus 10 belongs may be analyzed by performing the above analysis for the various kinds of conversation information detected by the conversation information detecting section 201 among the speaking information items stored in the data storage unit 22. Specifically, the tendencies of conversation in a wearer's group may be determined by checking, for example, the number of conversation participants, the conversation time, the correlation between values such as the degree of interaction and the degree of conversation activity, and the occurrence frequency of conversations.
In addition, the communication tendency of each wearer may be analyzed by performing the analysis as described above for plural conversation information items of a specific wearer. The way in which a specific wearer takes part in a conversation may show a certain tendency according to conditions such as the conversation partner or the number of conversation participants. Therefore, it may be expected that features such as "the speaking level increases in a conversation with a specific partner" or "the degree of listening increases as the number of conversation participants increases" are detected by examining the plural conversation information items of a specific wearer.
In addition, the speaking information identification processing and the conversation information analysis processing described above merely illustrate an application example of the system according to the present exemplary embodiment, and do not limit the purpose or aspect of use of the system according to the present exemplary embodiment, the function of the host apparatus 20, and the like. A processing function for executing various kinds of analyses and examinations of the speaking information acquired by the terminal apparatus 10 according to the present exemplary embodiment may be realized as a function of the host apparatus 20.
In the above explanation, the voice analysis unit 15 determines whether the acquired voice is the voice of the wearer or the voice of another person and then determines whether or not the acquired voice includes the collision sound. However, the invention is not limited to this order, as long as a configuration is adopted in which both determinations are made, that is, whether the acquired voice is the voice of the wearer or the voice of another person and whether or not the acquired voice includes the collision sound. For example, it is also possible to adopt a configuration in which it is first determined whether or not the acquired voice includes the collision sound and then determined whether the acquired voice is the voice of the wearer or the voice of another person.
In addition, in the above explanation, when the voice analysis unit 15 determines that the acquired voice includes the collision sound, no distinction between the voice of the wearer and the voice of another person is made. However, the invention is not limited to this. For example, it is also possible to adopt a configuration in which, when the voice analysis unit 15 determines that the acquired voice includes the collision sound, the voice analysis unit 15 removes noise from the voices acquired by the first and second microphones 11 and 12 (performs filtering processing) and also determines that the voice of the wearer has been acquired at the acquisition time of this voice. In this case, erroneously determining the wearer's own voice to be the voice of another person is suppressed.
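The collision-sound-first variant described in these two paragraphs can be sketched as a simple decision flow. The function names `detect_collision`, `classify_speaker`, and `filter_noise` are hypothetical stand-ins passed as parameters, not functions of the embodiment.

```python
def analyze(voice, detect_collision, classify_speaker, filter_noise):
    """Sketch of the variant in which the collision-sound check is
    performed first. All three callables are illustrative assumptions."""
    if detect_collision(voice):
        # Remove the collision noise (filtering processing) and treat the
        # result as the wearer's own speech at this acquisition time.
        clean = filter_noise(voice)
        return ("wearer", clean)
    # Otherwise classify the sample as wearer speech or another's speech.
    return (classify_speaker(voice), voice)
```

Running the collision check first suppresses the case where a contact noise on the strap causes the wearer's own speech to be misclassified as another person's voice.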
The foregoing description of the exemplary embodiments of the invention has been provided for the purpose of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention is defined by the following claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
2011-283452 | Dec 2011 | JP | national