1. Field of the Invention
The present invention relates to an information transmission device which is installed on a robot or a computer and performs information transmission with a person.
2. Description of Relevant Art
Conventionally, switch or keyboard operation, voice input/output, and image display have been used for information transmission between a person and a machine. These tools are sufficient for transmitting information that can be represented by a symbol or a word, but other types of information have not been supposed to be transferred.
However, information transmission between a machine and a person should be easy, accurate, and friendly, in view of the expected increase in contact between machines and people in the future. For this purpose, it is important to transfer not only information like a symbol or a word but also other types of information like emotion.
For exchanging information between a machine and a person, means for transmitting information from a person to a machine and means for transmitting information from a machine to a person are required. For expressing an internal state by the latter means, the internal state has conventionally been expressed by adding prosody to a synthetic voice, by providing a quasi face with an emotional look on a machine, or by combining such visual and auditory information.
In the case of the machine interface apparatus disclosed in Japanese unexamined patent publication JP H06-139044, for example, an emotional parameter of an agent changes in accordance with the result of a task or with words addressed by a user. Then, a natural language message, which is selected based on the emotional parameter, is provided to the user as a voice message. Additionally, an image corresponding to the selected natural language is displayed.
In the case of the invention disclosed in Japanese unexamined patent publication JP 2002-066155, a feeling value of a robot changes when words are addressed to the robot or the robot is touched by a user. Herewith, the robot utters a reply-sound corresponding to the feeling value and changes its eye color to the color corresponding to the feeling value.
In the case of the invention disclosed in Japanese unexamined patent publication JP 2003-084800, a voice message with an emotion is synthesized and sounded in combination with the light of an LED corresponding to the message with the emotion.
Here, for performing friendly information transfer between a machine and a human, it is important that the machine recognizes the emotion of the person and the person recognizes the internal state of the machine. However, all of the above described inventions are focused on the internal state of the machine, and none of them gives any consideration to the emotion of the other party (the person). Therefore, an information transmission device which enables friendly information transmission between a machine and a human has been required.
The present invention relates to an information transmission device which analyzes the diction of a speaker and provides an utterance in accordance with the diction of the speaker. This information transmission device includes a microphone detecting a sound signal of the speaker, a feature extraction unit extracting at least one feature value of the diction of the speaker based on the sound signal detected by the microphone, a voice synthesis unit synthesizing a voice signal to be uttered so that the voice signal has the same feature value as the diction of the speaker, based on the feature value extracted by the feature extraction unit, and a voice output unit performing an utterance based on the voice signal synthesized by the voice synthesis unit.
According to this information transmission device, the voice signal to be uttered from the voice output unit is modulated by the voice synthesis unit so that the voice signal has the same feature value as the diction of the other person (the speaker). That is, since the utterance from the information transmission device becomes similar to the utterance of the speaker, a communication can be realized as if the device recognized the emotion of the speaker.
In the case of a person who speaks slowly, such as an elderly person, since the information transmission device also utters slowly, the person can catch the utterance easily.
In the case of an impatient person who speaks rapidly, the information transmission device can utter words rapidly by using the utterance speed as a feature value. Thereby, since the diction of the information transmission device agrees with the diction of the other person and the tempo of the utterance is not interrupted, intimate communication beyond emotional communication can also be performed easily.
The information transmission device of the present invention may include a voice recognition unit, which recognizes a phoneme from the sound signal detected by the microphone by comparison to a sound model of a phoneme memorized beforehand. In this case, the feature extraction unit extracts the feature value based on the phoneme recognized by the voice recognition unit.
In the present invention, furthermore, the feature extraction unit may extract at least one of a sound pressure of the sound signal and a pitch of the sound signal as the feature value. In the present invention, additionally, the feature extraction unit may extract a harmonic structure after the frequency analysis of the sound signal, and may regard the fundamental frequency of the harmonic structure as the pitch, and regard the pitch as the feature value.
In the present invention, still furthermore, the voice synthesis unit has a wave-form template database in which phonemes and voice waveforms are correlated. In this case, the voice synthesis unit reads out the voice waveform corresponding to each phoneme of a phoneme sequence to be uttered, and modulates the voice waveform based on the feature value to synthesize the sound signal.
In the present invention, additionally, the information transmission device may include an emotion estimation part, which computes at least one feature quantity to be used for the estimation of the emotion from the feature value and estimates the emotion of the speaker based on the at least one feature quantity, and a color output part, which indicates a color corresponding to the emotion estimated by the emotion estimation part so that the indication of the color is synchronized with the output of the voice from the voice output unit. In this case, since the color corresponding to the emotion of the other person can be indicated, the internal state of the device can be transferred to the other person clearly.
For the estimation of the emotion, it is preferable that the emotion estimation part has a first emotion database, in which the relations between at least one feature quantity, a type of emotion, and a phoneme or a phoneme sequence are recorded. In this case, the emotion estimation part estimates the emotion by computing at least one feature quantity for each phoneme or phoneme sequence extracted by the voice recognition unit, comparing the computed feature quantities with the feature quantities in the first emotion database, finding the closest one, and referring to the corresponding emotion.
In the present invention, additionally, the emotion estimation part may have a second emotion database, in which the relation between at least one feature quantity and the type of emotion is recorded. In this case, the emotion of the speaker can be estimated by finding the emotion in the second emotion database whose feature quantity is closest to the at least one feature quantity computed from the feature value.
In the present invention, furthermore, the second emotion database, which stores the correlation between the emotion and at least one feature quantity, may be provided. Here, the correlation is obtained as a result of the learning of a three-layer perceptron using the computed feature quantities, which are obtained for each emotion from at least one utterance detected by the microphone.
In the present invention, additionally, the information transmission device may include an emotion input part, to which the emotion of the speaker is inputted, e.g. by the speaker himself, and a second color output part, which indicates a color corresponding to the emotion inputted through the emotion input part so that the indication of the color is synchronized with the output of the voice from the voice output unit.
According to this information transmission device, an intimate communication can be achieved by changing the color of the apparatus according to the user's operation, depending on a situation.
According to the present invention, since the information transmission device can provide an utterance in compliance with the diction of the speaker, an intimate communication between the device and a person can be achieved.
Next, preferred embodiments of the present invention will be explained in detail with reference to the attached drawings.
An information transmission device 1 of the present embodiment is an apparatus which analyzes a diction of a person (speaker) and utters words in accordance with the diction of the speaker. Additionally, the information transmission device 1 expresses an internal state thereof by changing the color, e.g. the color of a body, a head etc., at the time of utterance. Here, the internal state of the information transmission device 1 varies in accordance with the diction of the speaker.
The information transmission device 1 is installed on a robot or home electric appliances and has a conversation with a person. Typically, the information transmission device 1 can be realized by using a general-purpose computer having a CPU (Central Processing Unit), a recording unit, an input device including a microphone, and an output device such as a speaker. The function of the information transmission device 1 can be realized by running a program stored in the recording unit on the CPU.
The information transmission device 1 includes a microphone M, a feature extraction unit 10, a voice recognition unit 20, a voice synthesis unit 30, a voice output unit 40, a color generation unit 50, a speaker unit S, and an LED 60, each of which is described below.
Microphone M
The microphone M is a device for detecting a sound within the surrounding area of the information transmission device 1. The microphone M detects a voice of a person (speaker) as a sound signal and supplies the sound signal to the feature extraction unit 10.
Feature Extraction Unit 10
The feature extraction unit 10 is a unit for extracting features from a voice (sound signal) of a speaker. In this embodiment, the feature extraction unit 10 extracts sound pressure data, pitch data, and phoneme data as feature values. The feature extraction unit 10 includes a sound pressure analyzer 11, a frequency analyzer 12, a peak extractor 13, a harmonic structure extractor 14, and a pitch extractor 15.
Sound Pressure Analyzer 11
The sound pressure analyzer 11 computes an energy value of the sound signal entered from the microphone M at each predetermined shift interval, e.g. 10 [msec]. Then, the sound pressure analyzer 11 calculates the average of the energy values of the shifts which correspond to a phoneme duration. Here, the duration of the phoneme is acquired from the voice recognition unit 20.
The sound pressure data, i.e. the value of the sound pressure together with a starting time tn and a duration, is supplied to the voice synthesis unit 30 and the color generation unit 50.
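This processing can be sketched in Python as follows; it is a minimal sketch in which the 10 [msec] shift interval follows the description above, while the function name and the definition of the energy (mean squared amplitude per shift) are assumptions for illustration.

```python
import numpy as np

def sound_pressure_for_phoneme(signal, rate, start, duration, shift=0.010):
    """signal: NumPy array of samples; start and duration in seconds.
    Average the energy of 10 msec shifts covering one phoneme."""
    hop = int(rate * shift)                      # samples per shift interval
    begin, end = int(rate * start), int(rate * (start + duration))
    energies = [float(np.mean(signal[i:i + hop] ** 2))
                for i in range(begin, end, hop) if i + hop <= len(signal)]
    return float(np.mean(energies)) if energies else 0.0
```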
Frequency Analyzer 12
In the frequency analyzer 12, a frequency analysis of the sound signal entered from the microphone M is performed for each time window of 25 [msec], and a spectrum SP is obtained. The spectrum SP is supplied to the peak extractor 13 and the voice recognition unit 20.
Peak Extractor 13
The peak extractor 13 extracts a series of peaks from the spectrum SP. The extraction of the peaks is performed by extracting local peaks of the spectrum or by using a spectral subtraction method (S. F. Boll, "A spectral subtraction algorithm for suppression of acoustic noise in speech," Proceedings of the 1979 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-79)).
In the latter method (the spectral subtraction method), peaks are first extracted from the spectrum (the original spectrum), and then a residual spectrum is generated by subtracting the extracted peaks from the original spectrum. The processing of peak extraction and generation of the residual spectrum is repeated until no peaks are found in the residual spectrum.
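A simplified sketch of this iterative loop is shown below; it extracts local maxima and zeroes them out rather than performing a true spectral subtraction, and the peak width, the stopping threshold, and the function name are assumptions.

```python
import numpy as np

def extract_peaks(spectrum, rel_threshold=0.05, width=2):
    """Extract peaks one by one, remove each from the residual spectrum,
    and repeat until no significant peak remains."""
    residual = np.asarray(spectrum, dtype=float).copy()
    floor = rel_threshold * residual.max()
    peaks = []
    while True:
        k = int(np.argmax(residual))
        if residual[k] <= floor:        # no peaks found in the residual spectrum
            break
        peaks.append(k)                 # k is the bin index of the peak
        lo, hi = max(k - width, 0), min(k + width + 1, len(residual))
        residual[lo:hi] = 0.0           # generate the residual spectrum
    return sorted(peaks)
```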
Harmonic Structure Extractor 14
The harmonic structure extractor 14 groups the peaks according to the harmonic structure which a sound source inherently has.
A human voice, for example, has a harmonic structure, which consists of a fundamental frequency and its harmonics. Therefore, the grouping of peaks can be performed in consideration of this rule.
The peaks allocated to the same group based on the harmonic structure can be assumed to come from the same sound source. For example, if two speakers are talking simultaneously, two harmonic structures are extracted.
Here, if the frequencies of the peaks obtained by the frequency analysis are 100 [Hz], 200 [Hz], 300 [Hz], 310 [Hz], 500 [Hz], and 780 [Hz], the frequencies of 100 [Hz], 200 [Hz], 300 [Hz], and 500 [Hz] are grouped, and the frequencies of 310 [Hz] and 780 [Hz] are ignored.
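The grouping can be illustrated with the following sketch, which tries each peak as a candidate fundamental and keeps the largest group of near-integer multiples; the 3% tolerance and the function name are assumptions.

```python
def group_harmonics(peak_freqs, tol=0.03):
    """Keep the largest group of peaks lying on one harmonic structure
    (near-integer multiples of a candidate fundamental frequency)."""
    best = []
    for f0 in peak_freqs:                       # try each peak as a fundamental
        group = [f for f in peak_freqs
                 if round(f / f0) >= 1 and abs(f / f0 - round(f / f0)) <= tol]
        if len(group) > len(best):
            best = group
    return best

# The example from the description above:
print(group_harmonics([100, 200, 300, 310, 500, 780]))  # [100, 200, 300, 500]
```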
Pitch Extractor 15
The pitch extractor 15 selects, as the pitch of the detected voice, the lowest frequency, i.e. the fundamental frequency, of the peak group grouped by the harmonic structure extractor 14. Then, the pitch extractor 15 checks whether or not the pitch is within a predetermined range, that is, between 80 [Hz] and 300 [Hz].
If the frequency of the peak selected by the pitch extractor 15 is not within this range, or if the difference from the pitch of the previous time window exceeds ±50%, the pitch of the previous time window is adopted instead of that of the present time window. When the number of pitches corresponding to the duration of a phoneme is obtained, averaging over the duration is performed. Then, the result is supplied to the voice synthesis unit 30 and the color generation unit 50 together with a starting time t and a duration.
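The range check and the ±50% continuity rule can be sketched as follows; the function name and the handling of the very first time window, where no previous pitch exists, are assumptions.

```python
def select_pitch(fundamental, prev_pitch, lo=80.0, hi=300.0):
    """Adopt the fundamental frequency as the pitch only if it lies in
    80-300 Hz and does not jump more than +/-50% from the previous window."""
    if fundamental is None or not (lo <= fundamental <= hi):
        return prev_pitch               # out of range: keep the previous pitch
    if prev_pitch is not None and abs(fundamental - prev_pitch) > 0.5 * prev_pitch:
        return prev_pitch               # discontinuous: keep the previous pitch
    return fundamental
```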
Voice Recognition Unit 20
The voice recognition unit 20 extracts, for each shift interval, a feature (this is different from the "feature value" of the present invention) of the inputted voice based on the spectrum supplied from the frequency analyzer 12. Then, the voice recognition unit 20 recognizes a phoneme of the voice from the extracted feature. As the feature of the voice, a linear spectrum, Mel-frequency cepstrum coefficients, or LPC cepstrum is adoptable.
Additionally, the recognition of the phoneme can be performed by an HMM (Hidden Markov Model) using the correlation between a sound model and a phoneme stored beforehand.
When the phonemes are extracted, a phoneme sequence, which is the list of the detected phonemes, and the starting time and duration of each phoneme are thus obtained. Here, the starting time is the time at which the speaker began to speak, and this starting time may be assigned to "0".
Voice Synthesis Unit 30
The voice synthesis unit 30 includes a voice synthesizer 31 and a wave-form template database 32. This voice synthesis unit 30 generates the signal of a voice to be uttered based on sound pressure data, pitch data, phoneme data, and data stored in the wave-form template database 32. Here, sound pressure data, pitch data, and phoneme data are the feature values entered from the feature extraction unit 10. The wave-form template database 32 stores phonemes and voice waveforms which are correlated with each other.
Voice Synthesizer 31
The voice synthesizer 31 refers to the wave-form template database 32 based on phoneme data entered from the feature extraction unit 10, and performs a readout of a voice waveform, which serves as a template and corresponds to phoneme data. Here, the voice waveform which serves as a template is referred to as “wave-form template”.
Then, the voice synthesizer 31 modulates the wave-form template in compliance with the sound pressure and the pitch when sound pressure data and pitch data are entered from the feature extraction unit 10. For example, the amplitude of the wave-form template is scaled so that the sound pressure of the wave-form agrees with the sound pressure data.
If the pitch frequency of the pitch data is 120 [Hz] and the pitch of the wave-form template is 100 [Hz], the wave-form template is scaled by 100/120 in the direction of the time-axis. Then, the wave-form obtained by this modulation is connected repeatedly so that the length of the connected wave-form becomes the same as the duration of the phoneme. Thereby, the voice waveform is synthesized and entered to the voice output unit 40. After synthesizing a phoneme which has the same length as the duration of the inputted phoneme, the next phoneme is inputted and the same process is repeated. When all phonemes are synthesized, they are connected and the obtained wave-form is served to the voice output unit 40.
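The modulation can be sketched as follows; the description fixes the time-axis ratio (e.g. 100/120) and the agreement of the sound pressure, while the linear-interpolation resampling, the energy-based amplitude scaling, and the function name are assumptions.

```python
import numpy as np

def modulate_template(template, template_pitch, target_pitch,
                      target_energy, duration, rate):
    """Scale one wave-form template in time (pitch) and amplitude (sound
    pressure), then repeat it until the phoneme duration is filled."""
    # Pitch: a 100 Hz template scaled by 100/120 along the time axis
    # becomes a 120 Hz period.
    ratio = template_pitch / target_pitch
    n = max(1, int(round(len(template) * ratio)))
    wave = np.interp(np.linspace(0, 1, n),
                     np.linspace(0, 1, len(template)), template)
    # Sound pressure: scale the amplitude so the energy matches the target.
    energy = np.mean(wave ** 2)
    if energy > 0:
        wave *= np.sqrt(target_energy / energy)
    # Duration: connect copies of the modulated wave-form.
    total = int(duration * rate)
    return np.tile(wave, int(np.ceil(total / len(wave))))[:total]
```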
Voice Output Unit 40
The voice output unit 40 converts the wave-form entered from the voice synthesizer 31 into a voice signal, and outputs the voice signal to the speaker unit S. That is, the voice output unit 40 performs a D/A conversion of the voice waveform to obtain the voice signal. Then, the voice output unit 40 amplifies the voice signal and transmits it to the speaker unit S at a suitable timing. In this embodiment, for example, the voice signal may be transmitted three seconds after the termination of the utterance of the speaker.
Color Generation Unit 50
The color generation unit 50 includes an emotion estimation part 51, an emotion input part 52, and a color output part 53, which are described below.
Emotion Estimation Part 51
The emotion estimation part 51 estimates the emotion of the speaker based on sound pressure data, pitch data, and phoneme data, which are entered from the feature extraction unit 10, and on data stored beforehand in a first emotion database 51a.
The first emotion database 51a is generated as a result of learning by a learning part 51c.
The learning part 51c computes feature quantities, which are used for the estimation of the emotion, from the feature values extracted from the voice, and then generates data (correlation data) obtained by correlating a feature quantity with an emotion.
Generally, since a pitch, a duration of a phoneme, and a volume (a sound pressure) reflect the emotion of a speaker, the emotion of the speaker can be estimated from pitch data, phoneme data, and sound pressure data together with the correlation data.
The generation of the database is performed according to the following procedure:
(1) leading a person to read some texts, e.g. 1000 texts, in various manners. For example, utterance of the texts with emotions, such as joy, anger, and sadness, or without emotion (a neutral utterance), is performed;
(2) obtaining sound pressure data, pitch data, and phoneme data by the feature extraction unit 10 and the voice recognition unit 20, after detecting the sound of each utterance with the microphone M;
(3) computing the feature quantities (see below) by the learning part 51c from each of sound pressure data, pitch data, and phoneme data; and
(4) correlating the emotion of each utterance with each computed feature quantity.
Feature Quantity
The feature quantity to be computed in the above procedure (3) is obtained as follows.
fav: an average pitch frequency (the average of the pitches included in a predetermined section).
pav: an average sound pressure (the average of the sound pressure data included in a predetermined section).
d: a phoneme density (a value obtained by dividing the number n of phonemes included in a predetermined section by the time length of the section).
fdif: an average pitch variation rate (the variation rate of the pitch frequency in the predetermined section, obtained from the average pitch frequency of each of three sub-sections generated by dividing the predetermined section. For example, "fdif" is obtained as the slope of a linear function which approximates the relation between time and the average pitch).
pdif: an average sound pressure variation rate (the variation rate of the sound pressure in the predetermined section, obtained from the average sound pressure of each of three sub-sections generated by dividing the predetermined section. For example, "pdif" is obtained as the slope of a linear function which approximates the relation between time and the average sound pressure).
fav/Fav: a pitch index (the ratio of fav of the predetermined section to Fav).
pav/Pav: a sound pressure index (the ratio of pav of the predetermined section to Pav).
n/N: a phoneme index (the ratio of n to N).
Here, Fav denotes the average pitch frequency over the whole utterance, Pav denotes the average sound pressure over the whole utterance, and N denotes the average number of phonemes in the utterance.
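Under the assumption that a section is represented by its per-shift pitch and sound pressure sequences, the eight feature quantities can be sketched as follows; the function name and data layout are illustrative, while the least-squares slope for "fdif" and "pdif" follows the linear-function approximation described above.

```python
import numpy as np

def feature_quantities(pitches, pressures, n_phonemes, section_time,
                       Fav, Pav, N):
    """Compute the eight feature quantities for one predetermined section.
    Fav, Pav, and N are the whole-utterance averages defined above."""
    pitches, pressures = np.asarray(pitches), np.asarray(pressures)
    fav, pav = float(pitches.mean()), float(pressures.mean())
    d = n_phonemes / section_time               # phoneme density
    # fdif / pdif: slope of a line fitted to the three sub-section averages.
    sub_avg = lambda x: [float(s.mean()) for s in np.array_split(x, 3)]
    fdif = float(np.polyfit(range(3), sub_avg(pitches), 1)[0])
    pdif = float(np.polyfit(range(3), sub_avg(pressures), 1)[0])
    return {"fav": fav, "pav": pav, "d": d, "fdif": fdif, "pdif": pdif,
            "fav/Fav": fav / Fav, "pav/Pav": pav / Pav, "n/N": n_phonemes / N}
```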
In the present embodiment, additionally, two types of databases are prepared as the first emotion database 51a. One is the database generated based on the utterance of a specific person, and the other is the database generated based on the utterance of a non-specific person. Here, the database for a non-specific person is generated by averaging the feature quantities obtained from the utterances of a plurality of persons.
The first emotion database 51a stores the data obtained by correlating an emotion, a phoneme sequence, and each feature quantity. Here, the feature quantity is at least one among the eight feature quantities described above.
If the content of the text is "Saviola ga Monaco e kigentsuki no iseki wo shita", for example, the utterance of the text is performed for each emotion (joy, anger, sadness, and neutral). Then, each utterance with each emotion is divided into predetermined sections, e.g. three sections of equal time-length.
In this embodiment, alternatively, the predetermined sections may be divided at inflection points of the pitch in the whole utterance, or divided so that each section contains the same number of phonemes. At least one of the eight feature quantities is calculated for each section.
The emotion database of the present embodiment is not limited to the first emotion database 51a. For example, the following second emotion database may be used instead of the first emotion database 51a.
The second emotion database includes the data obtained by correlating at least one feature quantity among the eight feature quantities with the emotion. Therefore, data relating to the phoneme is not included.
The data stored in the second emotion database is obtained as a result of learning (statistical learning). Here, the learning is performed as follows: firstly, each of the feature quantities described above is computed for every utterance of the texts, and the computed feature quantities are assigned to the group of the emotion with which the utterance was performed.
For example, if the number of the texts is 100, a total of 100 feature quantities assigned to "joy" are obtained. Thus, the learning of a three-layer perceptron is performed using the obtained feature quantities as the training data (here, the size of the input layer corresponds to the number of feature quantities and the size of the middle layer is arbitrary). The learning is similarly performed for the feature quantities assigned to each group, such as "joy", "sadness", and "neutral".
In this manner, a neural network, in which the feature quantities and the emotion are correlated with each other, is obtained.
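A minimal sketch of the three-layer perceptron learning is shown below, assuming sigmoid units, a squared-error criterion, and no bias terms; the hidden-layer size and learning rate are arbitrary, as noted above for the middle layer.

```python
import numpy as np

def train_three_layer_perceptron(X, Y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """X: feature quantities (NumPy array, one row per utterance);
    Y: one-hot emotion labels (one row per utterance)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))   # input -> middle layer
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1]))   # middle -> output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        dO = (O - Y) * O * (1.0 - O)      # gradient of the squared error
        dH = (dO @ W2.T) * H * (1.0 - H)  # backpropagated to the middle layer
        W2 -= lr * H.T @ dO
        W1 -= lr * X.T @ dH
    return W1, W2
```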
An estimation part 51b divides an inputted voice into three time-sections of equal length in the same manner as the processing at the time of learning, and computes the feature quantities applied to the first emotion database 51a from sound pressure data, phoneme data, and pitch data; for example, the phoneme densities d1, d2, and d3 and the average pitch variation rates fdif1, fdif2, and fdif3.
This computing is performed by calculating the Euclidean distance between the feature vector of the inputted voice and a corresponding vector in the first emotion database 51a. In this embodiment, for example, one of the vectors is the vector in which the obtained phoneme densities d1, d2, and d3, the average pitch variation rates fdif1, fdif2, and fdif3, and the phonemes of the inputted voice are adopted as elements. The other vector is the vector in which the corresponding phoneme densities, average pitch variation rates, and phonemes recorded in the first emotion database 51a are adopted as elements.
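The nearest-neighbor search can be sketched as follows; since the phoneme elements of the vectors are not numeric, this sketch assumes the comparison is restricted to database entries sharing the same phoneme sequence, so that only the numeric feature quantities enter the Euclidean distance.

```python
import numpy as np

def estimate_emotion(query, database):
    """database: iterable of (emotion, feature_vector) pairs; the entry
    with the smallest Euclidean distance to the query decides the emotion."""
    query = np.asarray(query, dtype=float)
    best, best_dist = None, np.inf
    for emotion, vector in database:
        dist = np.linalg.norm(query - np.asarray(vector, dtype=float))
        if dist < best_dist:
            best, best_dist = emotion, dist
    return best
```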
When using the second emotion database, on the other hand, the estimation part 51b divides an inputted voice into three predetermined sections in the same manner as the processing at the time of learning, and computes the feature quantities applied to the second emotion database from sound pressure data, phoneme data, and pitch data. That is, the estimation part 51b computes the phoneme densities d1, d2, and d3 and the average pitch variation rates fdif1, fdif2, and fdif3. Then, the computed feature quantities are processed under a predetermined procedure, which was generated through the learning of the relation between the feature quantities and the emotion, and the emotion is estimated based on the output of the predetermined procedure. In this embodiment, for example, a neural network, an SVM, or another statistical method corresponds to this predetermined procedure.
When the estimation of the emotion is performed using the second emotion database, the emotion of the speaker can be estimated without relying on the phoneme. The estimation of the emotion is thus enabled even in the case where the speaker utters words or sentences which have never been heard before.
In the case of words or sentences which are often spoken, on the other hand, the use of the first emotion database 51a, which relies on the phoneme, provides increased accuracy of the estimation. Therefore, a flexible and highly accurate estimation of the emotion can be enabled by providing both the first emotion database 51a and the second emotion database and switching between the databases in accordance with the type of the speech of the speaker.
Emotion Input Part 52
The emotion input part 52 is used for inputting the emotion by the operation of a user, such as the speaker, and is provided with a mouse, a keyboard, or a specific button for enabling the input of the type (e.g. joy, anger, and sadness) of the emotion.
In this embodiment, the provision of the emotion input part 52 is optional. The information transmission device may include a device for inputting the strength of the internal state, e.g. the expressed emotion, in addition to the type of the emotion. In this case, for example, the input of the strength of the emotion may be achieved by using a number between 0 and 1.
Color Output Part 53
The color output part 53 (a color output part and a second color output part) expresses the emotion entered from the emotion estimation part 51 or the emotion input part 52, and includes a color selector 53a, a color intensity modulator 53b, and a color adjustor 53c.
The color selector 53a selects the color in consideration of the entered emotion. The correlation between the emotion and the color is determined based on investigations in the area of color psychology, e.g. Scheie's color psychology. In this embodiment, for example, the emotion of "joy" is indicated by "yellow", the emotion of "anger" is indicated by "red", and the emotion of "sadness" is indicated by "blue"; the relation between the emotion and the color is determined and stored beforehand. If the estimated emotion is "neutral", since it is not required to change the color, the processing with regard to the color is terminated.
The color intensity modulator 53b computes the intensity of the color, i.e. the intensity of the light, for each phoneme data. In this embodiment, the intensity of the light is denoted by a number from 0 to 1. If the input of phoneme data has started, i.e. if the utterance has started, the color intensity modulator 53b outputs "1", and if the input of phoneme data has terminated, i.e. if the utterance has terminated, the color intensity modulator 53b outputs "0". Here, if the intensity of the emotion was inputted by the user's operation, the color intensity modulator 53b outputs the intensity entered by the user.
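The selection logic can be sketched as follows; the color correlations mirror the description above, while the function name and the return convention are assumptions.

```python
EMOTION_TO_COLOR = {"joy": "yellow", "anger": "red", "sadness": "blue"}

def color_command(emotion, uttering, user_intensity=None):
    """Return (color, intensity) for the expression device, or None for
    a "neutral" emotion, for which no color change is required."""
    if emotion == "neutral":
        return None
    if user_intensity is not None:        # intensity entered by the user wins
        intensity = user_intensity
    else:
        intensity = 1.0 if uttering else 0.0
    return EMOTION_TO_COLOR[emotion], intensity
```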
The color adjustor 53c adjusts the output to the LED 60, which serves as an expression device, based on the color entered from the color selector 53a and the intensity of the color entered from the color intensity modulator 53b.
Here, if at least one LED 60 is installed on the head RH of the robot R, the color adjustor 53c turns on the LED 60 in the selected color and adjusts the number of the LEDs 60 to be lit in accordance with the intensity of the color.
Here, if the information transmission device 1 has a display, the indication of the color may be performed using the display, on which the head RH of the robot R is expressed. In this case, for example, the color of the head RH expressed on the display is changed.
Next, the motion of the information transmission device 1 having the above described components will be explained step by step (S1 to S17).
Firstly, a frequency analysis of the sound signal detected by the microphone M is performed for each time window of 25 [msec] by the frequency analyzer 12 (S1). Then, voice recognition is performed by the voice recognition unit 20 based on the relation between the phoneme and the sound model, and the phoneme is extracted (S2). The extracted phoneme is outputted together with its duration to the sound pressure analyzer 11, the pitch extractor 15, and the voice synthesis unit 30.
Next, the sound pressure is computed by the sound pressure analyzer 11 (S3), and sound pressure data is entered to the voice synthesis unit 30 and the color generation unit 50. On this occasion, since the data relating to the duration of the phoneme is entered from the voice recognition unit 20, the sound pressure is computed for each phoneme.
Then, for extracting the pitch, the peak extractor 13 detects peaks from the result of the frequency analyzer 12 (S4), and the harmonic structure is extracted from the frequency arrangement of the detected peaks (S5).
Then, the peak which has the lowest frequency among the peaks within the harmonic structure is selected, and if the frequency of this peak is within 80 [Hz] to 300 [Hz], this peak is regarded as the pitch. If the peak is not within 80 [Hz] to 300 [Hz], another peak which satisfies this requirement is selected as the pitch (S6).
Next, the emotion estimation part 51 of the color generation unit 50 computes the feature quantities (e.g. the phoneme density d and the average pitch variation rate fdif) from sound pressure data, phoneme data, and pitch data, and compares them with the feature quantities in the first emotion database 51a. Then, the emotion estimation part 51 estimates the emotion by choosing the emotion whose feature quantities are closest to those of the inputted voice (S7).
Next, the color output part 53 selects the color which is proper for the emotion estimated by the emotion estimation part 51, based on the relation between the color and the emotion stored beforehand. Then, the color output part 53 adjusts, based on the intensity of the emotion, the intensity (the number of the LEDs 60) of the light expressing the internal state (S8).
Meanwhile, the voice synthesis unit 30 generates a voice signal in compliance with the diction of the speaker (S9-S16). In other words, the voice synthesis unit 30 generates a voice signal having the same feature values as the detected voice.
To be more precise, firstly, the pitch frequency, phoneme data, and sound pressure data are entered to the voice synthesizer 31 (S9).
Additionally, the duration of the phoneme is read out (S10). Then, the wave-form template which matches the phoneme data is selected with reference to the wave-form template database 32 (S11).
The modulation of the wave-form template is performed in compliance with the sound pressure data and the pitch frequency (S12 and S13). By this operation, the voice signal to be sounded by the information transmission device 1 agrees with the loudness and pitch of the speaker.
Next, the modulated wave-form template is connected with the wave-form templates that have already been modulated and connected (S14).
If the duration of the connected wave-form is shorter than the duration of the phoneme (S15, No), the connection of the wave-form template is repeated (S14). If not (S15, Yes), enough waves have been connected for the phoneme, and the processing proceeds to the next step.
Then, if the next phoneme data exists (S16, Yes), the processing of steps S9 to S16 is repeated to generate the sound signal of that phoneme. If the next phoneme data does not exist (S16, No), the synthesized voice is outputted together with the output (indication) of the color (S17).
According to the information transmission device 1 of the present embodiment, information is transmitted with a voice signal which is synthesized in accordance with the diction of the speaker. That is, since the apparatus adopts the same diction as the speaker, the speaker can sympathize with the apparatus, and information can be transmitted smoothly.
In this embodiment, additionally, the emotion of the speaker is estimated and the color corresponding to the emotion is displayed together with the utterance. This gives the speaker the feeling that the apparatus has recognized his or her emotion. Thereby, intimate communication is enabled, which will be useful for the dissolution of the digital divide.
Although there have been disclosed what are the preferred embodiments of the invention, it will be understood by persons skilled in the art that variations and modifications may be made thereto without departing from the scope of the invention, which is indicated by the appended claims.
In this embodiment, for example, the utterance is performed by mimicking the sound pressure and pitch of the speaker. Alternatively, an utterance may be performed by mimicking the utterance speed of the speaker.
In this case, the utterance speed of the speaker is identified by computing the average duration of the phonemes in the utterance. Then, the duration of each phoneme to be uttered is changed in compliance with the utterance speed. Thereby, an utterance suitable for the utterance speed of the speaker is enabled.
According to this construction, since the information transmission device 1 utters words slowly when an elderly person speaks slowly to it, the comprehension of the uttered words becomes easy for the elderly person.
Conversely, since the information transmission device 1 utters words rapidly when an impatient person speaks rapidly to it, the impatient person is not irritated. Thus, smooth communication is attained by adjusting the utterance speed in accordance with the speaker.
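This speed matching can be sketched as follows; the nominal phoneme duration of 0.12 [sec] is an assumed reference value, not taken from the embodiment, and the function name is illustrative.

```python
def match_utterance_speed(template_durations, speaker_durations,
                          nominal_duration=0.12):
    """Stretch the durations of the phonemes to be uttered so that the
    device's speed follows the speaker's average phoneme duration."""
    average = sum(speaker_durations) / len(speaker_durations)
    ratio = average / nominal_duration    # >1: slow speaker, <1: fast speaker
    return [d * ratio for d in template_durations]
```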
Typically, the present invention can be easily realized by performing the calculation and analysis of sound data using a program installed beforehand in a computer which has a CPU, a recording unit, etc. However, such a general-purpose computer is not always required, and the present invention can also be realized by using an apparatus equipped with a dedicated circuit.
In the wave-form template database 32, additionally, it is not always required that one wave-form template is correlated with one phoneme. A plurality of wave-form templates may be correlated with the same phoneme. In this case, the voice waveform may be generated by connecting wave-form templates selected from among the plurality of wave-form templates.
For example, the wave-form template database can store a plurality of wave-form templates (e.g. 2500 different templates), each of which differs in pitch, time length, and sound pressure, for each phoneme.
In this case, the voice synthesizer 31 selects, for each phoneme to be uttered, the wave-form template which is closest in pitch, sound pressure, and duration. Then, the voice synthesizer 31 generates the voice by connecting the wave-form templates after performing a fine-tuning of their pitch, sound pressure, and duration.
In this embodiment, additionally, the region whose color is changed in compliance with the emotion of the speaker is not limited to the head. The color of a part or the whole of the regions visible from the outside may be changed instead.
Priority Applications
JP 2004-267378 (Sep 2004)
JP 2005-206755 (Jul 2005)
Foreign Patent Documents
JP 06-139044 (May 1994)
JP 2001-215993 (Aug 2001)
JP 2002-066155 (Mar 2002)
JP 2002-264053 (Sep 2002)
JP 2003-084800 (Mar 2003)
JP 2003-150194 (May 2003)
JP 2004-061666 (Feb 2004)
JP 2004-109323 (Apr 2004)
Publication Number
US 2006/0069559 A1 (Mar 2006)