The disclosure relates to robot apparatuses and, more particularly, to a robot apparatus with a vocal interactive function and a vocal interactive method for the robot apparatus that selects output data according to weighted values of all output data corresponding to a conversation voice.
There are a variety of robots in the market today, such as electronic toys, electronic pets, and the like. Some robots can output a relevant sound when detecting a predetermined sound, such as a user's voice, from the ambient environment. However, when the predetermined sound is detected, the robot outputs only one predetermined kind of sound. Generally, before the robot is available for market distribution, manufacturers store predetermined input sounds, predetermined output sounds, and relationships between the input sounds and the output sounds in the robot apparatus. When detecting a sound from the ambient environment, the robot outputs an output sound according to the relationship between the input sound and the output sound. Consequently, the robot only produces one fixed output for one fixed input, making the robot repetitive and dull.
Accordingly, what is needed in the art is a robot apparatus that overcomes the aforementioned deficiencies.
A robot apparatus with a vocal interactive function is provided. The robot apparatus comprises a microphone, a storage unit, a recognizing module, a judging module, a selection module, an output module, and an updating module. The microphone is configured for collecting a vocal input from a user, wherein the vocal input is a conversation voice or an evaluation voice, and the evaluation voice is a response to an output of the robot apparatus. The storage unit is configured for storing a plurality of output data corresponding to conversation voices, a weighted value of each of the output data, and an evaluation level table, wherein the evaluation level table stores a plurality of evaluation voices and an evaluation level of each of the evaluation voices, and the weighted value of each of the output data is directly proportional to the evaluation level of an evaluation voice responding to the output data. The recognizing module is capable of recognizing the vocal input.
The judging module is capable of determining whether the vocal input is a conversation voice or an evaluation voice. If the vocal input is a conversation voice, the selection module is capable of selecting one of the output data based on the weighted values of all the output data corresponding to the conversation voice. The output module is capable of outputting the selected output data in response to the conversation voice and recording the selected output data. If the vocal input is an evaluation voice, the updating module is capable of acquiring an evaluation level of the evaluation voice responding to the output data from the evaluation level table, calculating the weighted value of the output data according to the evaluation level, and updating the weighted value.
Other advantages and novel features will become apparent from the following detailed description with reference to the accompanying drawings.
The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the robot apparatus. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
In the exemplary embodiment, the vocal interactive control unit 50 is configured for controlling the robot apparatus 1 to enter a vocal interactive mode or a silent mode. When the robot apparatus 1 is in the vocal interactive mode, the processing unit 30 controls the microphone 10 to collect analog signals of a vocal input from the user. The A/D converter 20 converts the analog signals of the vocal input into digital signals. The processing unit 30 recognizes the digital signals of the vocal input and determines whether the vocal input is a conversation voice or an evaluation voice.
When the robot apparatus 1 is in the silent mode, even if the microphone 10 collects the analog signals of the vocal input, the robot apparatus 1 does not respond to the vocal input or generate any output. In another exemplary embodiment, the robot apparatus 1 collects the vocal input at any time and responds to the vocal input.
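The mode gate can be pictured with the minimal sketch below, assuming the mode is tracked in software; the names Mode, handle_vocal_input, and respond are illustrative only and do not appear in the apparatus, and the response pipeline is stubbed out with a lambda.

```python
from enum import Enum, auto
from typing import Callable, Optional

class Mode(Enum):
    VOCAL_INTERACTIVE = auto()
    SILENT = auto()

def handle_vocal_input(mode: Mode, vocal_input: str,
                       respond: Callable[[str], str]) -> Optional[str]:
    """Pass the vocal input on for a response, unless the apparatus is silent."""
    if mode is Mode.SILENT:
        # In the silent mode the input is collected but no output is generated.
        return None
    # In the vocal interactive mode the input is handed to the response pipeline.
    return respond(vocal_input)

# Example: a trivial response pipeline that always answers "hi".
print(handle_vocal_input(Mode.VOCAL_INTERACTIVE, "hello", lambda s: "hi"))   # hi
print(handle_vocal_input(Mode.SILENT, "hello", lambda s: "hi"))              # None
```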
The storage unit 40 stores a plurality of output data, an output table 401, and an evaluation level table 402. The output table 401 (see the accompanying drawings) records the conversation voices, the output data corresponding to each conversation voice, and the weighted value of each of the output data.
The weighted value column records the weighted value assigned to each of the output data. For example, the weighted value of the output data B3 is WB3. The weighted values can be preconfigured according to a preference, such as the preference of a particular user, for example the dad or the mom. For example, the weighted value of a more preferred output can be increased manually and the weighted value of a less favored output can be decreased manually.
The evaluation voices in the evaluation level table 402 are used to respond to and evaluate the output data in the output table 401. The evaluation level table 402 (see the accompanying drawings) records a plurality of evaluation voices and an evaluation level of each of the evaluation voices.
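A simple way to picture the two tables is as dictionaries, as in the sketch below. The weights 5, 7, and 9 for A1, A2, and A3 are taken from a later example; the entries for conversation voice B, the evaluation voices b1 and b3, and all other numeric values are made-up placeholders.

```python
# Output table 401, sketched as a mapping from each conversation voice to its
# candidate output data and the weighted value currently assigned to each one.
output_table = {
    "A": {"A1": 5.0, "A2": 7.0, "A3": 9.0},   # weighted values WA1, WA2, WA3
    "B": {"B1": 4.0, "B2": 6.0, "B3": 8.0},   # weighted value of B3 is WB3
}

# Evaluation level table 402, sketched as a mapping from each evaluation voice
# to its evaluation level.
evaluation_level_table = {
    "b1": 1.0,
    "b2": 2.0,
    "b3": 3.0,
}
```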
The weighted value of each of the output data is directly proportional to the evaluation level of the evaluation voice responding to the output data. That is, the higher the evaluation level of the evaluation voice is, the higher the weighted value of the output data becomes.
The processing unit 30 includes a recognizing module 301, a judging module 302, a selection module 303, an output module 304, and an updating module 305.
The recognizing module 301 is configured for recognizing the digital signals of the vocal input from the A/D converter 20. The clock unit 80 is configured for measuring time. The judging module 302 is configured for determining whether the vocal input is the conversation voice or the evaluation voice. In the exemplary embodiment, the judging module 302 acquires the current time from the clock unit 80 and judges whether the robot apparatus 1 generated output data in a predetermined time period before the current time. For example, if the predetermined time period is 30 seconds and the current time is 10:20:30 pm, the judging module 302 judges whether the robot apparatus 1 generated output data from 10:20:00 pm to 10:20:30 pm.
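A sketch of this time-window check follows, assuming timestamps are handled with Python's datetime module; the function name is_evaluation_window_open and the concrete dates are illustrative, with the 30-second period taken from the example above.

```python
import datetime as dt
from typing import Optional

PREDETERMINED_PERIOD = dt.timedelta(seconds=30)   # example period from the text

def is_evaluation_window_open(last_output_time: Optional[dt.datetime],
                              current_time: dt.datetime) -> bool:
    """Return True if output data was generated within the predetermined time
    period before the current time, so the next vocal input may be an
    evaluation voice rather than a new conversation voice."""
    if last_output_time is None:
        return False
    elapsed = current_time - last_output_time
    return dt.timedelta(0) <= elapsed <= PREDETERMINED_PERIOD

# Example from the text: output generated at 10:20:00 pm, input at 10:20:30 pm.
generated = dt.datetime(2023, 1, 1, 22, 20, 0)
now = dt.datetime(2023, 1, 1, 22, 20, 30)
print(is_evaluation_window_open(generated, now))   # True
```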
If the robot apparatus 1 did not generate output data in the predetermined time period before the current time, the judging module 302 determines that the vocal input is a conversation voice. The selection module 303 is configured for acquiring all the output data corresponding to the conversation voice from the output table 401 and selecting one of the output data based on the weighted values of all the acquired output data. That is, the higher the weighted value of an output data is, the higher the probability that it is selected. For example, suppose the conversation voice is A and the weighted values WA1, WA2, and WA3 of the output data A1, A2, and A3 are 5, 7, and 9, respectively; the selection module 303 selects the output data A3 because the output data A3 has the highest weighted value.
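The selection can be read as a weighted random draw, in which a higher weighted value gives a higher probability of selection, with the deterministic highest-weight choice of the example as a special case. The sketch below shows both readings; select_output and the candidate dictionary are illustrative names, not part of the described apparatus.

```python
import random
from typing import Dict

def select_output(candidates: Dict[str, float]) -> str:
    """Select one output data; the higher its weighted value, the higher the
    probability that it is selected."""
    names = list(candidates)
    weights = [candidates[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Example from the text: conversation voice A with WA1, WA2, WA3 = 5, 7, 9.
candidates_for_A = {"A1": 5.0, "A2": 7.0, "A3": 9.0}
print(select_output(candidates_for_A))                    # most often "A3"
print(max(candidates_for_A, key=candidates_for_A.get))    # always "A3", as in the text
```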
The output module 304 is configured for acquiring the selected output data in the storage unit 40, outputting the selected output data, and recording the selected output data and the current time from the clock unit 80. The D/A converter 60 converts the selected output data into analog signals. The speaker 70 outputs a vocal output of the selected output data.
If the robot apparatus 1 generated output data in the predetermined time period before the current time, the judging module 302 judges whether the vocal input is from the evaluation level table 402, that is, whether the vocal input is one of the evaluation voices in the evaluation level table 402. If the vocal input is from the evaluation level table 402, the judging module 302 determines that the vocal input is an evaluation voice for the output data, and the updating module 305 acquires the evaluation level of the evaluation voice from the evaluation level table 402, calculates the weighted value of the output data according to the evaluation level, and updates the weighted value in the output table 401. For example, if the output data is A1 and the evaluation voice responding to the output data is b2, the updated weighted value of the output data is V′A1 = f(VA1, Xb), wherein V′A1 is the updated weighted value, VA1 is the previous weighted value, and Xb is the evaluation level of the evaluation voice b2.
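The function f is left unspecified in the description. The sketch below assumes a simple additive form that is monotonically increasing in the evaluation level, consistent with the statement that a higher evaluation level yields a higher weighted value; the name update_weight, the step factor, and the concrete numbers are illustrative only.

```python
def update_weight(previous_weight: float, evaluation_level: float,
                  step: float = 1.0) -> float:
    """One possible form of V' = f(V, X): the updated weighted value grows
    with the evaluation level of the received evaluation voice."""
    # The description does not fix f; an additive update is assumed here.
    return previous_weight + step * evaluation_level

# Example: output data A1 has VA1 = 5 and the evaluation voice b2 has level 2,
# so the updated weighted value V'A1 becomes 7 under this assumed f.
print(update_weight(5.0, 2.0))   # 7.0
```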
Once the robot apparatus 1 receives an evaluation voice responding to output data, that is, once the robot apparatus 1 generated the output data in the predetermined time period before receiving the evaluation voice, the updating module 305 updates the weighted value of the output data based on the evaluation level of the evaluation voice. If the robot apparatus 1 does not acquire an evaluation level of an evaluation voice responding to the output data, the weighted value of the output data remains unchanged. In another exemplary embodiment, the judging module 302 directly determines whether the vocal input is a conversation voice or an evaluation voice, that is, judges whether the vocal input is from the evaluation level table 402. If the vocal input is from the evaluation level table 402, the vocal input is an evaluation voice. If the vocal input is not from the evaluation level table 402, the vocal input is a conversation voice.
If the robot apparatus 1 did not generate the output data, in step S130, the judging module 302 determines that the vocal input is a conversation voice. In step S132, the selection module 303 selects one of the output data corresponding to the conversation voice according to the weighted values of all the output data. In step S134, the output module 304 acquires the selected output data from the storage unit 40 and outputs it, the D/A converter 60 converts the selected output data into analog signals, the speaker 70 outputs the vocal output of the selected output data, and the output module 304 records the selected output data and the current time.
If the robot apparatus 1 generated the output data, in step S140, the judging module 302 judges whether the vocal input is from the evaluation level table 402. If the vocal input is not from the evaluation level table 402, the judging module 302 determines that the vocal input is a conversation voice, that is, the robot apparatus 1 receives another conversation voice from the user, and the procedure returns to step S130. If the vocal input is from the evaluation level table 402, in step S150, the judging module 302 determines that the vocal input is an evaluation voice responding to the output data. In step S160, the updating module 305 acquires the evaluation level corresponding to the evaluation voice. In step S170, the updating module 305 calculates the weighted value of the output data according to the acquired evaluation level and updates the weighted value. When the robot apparatus 1 receives a vocal input from the user again, the procedure starts again.
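Putting steps S120 through S170 together, a compact sketch of the whole procedure might look as follows; the class VocalInteraction, the dictionary layout of tables 401 and 402, and the additive weight update are assumptions carried over from the earlier sketches, not part of the described apparatus.

```python
import datetime as dt
import random
from typing import Dict, Optional, Tuple

PREDETERMINED_PERIOD = dt.timedelta(seconds=30)

class VocalInteraction:
    """Illustrative pipeline for steps S120 through S170."""

    def __init__(self,
                 output_table: Dict[str, Dict[str, float]],
                 evaluation_level_table: Dict[str, float]) -> None:
        self.output_table = output_table                        # table 401
        self.evaluation_level_table = evaluation_level_table    # table 402
        self.last_output: Optional[Tuple[str, str, dt.datetime]] = None

    def handle(self, vocal_input: str, now: dt.datetime) -> Optional[str]:
        # Step S120: was output data generated within the predetermined period?
        recently_output = (self.last_output is not None and
                           now - self.last_output[2] <= PREDETERMINED_PERIOD)
        # Step S140: is the vocal input listed in the evaluation level table?
        if recently_output and vocal_input in self.evaluation_level_table:
            # Steps S150 to S170: treat it as an evaluation voice and update
            # the weighted value of the previously selected output data.
            conversation, output, _ = self.last_output
            level = self.evaluation_level_table[vocal_input]
            self.output_table[conversation][output] += level    # assumed f(V, X)
            return None
        # Step S130: otherwise treat the vocal input as a conversation voice.
        candidates = self.output_table.get(vocal_input)
        if not candidates:
            return None
        # Step S132: weighted selection among the candidate output data.
        names = list(candidates)
        weights = [candidates[name] for name in names]
        choice = random.choices(names, weights=weights, k=1)[0]
        # Step S134: output the selected data and record it with the current time.
        self.last_output = (vocal_input, choice, now)
        return choice

# Usage sketch: a conversation voice "A" followed 10 seconds later by the
# evaluation voice "b2", which raises the weight of whichever output was chosen.
bot = VocalInteraction({"A": {"A1": 5.0, "A2": 7.0, "A3": 9.0}}, {"b2": 2.0})
t0 = dt.datetime(2023, 1, 1, 22, 20, 0)
print(bot.handle("A", t0))
bot.handle("b2", t0 + dt.timedelta(seconds=10))
print(bot.output_table)
```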
In addition, in another exemplary embodiment, after the recognizing module 301 recognizes the vocal input, the judging module 302 directly determines whether the vocal input is a conversation voice or an evaluation voice, that is, the judging module 302 judges whether the vocal input is from the evaluation level table 402. In other words, the method is performed without step S120.
It is understood that the invention may be embodied in other forms without departing from the spirit thereof. Thus, the present examples and embodiments are to be considered in all respects as illustrative and not restrictive, and the invention is not to be limited to the details given herein.
Number | Date | Country | Kind |
---|---|---|---|
200710124554.2 | Nov 2007 | CN | national |