The present invention relates to robot apparatuses and, more particularly, to a robot apparatus with a vocal interactive function and a vocal interactive method for the robot apparatus according to weighted values of all output data corresponding to a vocal input.
There are a variety of robots in the market today, such as electronic toys, electronic pets, and the like. Some robots output a relevant sound when detecting a predetermined sound from the ambient environment. However, when the predetermined sound is detected, the robot outputs only one predetermined kind of sound. Generally, before the robot is available for market distribution, manufacturers store predetermined input sounds, predetermined output sounds, and relationships between the input sounds and the output sounds in the robot apparatus. When detecting a sound from the ambient environment, the robot outputs an output sound according to the stored relationship between the input sound and the output sound. Consequently, the robot only produces one fixed output for one fixed input, making the robot repetitive, dull, and boring.
Accordingly, what is needed in the art is a robot apparatus that overcomes the aforementioned deficiencies.
A robot apparatus with a vocal interactive function is provided. The robot apparatus comprises a microphone, a storage unit, a recognizing module, a selecting module, an output module, an output-time updating module, and a weighted-value updating module. The microphone is configured for collecting a vocal input. The storage unit is configured for storing a plurality of output data, a last output time of each of the output data, and a weighted value of each of the output data, wherein the weighted value is inversely related to the last output time of the output data. The recognizing module is configured for recognizing the vocal input.
The selecting module is configured for acquiring all the output data corresponding to the vocal input in the storage unit and selecting one of the output data based on the weighted values of all the acquired output data. The output module is configured for outputting the selected output data. The output-time updating module is configured for updating the last output time of the selected output data. The weighted-value updating module is configured for calculating weighted values of all the output data corresponding to the vocal input according to the last output time, and for updating the weighted values of all the output data.
Other advantages and novel features will become more apparent from the following detailed description with reference to the attached drawings.
The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the robot apparatus. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
In the exemplary embodiment, the vocal interactive control unit 50 is configured for controlling the robot apparatus 1 to enter a vocal interactive mode or a silent mode. When the robot apparatus 1 is in the vocal interactive mode, the processing unit 30 controls the microphone 10 to detect and collect analog signals of a vocal input from the ambient environment. The A/D converter 20 converts the analog signals of the vocal input into digital signals. The processing unit 30 recognizes the digital signals of the vocal input and generates output data according to the vocal input.
When the robot apparatus 1 is in the silent mode, even if the microphone 10 detects the analog signals of the vocal input, the robot apparatus 1 does not output anything according to the vocal input. In another exemplary embodiment of the present invention, the robot apparatus 1 detects and collects the vocal input in real time and responds to the vocal input.
The storage unit 40 stores a plurality of output data and an output table 401. The output table 401 (see below for a sample table schema) includes a vocal input column, an output data column, a last output time column, and a weighted value column. The vocal input column records a plurality of vocal inputs, such as A, B, and the like. The output data column records a plurality of output data corresponding to the vocal inputs. For example, the output data corresponding to the vocal input A include A1, A2, A3, etc. The output data column further records output data corresponding to an undefined vocal input, that is, an input not recorded in the vocal input column. For example, the output data corresponding to the undefined vocal input include Z1, Z2, Z3, etc.
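A sample layout of the output table 401, reconstructed from the columns and example entries described herein (the particular rows shown are illustrative only), is:

Vocal input | Output data | Last output time | Weighted value
---|---|---|---
A | A1 | tA1 | WA1
A | A2 | tA2 | WA2
A | A3 | tA3 | WA3
B | B3 | tB3 | WB3
(undefined) | Z1 | tZ1 | WZ1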
The last output time column records the time at which each output datum was most recently output. For example, the last output times of the output data A1, A2, and A3 are tA1, tA2, and tA3, respectively. The last output time is formatted as, for example, XX hour:XX minute on XX month XX date, XXXX year. For example, the last output time tA1 of the output data A1 is 15:20 on May 10, 2007. The weighted value column records a weighted value assigned to each output datum. For example, the weighted value of the output data B3 is WB3. The weighted value is inversely related to the last output time of the output data; that is, the later the last output time is, the lower the weighted value is. For example, in an exemplary embodiment, a weighted value WA(X) of the output data A(X) is determined by the function WA(X) = C(tA1 + tA2 + tA3 + ... + tA(X−1))/tA(X), wherein A(X) represents one of the output data corresponding to the vocal input A and C represents a constant. For example, the weighted value WA1 corresponding to the last output time tA1 of 15:20 on May 10, 2007 is 7, and the weighted value WA2 corresponding to the last output time tA2 of 16:25 on May 10, 2007 is 5.
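As a non-limiting sketch, one simple way to realize a weighted value that falls as the last output time becomes more recent is to weight each output datum by the time elapsed since it was last output. This differs from the exact function WA(X) given above; the constant c, the use of POSIX timestamps, and the +1.0 floor are assumptions made purely for illustration.

```python
import time

def weighted_values(last_output_times, c=1.0):
    """Return one weight per output datum; a more recent last output time
    yields a lower weight. `last_output_times` holds POSIX timestamps,
    one per output datum (0.0 for data that were never output)."""
    now = time.time()
    # Longer idle time -> larger weight; the +1.0 keeps every weight positive.
    return [c * (now - t) + 1.0 for t in last_output_times]
```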
The weighted value can also be preconfigured according to a preference. The preference can be, for example, that of the dad, the mom, or the factory. For example, the weighted value of a more preferred output can be increased manually, and the weighted value of a less favored output can be decreased manually.
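As an illustration only (the preference profiles and scaling factors below are hypothetical, not part of the apparatus as claimed), such a preconfigured preference could be applied as a per-output multiplier on top of the stored weighted values:

```python
# Hypothetical preference profiles: each maps an output datum to a multiplier.
preferences = {"dad": {"A1": 1.5}, "mom": {"A2": 1.5}, "factory": {}}

def apply_preference(weights, profile):
    """weights: {output datum: weighted value}; profile: a key in preferences."""
    scale = preferences.get(profile, {})
    return {d: w * scale.get(d, 1.0) for d, w in weights.items()}
```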
The processing unit 30 includes a recognizing module 301, a selecting module 302, an output module 303, an output-time updating module 304, and a weighted-value updating module 305.
The recognizing module 301 is configured for recognizing the digital signals of the vocal input from the A/D converter 20. The selecting module 302 is configured for acquiring all the output data corresponding to the vocal input in the output table 401 and selecting one of the output data based on the weighted values of all the acquired output data. That is, the higher the weighted value of an acquired output datum, the higher its probability of being selected. For example, suppose the vocal input is A and the weighted values WA1, WA2, and WA3 of the output data A1, A2, and A3 are 5, 7, and 9, respectively; the selecting module 302 then selects the output data A3 because the output data A3 has the highest weighted value.
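A minimal sketch of such a weight-biased selection, assuming the candidates and their weighted values have already been read from the output table 401 (random.choices favors, but does not guarantee, the highest-weighted datum; a deterministic variant would simply take the maximum):

```python
import random

def select_output(candidates, weights):
    """Pick one output datum; a higher weighted value means a higher
    probability of being selected."""
    return random.choices(candidates, weights=weights, k=1)[0]

# With the example above, A3 (weight 9) is the most likely selection.
print(select_output(["A1", "A2", "A3"], [5, 7, 9]))
```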
The output module 303 is configured for acquiring the selected output data in the storage unit 40 and outputting the selected output data. The D/A converter 60 converts the selected output data into analog signals. The speaker 70 outputs a vocal output of the selected output data. The output-time updating module 304 is configured for updating the last output time of the selected output data in the output table 401, when the output module 303 outputs the selected output data. The weighted-value updating module 305 is configured for calculating weighted values of all the output data corresponding to the vocal input according to the last output time, and updating the weighted values of all the output data, when the output-time updating module 304 updates the last output time.
In step S140, the output module 303 acquires and outputs the selected output data in the storage unit 40, the D/A converter 60 converts the selected output data into the analog signals, and the speaker 70 outputs the vocal output of the selected output data. In step S150, the output-time updating module 304 updates the last output time of the selected output data. In step S160, the weighted-value updating module 305 calculates weighted values of all the output data corresponding to the vocal input according to the last output time, and updates the corresponding weighted values in the output table 401.
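A compact sketch of this select/output/update cycle, under the same simplifying assumptions as the earlier snippets (elapsed-time weighting, an in-memory dictionary standing in for the output table 401, and print() standing in for the D/A converter 60 and speaker 70):

```python
import random
import time

# Output table stand-in: vocal input -> {output datum: last output time}.
output_table = {"A": {"A1": 0.0, "A2": 0.0, "A3": 0.0}}

def respond(vocal_input, c=1.0):
    entries = output_table[vocal_input]
    now = time.time()
    candidates = list(entries)
    # Recompute weights from the stored last output times (elapsed-time sketch).
    weights = [c * (now - entries[d]) + 1.0 for d in candidates]
    selected = random.choices(candidates, weights=weights, k=1)[0]
    print("vocal output:", selected)   # step S140: output the selected datum
    entries[selected] = time.time()    # step S150: update its last output time
    # Step S160: on the next call, the refreshed time lowers this datum's
    # weight, so recently used outputs become less likely to repeat.
    return selected
```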
It is understood that the invention may be embodied in other forms without departing from the spirit thereof. Thus, the present examples and embodiments are to be considered in all respects as illustrative and not restrictive, and the invention is not to be limited to the details given herein.
Number | Date | Country | Kind
---|---|---|---
200710077338.7 | Sep 2007 | CN | national