The present invention relates to a food processor with phonetic recognition ability, and more particularly, to a food processor capable of identifying characters contained in a sound signal issued by a user and thus forming the characters on its food products accordingly.
In recent years, more and more speech/phonetic recognition systems have been widely applied in various technical fields, such as telephone voice systems, voice input devices, media interactive devices, and so on.
One such system is the multi-language speech recognition method and system disclosed in TW Pat. Publ. No. 574684. The aforesaid speech recognition method includes the steps of: receiving information reflecting the speech, determining at least one broad-class of the received information, classifying the received information based on the determined broad-class, selecting a model based on the classification of the received information, and recognizing the speech using the selected model and the received information. Thereby, the disadvantages of the match-trained Hidden Markov Model (HMM) can be improved; that is, although the parameters of a match-trained HMM are tuned to its matching channel environment so that it may recognize speech in that environment more accurately than a mix-trained HMM, the match-trained HMM may not recognize speech in a non-matching channel environment as well as the mix-trained HMM does.
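The broad-class model-selection flow recited above can be sketched in a few lines; the broad-class detector and the per-class recognizers below are hypothetical stand-ins for illustration only, not the patented implementation:

```python
def recognize_speech(features, broad_class_fn, models):
    """Sketch of broad-class model selection: determine the broad class
    of the received information, pick the model trained for that class,
    and recognize the speech with the selected model."""
    broad_class = broad_class_fn(features)   # determine at least one broad-class
    model = models[broad_class]              # select a model based on the class
    return model(features)                   # recognize using the selected model
```

In this sketch, `models` maps each broad class (for example, a channel type) to a recognizer trained in the matching environment, which is how the mismatch weakness of a single match-trained HMM is avoided.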
Another such speech/phonetic recognition system is the independent real-time speech recognition system disclosed in TW Pat. Publ. No. 219993. In the aforesaid system, a speech signal is first converted into an analog speech signal that is fed to an amplifier for amplification, and the amplified analog speech signal is then converted into a serial digital signal, and further into a parallel digital signal, by the use of an analog-to-digital converter. Thereafter, a digital processor performs a preprocessing operation, a feature extracting operation and a voice activity detection so as to obtain multi-level fixed-point linear predictive coefficients, which are stored during a training process to be used as reference samples, and which during recognition are measured by a symmetry-rectified dynamic programming matching algorithm and compared with the reference samples for obtaining a speech recognition result.
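The dynamic-programming matching step can be illustrated with a minimal dynamic time warping (DTW) sketch; the Euclidean frame distance and the reference-sample dictionary below are simplifying assumptions for illustration, not the patented symmetry-rectified algorithm or its fixed-point arithmetic:

```python
import math

def dtw_distance(a, b):
    """Dynamic-programming (DTW) distance between two sequences of
    feature-vector frames (e.g. linear predictive coefficients)."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # frame-to-frame distance
            D[i][j] = cost + min(D[i - 1][j],      # step in sequence a
                                 D[i][j - 1],      # step in sequence b
                                 D[i - 1][j - 1])  # step in both (match)
    return D[n][m]

def recognize(utterance, references):
    """Compare the utterance against every stored reference sample and
    return the label of the closest one."""
    return min(references, key=lambda label: dtw_distance(utterance, references[label]))
```

The training process would populate `references` with labeled coefficient sequences; recognition is then a nearest-neighbor search under the DTW distance.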
Moreover, there is an emotion-based phonetic recognition system disclosed in TW Pat. Publ. No. I269192, which includes a classification algorithm and an emotion classification module established based upon a field-independent emotion database containing emotions responding to specific phonetic notations. The aforesaid emotion classification module is embedded with an automatic rule generator capable of performing data-mining centered on phonetic notations, which is able to map a speech into a vector space according to the emotion-inducing elements concluded from emotion psychology and thereby perform a training process for classifying the emotion of the speech. Accordingly, the aforesaid emotion-based phonetic recognition system is able to effectively improve the emotion communication ability of a human-machine interface, as one of the interesting challenges in the community of human-computer interaction today is how to make computers more human-like for intelligent user interfaces.
Furthermore, there is a method and system for phonetic recognition disclosed in TW Pat. Publ. No. 508564. In the aforesaid method and system, the phonetic sound can be analyzed with respect to its timbre characteristics so as to recognize the user's timbre, while variation in the volume of the phonetic sound can be analyzed so as to tell the user's emotional condition.
In addition to the aforesaid patents, there are many U.S. patents relating to emotion and phonetic recognition that are able to recognize a human emotion through the detection of pulse, heartbeat or respiration rate, etc., and are applicable to lie detectors.
However, among those related patents and the consumer products currently available on the market, there is no food processor designed with a phonetic/speech recognition function for facilitating a user to interact with the food processor through voice communication and thus direct the operation of the food processor.
In view of the disadvantages of the prior art, the object of the present invention is to provide a food processor with phonetic recognition ability capable of identifying characters contained in a sound signal issued by a user and thus forming the characters on its food products accordingly.
To achieve the above object, the present invention provides a food processor with phonetic recognition ability, comprising: a phonetic recognition module, capable of receiving sound signals so as to identify a content of characters, phrases, and sentences contained in the received sound signals; and a food processing module, capable of producing food products containing characters, phrases, and sentences corresponding to the phonetic recognition result of the phonetic recognition module.
Further scope of applicability of the present application will become more apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating several embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
The present invention will become more fully understood from the detailed description given herein below and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present invention and wherein:
For the esteemed members of the reviewing committee to further understand and recognize the fulfilled functions and structural characteristics of the invention, several exemplary embodiments together with detailed descriptions are presented as follows.
Please refer to
As shown in
With the aforesaid food processor 10, there can be various types of characters capable of being formed on the food product as the embodiments shown in
Taking those embodiments shown in
Please refer to
For instance, when a user reads out the sentence “I love you” to the food processor 20 in a happy mood, his/her tone should be soft and affective, so that as the sentence “I love you” is recognized by the phonetic recognition module 21, the happy emotion is simultaneously detected by the emotion recognition module 22; the emotion recognition module 22 will thus direct the food supplying unit 221 to provide a sweet chocolate to the character generating unit 222 for having the eight characters “I”, “L”, “O”, “V”, “E”, “Y”, “O”, and “U” formed therein as a sentence, or for shaping the sweet chocolate into the form of the sentence “I love you”. However, if the user reads out another sentence, “I hate you”, hatefully to the food processor 20, his/her tone should be rigid and relentless, so that as the eight characters “I”, “H”, “A”, “T”, “E”, “Y”, “O”, and “U” are recognized by the phonetic recognition module 21, the gloomy emotion is simultaneously detected by the emotion recognition module 22; the emotion recognition module 22 will thus direct the food supplying unit 221 to provide a bitter chocolate to the character generating unit 222 for having the sentence “I hate you” formed therein, or for shaping the bitter chocolate into the form of the sentence “I hate you”. Thus, different users are able to read out different words or sentences in different emotions so as to obtain food products not only having different characters formed therein, but also with different tastes corresponding to their current moods. Similarly, in addition to the ability to form a word or characters on its food products, the food processor of the invention is capable of forming phrases or sentences on the food products while at the same time recognizing the emotion implied in the tone of the speech; moreover, the emotion recognition ability of the food processor 20 of the invention is not limited to English, as it can also recognize other languages, such as Mandarin.
For instance, the operation of the food processor 20 can be described as follows: at first, a user reads out loud a four-character sentence in Mandarin to the food processor 20, at which the voice of the user is received by the phonetic recognition module 21 and the four characters are recognized thereby, while the gloomy emotion is simultaneously detected by the emotion recognition module 22; the emotion recognition module 22 will thus direct the food supplying unit 221 to provide a bitter chocolate to the character generating unit 222 for having the four characters formed therein, or for shaping the bitter chocolate into the four characters. Thereby, different Mandarin-speaking users are able to read out different words or sentences in different emotions so as to obtain food products not only having different characters formed therein, but also with different tastes corresponding to their current moods. It is noted that there is no restriction relating to the ordering of recognition by the phonetic recognition module 21 and the emotion recognition module 22. In the aforesaid embodiments, the phonetic recognition module 21 is first activated for recognizing characters before the emotion recognition module 22 is activated for recognizing emotions; however, the ordering can be reversed, or the two recognitions can even be performed simultaneously.
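The emotion-to-taste selection described in the embodiments above can be summarized with a small sketch; the mapping table, default taste, and function names are hypothetical illustrations for clarity, not part of the claimed apparatus:

```python
# Hypothetical mapping from the emotion detected by the emotion
# recognition module to the taste of chocolate the food supplying
# unit provides to the character generating unit.
EMOTION_TO_TASTE = {
    "happy": "sweet chocolate",    # soft, affective tone
    "gloomy": "bitter chocolate",  # rigid, relentless tone
}

def prepare_food(sentence, emotion):
    """Return the taste to supply and the characters to form on it."""
    taste = EMOTION_TO_TASTE.get(emotion, "plain chocolate")
    characters = [c for c in sentence.upper() if c.isalpha()]
    return taste, characters
```

As noted above, the two recognition results are independent, so the character recognition and the emotion lookup could run in either order or in parallel.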
When the information of the food product displayed on the displaying unit 33 is confirmed by the user through the input unit 34, the food processing module 38 will be activated to produce the food product accordingly and then send the resulting food product to the exit 35, where it can be retrieved by the user; the food product is a chocolate having the sentence “I LOVE YOU” formed thereon, as the chocolate 60E shown in
To sum up, the present invention provides a food processor with phonetic recognition ability capable of identifying characters and emotions contained in a sound signal issued by a user and thus forming the characters, phrases, or sentences on its food products with corresponding tastes accordingly. Whereas conventional automatic food vendors are only capable of providing standardized food products to every user, the food processor of the invention is able to provide custom-made food products of different appearances and tastes according to user requirements and corresponding to the users' moods, so that the users are able to interact with the food processor of the invention, which can greatly encourage the interest and willingness of consumers.
The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.
Number | Date | Country | Kind |
---|---|---|---|
97141323 A | Oct 2008 | TW | national |
Number | Name | Date | Kind |
---|---|---|---|
5960399 | Barclay et al. | Sep 1999 | A |
6249710 | Drucker et al. | Jun 2001 | B1 |
6282507 | Horiguchi et al. | Aug 2001 | B1 |
6408272 | White et al. | Jun 2002 | B1 |
6842510 | Sakamoto | Jan 2005 | B2 |
7184960 | Deisher et al. | Feb 2007 | B2 |
7808368 | Ebrom et al. | Oct 2010 | B2 |
8078472 | Resch et al. | Dec 2011 | B2 |
20020111794 | Yamamoto et al. | Aug 2002 | A1 |
20050015256 | Kargman | Jan 2005 | A1 |
20060129408 | Shen et al. | Jun 2006 | A1 |
20090292541 | Daya et al. | Nov 2009 | A1 |
20100070276 | Wasserblat et al. | Mar 2010 | A1 |
20110029314 | Lin et al. | Feb 2011 | A1 |
Number | Date | Country |
---|---|---|
1389852 | Jan 2003 | CN |
1677487 | Oct 2005 | CN |
1828682 | Sep 2006 | CN |
102222164 | Oct 2011 | CN |
64-085760 | Mar 1989 | JP |
04-189141 | Jul 1992 | JP |
2003108183 | Apr 2003 | JP |
200310112 | Jul 2003 | JP |
2003294235 | Oct 2003 | JP |
2005254495 | Sep 2005 | JP |
2005261286 | Sep 2005 | JP |
2008228722 | Oct 2008 | JP |
20040038419 | May 2004 | KR |
20080086791 | Sep 2008 | KR |
20100001928 | Jan 2010 | KR |
219993 | Feb 1994 | TW |
508564 | Nov 2002 | TW |
517221 | Jan 2003 | TW |
574684 | Feb 2004 | TW |
200620242 | Jun 2006 | TW |
I269192 | Dec 2006 | TW |
200837716 | Sep 2008 | TW |
Entry |
---|
State Intellectual Property Office of the People's Republic of China, “Office Action”, Aug. 22, 2012, China. |
Japan Patent Office, “Office Action”, Jan. 10, 2012, Japan. |
Raul Fernandez, A Computational Model for the Automatic Recognition of Affect in Speech, Doctor of Philosophy thesis, Feb. 2004, pp. 1-284, Massachusetts Institute of Technology. |
Intellectual Property Office, Ministry of Economic Affairs, R.O.C., “Office Action”, Apr. 10, 2012, Taiwan. |
Number | Date | Country | |
---|---|---|---|
20100104680 A1 | Apr 2010 | US |