The invention relates to a method and an apparatus for voice control of a device appertaining to consumer electronics.
The operator control of devices appertaining to consumer electronics, such as television sets or video recorders for example, can be simplified for the user by voice control. For instance, it is known to use voice control for changing device settings, executing operator-control functions, such as for example choice of a station, or performing programming operations.
For this purpose, the operator-control commands spoken by the user are initially detected as sound signals, converted into electric signals and digitized. The digitized voice signals are then fed to a speech recognition system. The speech recognition is usually based here on an acoustic model and a speech model. The acoustic model draws on a large number of speech patterns and uses mathematical algorithms to indicate which words acoustically best match a spoken word. The speech model, in turn, is based on an analysis of a large number of document samples that establishes in which contexts, and how often, certain words are normally used.
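The interplay of the two models can be pictured as a noisy-channel decision: the recognizer picks the word that maximizes the product of an acoustic match score and a language-model prior. A minimal sketch, in which all words and scores are hypothetical:

```python
def best_word(acoustic_scores, lm_probs):
    """Pick the word maximizing P(sound | word) * P(word).

    acoustic_scores: word -> acoustic match score (acoustic model)
    lm_probs: word -> prior usage probability (speech model)
    """
    return max(acoustic_scores,
               key=lambda w: acoustic_scores[w] * lm_probs.get(w, 1e-9))
```

Homophones with near-identical acoustic scores ("to", "two", "too") are then resolved by how often each word occurs in the analyzed document samples.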
Current systems provide that the operator-control commands are spoken into a microphone integrated in a remote control unit. Deterioration of the recognition rate caused by disturbing background noises is prevented by holding the remote control unit directly in front of the user's mouth. However, as with conventional remote control units, the user still has to pick up the remote control unit. Convenience can be enhanced if, for speech input, one or more microphones are provided in the device appertaining to consumer electronics, so that the user can carry out operator control from any desired place in the room without taking along a remote control unit. The required suppression of background disturbances can in this case take place by the use of special microphone arrays and methods such as “statistical beam forming” or “blind source separation”. However, the device being operated cannot determine which speech inputs originate from the current user. It is therefore not possible to respond only to that user's operator-control commands while ignoring utterances by other persons.
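The microphone-array suppression mentioned above can be illustrated with the simplest beamformer, delay and sum: each channel is advanced by its steering delay so that sound from the user's direction adds coherently while off-axis noise averages out. A minimal numpy sketch, assuming integer sample delays; real statistical beamformers adapt their weights rather than using fixed delays:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Steer a microphone array by aligning and averaging its channels.

    signals: array of shape (n_mics, n_samples)
    delays_samples: per-microphone steering delay in whole samples
    """
    n_mics = len(signals)
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -d)  # advance the channel by its delay
    return out / n_mics
```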
A further attempted way of enhancing user convenience is the automatic buffer storage of television programs on hard disks integrated in televisions or set-top boxes. In this case, after an analysis of the viewing habits, the programs or types of programs which the user has previously chosen regularly are automatically recorded. If the user then switches on his television at any time, he can, with a certain degree of probability, view his favourite programs. However, the analysis is impaired by the fact that it is not possible to distinguish which user operates the television at which time.
The invention is based on the object of specifying a method for voice control which avoids the aforementioned disadvantages. This object is achieved by the method specified in claim 1.
In principle, the method for the voice control of a device appertaining to consumer electronics consists in converting a user's speech inputs into digitized voice signals. From the digitized voice signals, first features, which are characteristic of individual sounds of the speech, and thus permit recognition of the spoken sounds, are extracted. Furthermore, second features, which permit a characterization of the voice of the respective user and are used for distinguishing between the speech inputs of different users, are extracted from the digitized voice signals. After a voice command from a first user, further voice commands are accepted only from this first user, by testing the further speech inputs for characteristic voice features and only accepting them if they can be assigned to the same speaker on the basis of these features.
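The acceptance rule above can be sketched as a simple gate over the second features. Here those features are represented as a voiceprint vector, and the cosine-similarity threshold is an illustrative assumption, not a value from the method itself:

```python
import numpy as np

class SpeakerGate:
    """Accept further voice commands only from the first user who spoke."""

    def __init__(self, threshold=0.8):
        self.enrolled = None          # voiceprint of the first user
        self.threshold = threshold    # assumed similarity cutoff

    def accept(self, second_features):
        v = np.asarray(second_features, dtype=float)
        v = v / np.linalg.norm(v)
        if self.enrolled is None:     # first speech input enrolls the user
            self.enrolled = v
            return True
        return float(self.enrolled @ v) >= self.threshold  # cosine match
```

Inputs whose voiceprint does not match the enrolled user fall below the threshold and are ignored, mirroring the case in which only one of a number of users holds the remote control unit.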
It can consequently be ensured that, in given time periods, only one of a number of simultaneous users can operate the device concerned by voice control—similarly to the case in which only one of a number of users has a matching remote control unit.
In particular, it may be advantageous for a voice command for switching on the device to be accepted from any first user and, after that, only voice command inputs from this first user to be accepted.
A voice command for switching off the device may preferably be accepted only from the first user, it being possible after switching off the device for voice commands to be accepted again from any user.
For certain applications, however, it may also be advantageous for a voice command for switching off the device to be accepted from any user.
Similarly, an operator-control command which, after its input by the first user, allows voice commands from a second user to be accepted may be advantageously provided. This makes it possible to pass on operator-control authority in a way corresponding to the passing on of a remote control unit from a first user to a second user.
It may be particularly advantageous for an identification of the various users to take place in order to perform an analysis of the viewing habits and create user profiles of the various users from this analysis.
A user profile obtained in this way is preferably used in a buffer storage of television programs in order to permit separate buffer storage of preferred programs for different users.
Similarly, the user profile may be used to make proposals for programs to be viewed, suited to the viewing habits of the various users.
Exemplary embodiments of the invention are described on the basis of the drawings, in which
The sequence of a first exemplary embodiment is schematically represented in
Firstly, in a first method step 1, the sound signals are converted into electric signals, to produce an analogue voice signal, which in turn is converted into a digital voice signal.
Then, in a next method step 2, first features, which are as typical as possible of the individual sounds of the speech and are robust with respect to disturbances and variations in pronunciation, are obtained from the digitized acoustic signal. Similarly, in method step 3, second features, which permit a characterization of the voice of the respective user and are used for distinguishing between the speech inputs of various users, are extracted from the digitized acoustic signal. In the exemplary embodiment presented, this extraction of features takes place separately for the speech recognition unit and the speaker recognition unit, but may also take place jointly.
On the basis of the first features, the actual speech recognition then takes place in method step 4. In method step 5, a speaker recognition is carried out with the aid of the second features, in order to identify the user speaking at the time. Similarly, however, only the second features may be stored, to allow differentiation from other users without an identification of the individual users taking place.
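One way to picture the two feature streams of method steps 2 and 3: per-frame spectra vary from sound to sound (first features), while their long-term average smooths out phonetic content and crudely characterizes the voice (second features). This is an illustrative stand-in, not the patent's specific feature set:

```python
import numpy as np

def extract_features(x, frame=256, hop=128):
    """Derive both feature sets from one digitized voice signal."""
    frames = [x[i:i + frame] * np.hanning(frame)
              for i in range(0, len(x) - frame + 1, hop)]
    spectra = np.array([np.log1p(np.abs(np.fft.rfft(f))) for f in frames])
    first = spectra                # per-sound features -> speech recognition
    second = spectra.mean(axis=0)  # long-term average  -> speaker recognition
    return first, second
```

Both sets come from the same digitized signal, which is why the extraction can take place separately or jointly, as noted above.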
In method step 6, it is then checked whether the television has already been switched on. If this is the case, method steps 7 and 8 are executed, otherwise method steps 9 and 10. In the event that the television has not yet been switched on, it is next checked in method step 9 whether a switch-on command, such as for example “on” or “television on”, has been given. If this is the case, in method step 10 the television is switched on and the user from whom the input originates is noted. If, instead of an identification, only a distinction between different users takes place, the second features, which characterize the current user, are correspondingly stored. Subsequently, in a way similar to that for the case in which no switch-on command had been given in method step 9, a return is made to method step 1.
In the case of an already switched-on television, method step 6 is followed by method step 7. In this step, it is checked whether the speech input was by the user already previously noted in method step 10. If this is the case, the input command for controlling the voice-controlled system is used in method step 8, for example for menu control or navigation. Subsequently, in a way similar to that for the case in which a change among the users was established in method step 7, a return is made to method step 1.
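Method steps 6 to 10 amount to a small state machine: the device ignores everything until a switch-on command, then binds itself to the noted user. A sketch, in which the command strings and return values are illustrative:

```python
class VoiceControlState:
    """Switch-on and ownership logic of method steps 6-10."""

    SWITCH_ON = {"on", "television on"}

    def __init__(self):
        self.switched_on = False
        self.owner = None                  # noted user (or stored features)

    def handle(self, command, speaker):
        if not self.switched_on:           # step 6: device still off
            if command in self.SWITCH_ON:  # step 9: switch-on command?
                self.switched_on = True
                self.owner = speaker       # step 10: note this user
                return "switched on"
            return None                    # ignored; back to step 1
        if speaker == self.owner:          # step 7: same user as noted?
            return f"execute: {command}"   # step 8: control the system
        return None                        # other users are ignored
```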
Various modifications of this exemplary embodiment are conceivable. For instance, a speech input for switching off the device may also be accepted from any user. Similarly, an operator-control command which, when input by the first user, also allows speech inputs of a second user or further users to be accepted in future may be provided.
The sequence of a second exemplary embodiment is schematically represented in
Method steps 1 to 5 coincide here with those of the exemplary embodiment from
The user profiles determined by the use of the speech recognition can be used in particular in the buffer storage of TV programs on hard disks or similar storage media which are provided in televisions and set-top boxes. The accuracy of the analysis of the viewing habits is significantly increased in this case by the recognition of the respective user. For the example of a family in which the children spend significantly more time in front of the television than the parents, the hard disk is therefore no longer filled only with children's programs. Rather, the additional speaker recognition allows the viewing habit analysis to be created separately for a number of members of the family. The limited buffer memory space of the hard disk can then be divided among the individual users in accordance with a specific key, so that each user is given his predetermined share of buffer-stored television programs.
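Dividing the buffer space “in accordance with a specific key” can be read as allocating weighted shares per recognized user. A minimal sketch, in which the weights and user names are hypothetical:

```python
def divide_buffer(total_gb, shares):
    """Split recording space among users according to a weighting key."""
    weight_sum = sum(shares.values())
    return {user: total_gb * w / weight_sum for user, w in shares.items()}
```

With equal weights, each family member keeps the same share of the disk for their own buffered programs, regardless of who watches more.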
Similarly, user profiles determined by the use of speech recognition can also be used for the recording of radio programs or other transmitted data.
For the detection of the voice signals, a single microphone or else a microphone array comprising two or more microphones may be provided. The microphone array may, for example, be integrated in a television receiver. The microphones convert the detected sound signals into electric signals, which are amplified by amplifiers, converted by AD converters into digital signals and then fed to a signal processing unit. The latter can take into account the respective place where the user is located by a different scaling or processing of the detected sound signals. Furthermore, a correction of the microphone signals with respect to the sound signals emitted from the loudspeakers may also take place. The signal conditioned in this way is then fed to the speech recognition unit and speaker recognition unit, it being possible for algorithms or hardware units to be configured separately or else jointly. The commands determined and the identity of the user are then finally fed to a system manager for controlling the system.
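The correction of the microphone signals with respect to the loudspeaker output can be sketched, in its simplest fixed-gain form, as subtracting the known playback signal. The coupling gain here is an assumed constant; a real system would estimate it adaptively, for example with an NLMS echo canceller:

```python
import numpy as np

def remove_playback(mic, loudspeaker, coupling_gain):
    """Subtract the device's own audio before recognition."""
    return mic - coupling_gain * loudspeaker
```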
The invention may be used for the voice remote control of a wide variety of devices appertaining to consumer electronics, such as for example TV sets, video recorders, DVD players, satellite receivers, combined TV-video systems, audio equipment or complete audio systems.
Number | Date | Country | Kind |
---|---|---|---|
100 46 561 | Sep 2000 | DE | national |
Number | Name | Date | Kind |
---|---|---|---|
4757525 | Matthews et al. | Jul 1988 | A |
4866777 | Mulla et al. | Sep 1989 | A |
5499288 | Hunt et al. | Mar 1996 | A |
5623539 | Bassenyemukasa et al. | Apr 1997 | A |
5752231 | Gammel et al. | May 1998 | A |
5774858 | Taubkin et al. | Jun 1998 | A |
5774859 | Houser et al. | Jun 1998 | A |
5835894 | Adcock et al. | Nov 1998 | A |
5897616 | Kanevsky et al. | Apr 1999 | A |
5907326 | Atkin et al. | May 1999 | A |
5915001 | Uppaluru | Jun 1999 | A |
5946653 | Campbell et al. | Aug 1999 | A |
6006175 | Holzrichter | Dec 1999 | A |
6167251 | Segal et al. | Dec 2000 | A |
6192255 | Lewis et al. | Feb 2001 | B1 |
6400996 | Hoffberg et al. | Jun 2002 | B1 |
6421453 | Kanevsky et al. | Jul 2002 | B1 |
6498970 | Colmenarez et al. | Dec 2002 | B1 |
6584439 | Geilhufe et al. | Jun 2003 | B1 |
6601762 | Piotrowski | Aug 2003 | B1 |
6654721 | Handelman | Nov 2003 | B1 |
6754629 | Qi et al. | Jun 2004 | B1 |
6785647 | Hutchison | Aug 2004 | B1 |
7047196 | Calderone et al. | May 2006 | B1 |
20030040917 | Fiedler | Feb 2003 | A1 |
20030046083 | Devinney et al. | Mar 2003 | A1 |
Number | Date | Country |
---|---|---|
WO 9518441 | Jul 1995 | WO |
Number | Date | Country
---|---|---
20020035477 A1 | Mar 2002 | US