The present invention relates to a wireless communication system comprising a transmission unit comprising a microphone arrangement having at least two spaced-apart microphones, a first ear unit to be worn at the right side of a user's head and a second ear unit to be worn at the left side of the user's head, with each ear unit comprising a receiver unit for receiving audio signals transmitted from the transmission unit via a wireless audio link and means for stimulating the user's hearing.
In such systems, wherein the receiver unit usually is worn at ear level, the wireless audio link typically is an FM radio link. The benefit of such systems is that sound captured by a remote microphone at the transmission unit can be presented at a high sound pressure level and a good signal-to-noise ratio (SNR) to the hearing of the user wearing the receiver unit at his ear(s).
According to one typical application of such wireless audio systems, the stimulating means is a loudspeaker which is part of the receiver unit or is connected thereto. Such systems are particularly helpful in teaching, e.g., (a) normal-hearing children suffering from auditory processing disorders (APD), (b) children suffering from a unilateral hearing loss (one deteriorated ear), or (c) children with a mild hearing loss. In these applications the teacher's voice is captured by the microphone of the transmission unit, and the corresponding audio signals are transmitted to and reproduced by the receiver unit worn by the child, so that the teacher's voice can be heard by the child at an enhanced level, in particular with respect to the background noise level prevailing in the classroom. It is well known that presentation of the teacher's voice at such an enhanced level supports the child in listening to the teacher.
According to another typical application of wireless audio systems the receiver unit is connected to or integrated into a hearing instrument, such as a hearing aid. The benefit of such systems is that the microphone of the hearing instrument can be supplemented with or replaced by the remote microphone which produces audio signals which are transmitted wirelessly to the FM receiver and thus to the hearing instrument. FM systems have been standard equipment for children with hearing loss (wearing hearing aids) and deaf children (implanted with a cochlear implant) in educational settings for many years.
Hearing-impaired adults are also increasingly using FM systems. They typically use a sophisticated transmitter which can (a) be pointed at the audio source of interest (e.g. at cocktail parties), (b) be put on a table (e.g. in a restaurant or a business meeting), or (c) be worn around the neck of a partner/speaker, together with receivers that are connected to or integrated into their hearing aids. Some transmitters even have an integrated Bluetooth module, giving the hearing-impaired adult the possibility to connect wirelessly to devices such as cell phones, laptops, etc.
The merit of wireless audio systems lies in the fact that a microphone placed a few inches from the mouth of a person speaking receives speech at a much higher level than one placed several feet away. This increase in speech level corresponds to an increase in signal-to-noise ratio (SNR) due to the direct wireless connection to the listener's amplification system. The resulting improvements of signal level and SNR in the listener's ear are recognized as the primary benefits of FM radio systems, as hearing-impaired individuals are at a significant disadvantage when processing signals with a poor acoustical SNR.
CA 2 422 449 A2 relates to a communication system comprising an FM receiver for a hearing aid, wherein audio signals may be transmitted from a plurality of transmitters via an analog FM audio link and wherein, in addition, the transmitters may transmit configuration parameters for adjusting the FM receiver via a separate digital control channel which may use FSK (Frequency Shift Keying) modulation.
EP 1 531 650 A2 relates to a communication system comprising a transmission unit having two spaced-apart microphones for generating an audio signal which is transmitted as a stereo audio signal via two wireless channels having different frequencies to two hearing aids worn at a user's right ear and left ear, respectively, with one of the channels being received and reproduced by one of the hearing aids and with the other channel being received and reproduced by the other hearing aid. It is also mentioned that these two wireless transmission channels may be used for multi-language transmission.
DE 298 12 022 U1 relates to a wireless audio system comprising a body-worn transmission unit comprising a stereo microphone and a receiver device which could be a headset or a pair of earpieces. It is mentioned that means are provided for compensating the spatial distance between the microphone and the earpiece; however, no example is given of how such means could be realized. Further, it is stipulated as an object that such compensation means should be capable of providing the earpiece with electrical signals in such a manner that the spatial information is conveyed to the earpiece, with no solution to this object being described. WO 97/14268 relates to a binaural hearing aid system in which audio signals are exchanged between the two hearing aids and in which stereo signals are processed.
Usually in FM systems some audio signal processing, in particular acoustic beam forming, takes place in the transmission unit prior to transmitting the audio signals to the ear units, with typically the same (mono) signal being transmitted to both ear units. As a consequence, the user usually does not have any information as to where the FM audio signal is originating from, i.e. there is no spatial information comprised in the received audio signal. Further, the user usually has no possibility to influence the audio signal processing scheme applied in the transmission unit.
It is an object of the invention to provide for a wireless communication system which enables the presence of spatial information in the audio signals received at the ear units and which enables particularly flexible processing of the audio signals captured by the transmission unit. It is a further object to provide for a corresponding wireless communication method.
This object is achieved by a communication system as defined in claim 1 and a communication method as defined in claim 34, respectively. The invention is beneficial in that, by dedicating a separate audio channel to each microphone of the transmission unit, by receiving both of the audio signal channels in at least one of the ear units and by generating processed audio signals in at least one of the ear units taking into account the audio signals received via the two audio signal channels, on the one hand stereo signals including spatial information are available at the ear level and on the other hand the audio signal processing can be individually influenced at the ear level, either manually or automatically. For example, switching between different modes of acoustic beam forming and other audio signal processing can be done by the user. Further, system complexity is reduced, since in most cases powerful digital signal processors (DSP) are present anyway at the ear level. By performing the audio signal processing of the microphone signals of the transmission unit there, the DSP requirements for the transmission unit can be greatly reduced.
Preferred embodiments of the invention are defined in the dependent claims. Examples of the invention will be described by reference to the attached drawings, wherein:
The hearing instrument 16 usually will be a hearing aid, such as of the BTE (Behind The Ear)-type, the ITE (In The Ear)-type or the CIC (Completely In the Canal)-type. Typically, the hearing instrument 16 comprises one or more microphones 20, a central unit 22 for performing audio signal processing and for controlling the hearing instrument 16, a power amplifier 24 and a loudspeaker 26.
The transmission unit 10 comprises at least two spaced-apart microphones M1 and M2 for capturing audio signals A1 and A2, respectively, which are supplied to an audio signal processing unit 28, which may generate pre-processed audio data B1 and B2, which are supplied to a transceiver unit 30 connected to an antenna 32 for transmitting the pre-processed audio data B1 and B2 as essentially separate audio channels via a wireless link 34 to each of the receiver units 14. The audio signal processing unit 28 is realized by a DSP.
The transmission unit 10 is designed in such a manner that a separate audio signal channel is dedicated to each of the microphones M1, M2, i.e. it includes at least a first audio channel and a second audio channel, with both channels being transmitted via the audio link 34 in such a manner that they are received separately at each of the receiver units 14.
Usually the wireless link 34 will be a radio frequency link, for example, an analog frequency modulation (FM) link or a digital link. In an analog FM link one of the side-bands carries one of the audio channels and the other side-band carries the other audio channel. If the link is digital it may use, for example, GFSK modulation. The digital link may use, for example, the 2.4 GHz ISM band including frequency hopping. In the digital case the distinction between the two audio channels is realized by a corresponding communication protocol by which data concerning the right channel are transmitted in data packets distinguished from the data packets of the left channel.
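For illustration only, the channel separation on such a digital link could be realized by tagging each data packet with a channel identifier, along the following lines (a minimal sketch in Python; the packet fields, field sizes and names are assumptions, not taken from the described protocol):

```python
import struct

# Hypothetical packet layout: 1 byte channel ID, 2 bytes sequence number,
# followed by 16-bit PCM samples (all field sizes are illustrative assumptions).
CHANNEL_RIGHT = 0x01  # audio channel dedicated to microphone M1
CHANNEL_LEFT = 0x02   # audio channel dedicated to microphone M2

def pack_audio_packet(channel_id, seq_no, samples):
    """Build one audio data packet for the digital link (illustrative only)."""
    header = struct.pack("<BH", channel_id, seq_no & 0xFFFF)
    payload = struct.pack("<%dh" % len(samples), *samples)
    return header + payload

def unpack_audio_packet(packet):
    """Split a received packet into (channel_id, seq_no, samples)."""
    channel_id, seq_no = struct.unpack_from("<BH", packet, 0)
    samples = struct.unpack_from("<%dh" % ((len(packet) - 3) // 2), packet, 3)
    return channel_id, seq_no, list(samples)

# A receiver unit can sort incoming packets by channel ID, e.g. keep packets
# tagged CHANNEL_RIGHT as audio signals C1 and CHANNEL_LEFT as C2.
```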
In some cases audio signal processing of the audio signals A1 and A2 in the transmission unit 10 may be restricted to the absolute minimum signal processing necessary for transmission via the audio link 34, such as data reduction, coding and decoding (codec), so that each of the audio signal channels of the link 34 corresponds to the audio signals as captured by one of the microphones M1, M2. In other cases there may be some pre-processing of the microphone signals A1 and A2 taking into account to some extent the audio signals A2, A1 captured by the other one of the microphones M2, M1. However, there will be no significant mixing of the two channels prior to transmission, i.e. one of the channels will transmit primarily the audio signals as captured by the microphone M1 and the other channel will transmit primarily audio signals as captured by the microphone M2.
Such essentially unmixed transmitted audio signals are considered as “raw” audio signals.
Each of the receiver units 14 comprises an antenna 36, a transceiver unit 38, a digital signal processing unit 40 and optionally one or more microphones (labeled M4 in the ear unit 12R and M5 in the ear unit 12L). Preferably, the hardware of both receiver units 14 is identical, and it is decided by parameterization and/or software whether the respective receiver unit 14 belongs to the right ear unit 12R or to the left ear unit 12L, i.e. whether it will supply primarily the right audio signal channel or the left audio signal channel as the output D1 or D2, respectively, to the loudspeaker 18 or the hearing instrument 16.
In the simplest case the link 34 is unidirectional, with the transceiver unit 30 serving as transmitter only and the transceiver unit 38 serving as a receiver only.
Each of the receiver units 14, i.e. each of the transceiver units 38, is capable of receiving both audio signal channels of the wireless link 34. Each transceiver unit 38 supplies the signals received via the first audio signal channel as audio signals C1 and the signals received via the second audio signal channel as audio signals C2 to the audio signal processing unit 40, i.e. the audio signals C1 correspond essentially to the (pre-processed) audio signals B1, and the audio signals C2 correspond essentially to the (pre-processed) audio signals B2. In the audio signal processing unit 40 the received audio signals C1 and C2 will be processed in order to generate processed audio signals which take into account both the audio signals C1 and C2. For each of the receiver units 14 these processed audio signals are provided as a single channel output (in
If ear-level microphones M4 and M5 are provided, in most cases the audio signals captured by these microphones M4, M5 will be used as a further input to the audio signal processing unit 40 and will be taken into account when generating the processed audio signals D1/D2 from the received audio signals C1 and C2.
According to one embodiment the processed audio signals D1/D2 are supplied to the loudspeaker 18 for being reproduced to the respective ear of the user. According to another embodiment the processed audio signals D1/D2 are supplied as input to the hearing instrument 16. To this end, the processed audio signals D1/D2 may be supplied to a separate audio input of the audio signal processing unit 22, which also receives the audio signals captured by the microphones 20 of the hearing instrument 16. Alternatively, the processed audio signals D1/D2 may be supplied to an audio input which is connected in parallel to one of the microphones 20 of the hearing instrument 16 (see dashed lines in
The communication system may also comprise a remote control 42 for allowing manual control of the ear units 12R, 12L or the transmission unit 10 by the user. Such a remote control 42 may comprise a first control element 44 and a second control element 46 to be operated manually by the user, a central unit 48, a transceiver 50 and an antenna 52 in order to transmit control commands via a wireless link 54 to the receiver units 14 (control commands supplied to the processing units 40 are indicated by dashed lines) or the transmission unit 10. The link 54 may use the same channel as the link 34, in particular if the link 34 is digital. Alternatively, the link 54 may be an inductive link, such as an FSK-modulated link at 41 kHz or an OOK-modulated link at 8 kHz, which, however, has a range of only the order of 1 m.
According to one application, the transmission unit 10 may be designed to be worn at the body of the user of the ear units 12R, 12L, or it may be designed as a portable device which allows the user to hold it in his hand, to place it on a table in a meeting or to give it to another person for capturing this person's voice. A typical example for the latter case is the use of the transmission unit 10 by a teacher in a class of hearing-impaired students (in this case the receiver units 14 will be connected to the hearing instruments 16) or a class including APD children (in this case the receiver units 14 will be provided with the loudspeaker 18). The user of the ear units may utilize the portable device for capturing sound signals, for example the voice of a person speaking to the user, at a position other than the ear level. In such applications the remote control 42 might be integrated within the transmission unit 10.
In the following, examples of the audio signal processing modes performed by the audio signal processing units 40 will be described.
In the simplest case the two audio channels, i.e. the audio signals C1 and C2 received by the receiver unit 14, will remain essentially unmixed. The audio signal C1 (corresponding essentially to the audio signals A1 captured by the microphone M1) will essentially become the processed audio signal D1 provided by the audio signal processing unit 40 of the right ear unit 12R, and the audio signal C2 (corresponding essentially to the audio signals A2 captured by the microphone M2) will essentially become the processed audio signal D2 provided by the audio signal processing unit 40 of the left ear unit 12L. In this case the microphones M1 and M2 will function essentially as a wireless remote stereo microphone, with the channel A1 being essentially supplied to the user's right ear and the channel A2 being essentially supplied to the user's left ear. Thereby the user is supplied with sound directionality information as captured at the location of the microphones M1, M2.
However, if such unprocessed stereo audio signal is reproduced to the user's ears, an audio source 56 will not be perceived at its actual location, but rather it will be perceived at a virtual location 56′ which corresponds to the actual location 56 shifted by the distance d between the microphones M1, M2 of the transmission unit 10 and the ear units 12R, 12L, see
In order to provide for a more natural spatial perception by the user, the audio signals may be compensated for the distance d by estimating the distance d, i.e. the distance between the microphones M1, M2 of the transmission unit 10 and one of the ear units 12R, 12L, and by taking this estimated distance d into account in the audio signal processing in the audio signal processing unit 40. In more detail, the phase and/or level differences between the audio signals A1 and A2 (which correspond to the phase and/or level differences between the received audio signals C1 and C2) are adjusted according to the estimated value of the distance d. Such phase and/or level adjustment is achieved by introducing a corresponding time delay and/or level difference between the audio signals C1 and C2 in the audio signal processing unit 40 such that the processed audio signals D1 and D2 have a time delay and/or level difference relative to each other which corresponds to the adjusted phase and/or level difference.
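The described introduction of a time delay and/or level difference between the channels C1 and C2 could, for example, be sketched as follows; the scaling model (inter-channel cues shrinking with the ratio of the source-to-microphone distance to the source-to-listener distance), the assumed nominal source distance and the use of whole-sample delays are illustrative assumptions rather than part of the described system:

```python
import numpy as np

def adjust_interchannel_cues(c1, c2, distance_d, source_dist=1.0):
    """Illustrative adjustment of the time and level difference between the
    received channels C1 and C2 according to the estimated distance d between
    the transmission unit 10 and the ear units (simplified model).

    distance_d:  estimated distance d in metres
    source_dist: assumed nominal distance between the sound source and the
                 microphones M1, M2 (illustrative value)
    """
    # Current inter-channel lag (in samples), estimated by cross-correlation;
    # a positive value means C1 lags C2.
    corr = np.correlate(c1, c2, mode="full")
    lag = int(np.argmax(corr)) - (len(c2) - 1)

    # Moving the virtual source farther away reduces the angular offset, so
    # the inter-channel cues are scaled down accordingly (assumed model).
    scale = source_dist / (source_dist + distance_d)
    target_lag = int(round(lag * scale))

    # Re-apply the adjusted time difference by delaying/advancing channel C2
    # (wrap-around of np.roll is ignored for brevity).
    d2 = np.roll(c2, lag - target_lag)

    # Reduce the level difference by the same factor.
    rms1 = np.sqrt(np.mean(c1 ** 2)) + 1e-12
    rms2 = np.sqrt(np.mean(c2 ** 2)) + 1e-12
    gain_db = 20.0 * np.log10(rms1 / rms2) * (1.0 - scale)
    d2 = d2 * 10.0 ** (gain_db / 20.0)

    return c1, d2
```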
The distance d can be estimated, in the simplest case, by manual selection of a corresponding value by the user of the ear units 12R, 12L, for example by corresponding actuation of the control element 44 of the remote control 42, and/or the audio signal processing units 40 may use preset values which are attributed to typical use cases and which may be activated automatically or manually. Such methods are appropriate only if the distance d usually is more or less constant over a relatively long time period and changes only between a few predictable values.
A more generally applicable approach is to determine the delay between the arrival time of a characteristic sound event at the microphones M1, M2 of the transmission unit 10 and the arrival time of the same sound event at the microphone M4 or M5 of the receiver unit 14 (in such calculation constant and additive time delays caused by the system architecture, such as the various signal processing elements 28, 40 and the transceivers 30, 38, have to be taken into account).
The delay between the arrival times may be determined by identifying a sufficiently characteristic sound event, for example a strongly impulsive sound event, such as the closing of a door, the placing of a pencil on a table, a cough, the turning of the pages of a book or journal, etc., and comparing the respective arrival times of the identified sound event. Alternatively or in addition, a correlation analysis may be performed on the received audio signals C1, C2 and the audio signals captured by the microphone M4/M5 at the ear level. It is to be noted that such distance estimation has to be done only from time to time, for example every five seconds.
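The correlation analysis mentioned above could, for illustration, be implemented along the following lines; the sampling rate, block handling and the value of the constant system delay are assumptions, and the sketch further assumes that the sound source is much closer to the transmission unit 10 than to the listener:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def estimate_distance(received, ear_mic, fs=16000, system_delay_s=0.004):
    """Estimate the distance d between the transmission unit and an ear unit.

    received:       audio block received via the wireless link (e.g. C1)
    ear_mic:        simultaneously captured block from the ear-level
                    microphone M4 or M5
    system_delay_s: constant additive latency of codec, processing and radio
                    path (illustrative value; must be calibrated per system)
    """
    # Lag (in samples) by which the ear-level signal trails the received signal.
    corr = np.correlate(ear_mic, received, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(received) - 1)

    # The acoustic propagation delay over the distance d equals the measured
    # lag plus the constant latency of the wireless path.
    acoustic_delay_s = lag_samples / fs + system_delay_s
    return max(acoustic_delay_s, 0.0) * SPEED_OF_SOUND
```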
As already mentioned above, in most cases the audio signals B1, B2 transmitted by the transmission unit 10 (and hence the audio signals C1, C2 received by the receiver units 14) will be “raw” audio signals. In this case the audio signal processing can be significantly influenced at the ear level, i.e. by corresponding audio signal processing in the audio signal processing units 40 of the ear units 12R, 12L. For example, this enables acoustic beam forming to be performed at the ear level by using the received audio signals C1, C2 as input to an acoustic beam forming algorithm. The acoustic beam forming performed in the audio signal processing units 40 can, for example, be influenced by the user with regard to the angle, i.e. the direction, of the formed acoustic beam, the angular width of the formed acoustic beam and/or the degree or type of acoustic beam forming. To this end, the user may actuate the control elements 44 and 46 of the remote control 42 accordingly.
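As an illustration of such ear-level beam forming, a simple delay-and-sum scheme operating on the received channels C1 and C2 could look as follows; the microphone spacing, sampling rate and steering convention are assumptions, and a practical system would typically use a more elaborate adaptive beam former:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def delay_and_sum(c1, c2, steer_angle_deg, mic_spacing=0.1, fs=16000):
    """Minimal delay-and-sum beam former on the received channels C1 and C2.

    steer_angle_deg: beam direction relative to the broadside of the
                     microphone pair M1/M2 (0 degrees = straight ahead)
    mic_spacing:     assumed distance between M1 and M2 in metres
    """
    # Time difference of arrival at M1 and M2 for a plane wave coming from
    # the steering direction.
    tau = mic_spacing * np.sin(np.radians(steer_angle_deg)) / SPEED_OF_SOUND
    delay = int(round(tau * fs))

    # Align C2 onto C1 for the chosen direction, so that sound arriving from
    # that direction adds up coherently while other directions are attenuated.
    if delay >= 0:
        c2_aligned = np.concatenate((np.zeros(delay), c2))[:len(c2)]
    else:
        c2_aligned = np.concatenate((c2[-delay:], np.zeros(-delay)))
    return 0.5 * (c1 + c2_aligned)
```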
In
As illustrated in
The control element 46 for selecting the angle of the formed acoustic beam may comprise, for example, a joystick, a circular touch-screen element, four or eight cursor buttons or a touch-sensitive watch glass with several segments.
According to an alternative embodiment the direction/angle of the acoustic beam 58 is controlled by the orientation of the user's head 66, preferably in such a manner that the beam 58 is automatically directed into the direction into which the user is presently looking.
To this end, at least one of the ear units 12R, 12L may be provided with a unit 68 for measuring the orientation of the respective ear unit, and hence the orientation of the user's head 66, in a plane essentially parallel to the floor. Similarly, the transmission unit 10 may be provided with a unit 70 for determining the angular orientation of the transmission unit 10, and hence the orientation of the microphones M1, M2 and M3, in the same plane, i.e. a plane substantially parallel to the floor. The units 68 and 70 may comprise a compass and/or a gyroscope. The absolute orientation of the transmission unit 10 in a plane substantially parallel to the floor is measured by the unit 70, and the result is transmitted via the wireless link 34 to the receiver units 14, where it is supplied to the audio signal processing units 40. The absolute orientation of the ear unit 12R or 12L in a plane essentially parallel to the floor is measured by the unit 68, and the result likewise is supplied to the audio signal processing units 40, in which the beam forming based on the audio signal channels C1, C2 and C3 is controlled according to the measured absolute orientations. To this end, the relative angular orientation between the respective ear unit 12R, 12L, and hence the user's head 66, and the transmission unit 10 is calculated from the respective absolute orientations measured by the units 68 and 70. Usually the angle of the beam 58 will be controlled by the audio signal processing units 40 in such a manner that it essentially equals the direction of the user's nose 74 in the plane in which the absolute angular orientations are measured by the units 68 and 70, i.e. in the plane essentially parallel to the floor.
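The calculation of the relative angular orientation from the two absolute orientation measurements could be realized, for illustration, as follows (a minimal sketch; the sign convention and the compass-style heading interface are assumptions):

```python
def steering_angle(head_heading_deg, transmitter_heading_deg):
    """Beam steering angle derived from the absolute orientations measured
    by the unit 68 (ear unit / user's head) and the unit 70 (transmission
    unit), both given as compass headings in degrees in a plane parallel
    to the floor.

    The result is the look direction of the user's nose expressed in the
    coordinate frame of the microphone arrangement, wrapped to [-180, 180).
    """
    return (head_heading_deg - transmitter_heading_deg + 180.0) % 360.0 - 180.0

# Example: if the transmission unit faces north (0 degrees) and the user
# looks east (90 degrees), the beam is steered 90 degrees to that side:
# steering_angle(90.0, 0.0) == 90.0
```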
Alternatively or in addition to acoustic beam forming, other types of audio signal processing may be performed by the audio signal processing units 40 at the ear level, such as noise cancelling and frequency-dependent gain for improved speech recognition. The user may select, by operating the remote control 42 or by activating a suitable user interface (not shown) on the receiver unit 14 or the hearing instrument 16, the desired one of a plurality of audio signal processing modes.
In addition to the mentioned use for estimating the distance d between the transmission unit 10 and the ear units 12R, 12L, the audio signals captured by the microphones M4, M5 of the receiver units 14 may be taken into account in the generation of the processed audio signals D1 and D2, for example in order to eliminate acoustic background noise in the audio signals C1, C2, C3 received from the transmission unit 10. If one combines the audio signals captured by the microphones M1, M2, M3 of the transmission unit 10 with the audio signals captured by the ear-level microphones M4, M5, 20, one obtains a system of distributed signal sources which are spaced relatively far apart. Thereby very efficient noise cancelling may be obtained. If one assumes, for example, that the microphones M1, M2 of the transmission unit 10 are located substantially closer to the target signal source (e.g. a person talking to the user) than the ear-level microphones M4 or M5, the audio signals captured by M1, M2 comprise a larger proportion of the target signal, whereas the ear-level microphones M4, M5 essentially capture the background noise. With exact knowledge of the background noise (i.e. the signals captured by M4, M5), this background noise may be removed very efficiently from the target signal (i.e. the signal captured by M1, M2).
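For illustration, such noise removal using an ear-level microphone as noise reference could be sketched as a simple spectral subtraction; the frame length, subtraction factor and spectral floor are assumptions, non-overlapping frames are used for brevity, and an actual system would normally employ adaptive filtering and calibration of the two microphone paths:

```python
import numpy as np

def spectral_subtract(target, noise_ref, frame=512, oversub=1.0, floor=0.05):
    """Illustrative noise reduction of a received channel (e.g. C1, mostly
    target speech) using the ear-level microphone signal M4/M5 (mostly
    background noise) as noise reference.
    """
    n = min(len(target), len(noise_ref)) // frame * frame
    out = np.zeros(n)
    window = np.hanning(frame)
    for start in range(0, n, frame):
        t = np.fft.rfft(target[start:start + frame] * window)
        r = np.fft.rfft(noise_ref[start:start + frame] * window)
        # Subtract the noise magnitude estimate, keep the phase of the target,
        # and limit the attenuation by a spectral floor.
        mag = np.maximum(np.abs(t) - oversub * np.abs(r), floor * np.abs(t))
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(t)), frame)
    return out
```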
According to a further embodiment, the third channel of the transmission unit 10 may be used to transmit audio signals from an audio signal source 72 other than one of the microphones M1, M2 of the transmission unit 10, such as a music player, a mobile phone or a radio communication device, to at least one of the receiver units 14. These audio signals may be supplied to the transmission unit 10 via a cable connection or via a wireless link, such as a Bluetooth link. According to a modified embodiment, one of the stereo channels of the microphones M1, M2 may be used temporarily for such audio signal transmission. The user may select one of the channels, for example by operating the remote control 42 accordingly. For example, the signals may be selected within the audio signal processing units 40 in such a manner that the audio signals from the remote audio signal source are supplied to one ear while the audio signals captured by the microphones M1, M2 of the transmission unit 10 are supplied to the other ear.
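The described selection of channels at the ear level could be realized by a simple routing step in the audio signal processing units 40, for example along the following lines (the mode names and the mixing of the microphone channels for the other ear are illustrative assumptions):

```python
def route_channels(c1, c2, c_aux, ear_side, mode="stereo"):
    """Select which received signal is output by a given ear unit.

    ear_side: "right" or "left"
    mode:     "stereo"    - M1 channel to the right ear, M2 channel to the left
              "aux_right" - auxiliary source 72 (e.g. music player) to the
                            right ear, transmission-unit microphones (mixed
                            down) to the left ear
    """
    if mode == "stereo":
        return c1 if ear_side == "right" else c2
    if mode == "aux_right":
        return c_aux if ear_side == "right" else 0.5 * (c1 + c2)
    raise ValueError("unknown routing mode: %s" % mode)
```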
In the embodiments shown in
In
The receiver unit 14 may be connected to the hearing instrument 16 by an appropriate mechanical/electrical interface, such as an “audio shoe”, or it may be integrated together with the hearing instrument 16 in a common housing, as indicated by dashed lines in
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/EP2007/001290 | 2/14/2007 | WO | 00 | 3/29/2010