The present application relates to a hearing device, e.g. a hearing aid, comprising a multitude of input transducers, e.g. microphones. The present disclosure specifically deals with matching of the multitude of input transducers to facilitate beamforming.
The application relates e.g. to a hearing device comprising a first part (e.g. a BTE part) containing at least one microphone (MBTE) and a second part (e.g. an ITE part), electrically connected (e.g. by a cable) to, but physically separate from, the first part, containing a receiver and/or at least one microphone (MITE). When the person wearing the hearing device is talking, the speech will be altered by the acoustic transfer function between the mouth and the microphone(s) as well as by the different characteristics of the microphone(s). The set of relative transfer functions, describing the differences between the transfer functions from a given look direction to the individual microphones, is given by the look vector d (also termed the 'steering vector'). As illustrated in
A Hearing Device:
In an aspect of the present application, a hearing device, e.g. a hearing aid, configured to be worn by a user is provided. The hearing device comprises first and second separate parts, the first part comprising a first input transducer providing a first electric input signal representative of sound in an environment of the user, and the second part comprising a second input transducer providing a second electric input signal representative of sound in the environment of the user, wherein the first and second parts are electrically connectable with each other via a wired or wireless connection. The hearing device further comprises,
The own voice transfer function (and the updated own voice transfer function) may e.g. be a relative transfer function.
Thereby an improved hearing device may be provided.
Instead of applying a (first) multiplication factor, the set of beamformer weights may simply be updated. This addresses a case where the replacement of the first part (e.g. comprising an ITE microphone) has influenced the position of all microphones, e.g. if a connecting element between the first and second parts (e.g. a receiver cable length) has been changed.
The transfer function may be represented by a look vector d(k,m) in the form of an M-dimensional vector comprising elements di(k,m) (i=1, 2, . . . , M, M being the number of input transducers, e.g. microphones, of the hearing device or system), the ith element di(k,m) defining a) an acoustic transfer function from the target signal source (e.g. a user's mouth) to the ith input transducer (e.g. a microphone), or b) the relative acoustic transfer function from the ith input transducer to a reference input transducer. The vector element di(k,m) is typically a complex number for a specific frequency (k) and time unit (m). The look vector d(k,m) may be estimated from the inter-input-transducer covariance matrix R̂ss(k,m) based on the signals si(k,m), i=1, 2, . . . , M, measured at the respective input transducers when the target or calibration signal (here a user's own voice) is active, cf. e.g. EP2882204B1 and EP2701145A1.
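A minimal numerical sketch of such an estimate for a single frequency bin, assuming a target-only (own-voice) recording, two microphones and NumPy, could look as follows (the function name and the test data are purely illustrative and not taken from the disclosure):

```python
import numpy as np

def estimate_look_vector(frames, ref=1):
    """Estimate a (relative) look vector for one frequency bin.

    frames : complex array of shape (M, N) with M microphone signals and
             N STFT frames, recorded while only the target (here: the
             user's own voice) is active.
    ref    : index of the reference microphone (e.g. a BTE microphone).
    """
    M, N = frames.shape
    # Inter-input-transducer covariance matrix, averaged over the N frames.
    R = frames @ frames.conj().T / N
    # With a single dominant source, the principal eigenvector of R is
    # proportional to the vector of acoustic transfer functions H.
    eigvals, eigvecs = np.linalg.eigh(R)
    h = eigvecs[:, np.argmax(eigvals)]
    # Normalize to the reference microphone to obtain relative transfer functions.
    return h / h[ref]

# Purely illustrative data: two microphones, 200 frames of one frequency bin.
rng = np.random.default_rng(0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)  # own-voice source
H = np.array([0.8 * np.exp(0.3j), 1.0])                       # mouth-to-microphone TFs
x = np.outer(H, s)                                            # target-only microphone signals
d = estimate_look_vector(x, ref=1)                            # approx. [H1/H2, 1]
```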
The term ‘decrease, such as minimize, a difference measure’ is in the present context taken to include the process of adapting the multiplication factor (α) to provide that the difference between the previously determined own voice transfer function and the updated own voice transfer function is decreased (e.g. minimized).
In an embodiment, the hearing device comprises a (first) BTE part adapted for being located at or behind an ear and a (second) ITE part adapted for being located at or in an ear canal of a user, each part comprising at least one microphone (e.g. MBTE and MITE, respectively, in
The first part may be constituted by or comprise an ITE part configured to be located at or in an ear canal of the user. The first part (e.g. an ITE-part) may contain more than one input transducer, e.g. microphones, e.g. two or more.
The second part may be constituted by or comprise a BTE part configured to be located at or behind an ear of the user. The second part (e.g. a BTE-part) may contain more than one input transducer, e.g. microphones, e.g. two or more. The second part, e.g. a BTE part, may contain or comprise two input transducers, e.g. microphones.
The hearing device may comprise a connecting element configured to electrically connect the first and second parts via one or more electrical conductors. The first part (e.g. an ITE-part) and the second part (e.g. a BTE-part) may be electrically connected to each other via respective mating connectors. The first and the second parts and/or the connecting element may be adapted to allow the first and second parts to be reversibly electrically connected to and disconnected from each other. Since different receiver types exist (related e.g. to the degree of hearing loss or to the length of the interconnecting element, e.g. a cable), taking the receiver type into account while estimating the matching coefficient may help separate microphone response differences from differences due to, e.g., a different receiver cable length. In an embodiment, the type of microphone unit and/or the cable length is communicated to the signal processing unit.
The hearing device may be configured to provide that the predefined trigger is activated by a power-on of the hearing device.
The hearing device may be configured to provide that the predefined trigger is activated when the first and second parts are electrically connected after having been electrically disconnected.
The hearing device may be configured to provide that the predefined trigger is activated when the first and/or the second input transducers have been replaced. The hearing device may be configured to provide that the predefined trigger is activated when the first and/or the second parts have been replaced.
In an embodiment, the hearing device comprises a user interface. The user interface may be configured to allow a user to activate a calibration mode of the microphones, as proposed by the present disclosure. The user interface may be configured to allow a user to generate the predefined trigger, e.g. by indicating that the first and/or second parts have/has been replaced.
The hearing device may be configured to provide that re-matching of a replaced first or second input transducer is provided by replacing a previously used own voice look vector d stored in the memory by an updated own voice look vector d′, where the updated own voice look vector d′ is determined by applying a, generally complex-valued, frequency-dependent scaling factor to the electric input signal of the replaced first or second input transducer such that the squared difference ∥d − α1d′∥² is decreased, e.g. minimized. It is emphasized that only elements of the normalized look vector which are different from 1 are scaled. The re-matching of input transducers of the hearing device may be performed in a particular calibration mode of operation of the hearing device, where the user is instructed to activate his or her own voice, e.g. to speak a certain number of sentences or to speak for a certain time period (cf. e.g.
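For a given frequency band, a scaling factor that decreases this difference (in the least-squares sense minimizes it) has a simple closed form. The sketch below, assuming NumPy and per-band complex look-vector elements, illustrates one way of computing it; the function name and the example values are hypothetical:

```python
import numpy as np

def matching_factor(d_ref, d_new, eps=1e-12):
    """Per-band scaling factor alpha minimizing |d_ref - alpha * d_new|^2.

    d_ref : stored (reference) own-voice look-vector element(s) for the
            replaced microphone (the non-unit element of the normalized
            look vector), one complex value per frequency band.
    d_new : the corresponding element(s) estimated with the replacement
            microphone in place.
    """
    d_ref = np.asarray(d_ref, dtype=complex)
    d_new = np.asarray(d_new, dtype=complex)
    # Least-squares solution per band: alpha = conj(d_new)*d_ref / |d_new|^2.
    return np.conj(d_new) * d_ref / (np.abs(d_new) ** 2 + eps)

# Hypothetical values for four frequency bands.
d_ref = np.array([0.8 + 0.2j, 0.7 + 0.1j, 0.6 - 0.1j, 0.5 + 0.0j])
d_new = np.array([0.9 + 0.1j, 0.8 + 0.2j, 0.5 - 0.2j, 0.6 + 0.1j])
alpha = matching_factor(d_ref, d_new)  # applied to the replaced microphone's signal
```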
The hearing device may comprise an own voice detector for estimating whether or not, or with what probability, a given input sound originates from the voice of the user of the hearing device.
The hearing device may be constituted by or comprise a hearing aid, a headset, an earphone, an ear protection device or a combination thereof.
In an embodiment, the hearing device is adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user. In an embodiment, the hearing device comprises a signal processor for enhancing the input signals and providing a processed output signal.
In an embodiment, the hearing device comprises an output unit for providing a stimulus perceived by the user as an acoustic signal based on a processed electric signal. In an embodiment, the output unit comprises a number of electrodes of a cochlear implant or a vibrator of a bone conducting hearing device. In an embodiment, the output unit comprises an output transducer. In an embodiment, the output transducer comprises a receiver (loudspeaker) for providing the stimulus as an acoustic signal to the user. In an embodiment, the output transducer comprises a vibrator for providing the stimulus as mechanical vibration of a skull bone to the user (e.g. in a bone-attached or bone-anchored hearing device).
In an embodiment, the first and second input transducers comprise first and second microphones, respectively. Each microphone is configured to convert an input sound to an electric input signal.
In an embodiment, the first and/or second parts comprise a wireless receiver for receiving a wireless signal comprising sound and for providing an electric input signal representing said sound.
In an embodiment, the hearing device comprises a directional microphone system adapted to spatially filter sounds from the environment, and thereby enhance a target acoustic source among a multitude of acoustic sources in the local environment of the user wearing the hearing device. In an embodiment, the directional system is adapted to detect (such as adaptively detect) from which direction a particular part of the microphone signal originates. This can be achieved in various different ways as e.g. described in the prior art. In hearing devices, a microphone array beamformer is often used for spatially attenuating background noise sources. Many beamformer variants can be found in literature. The minimum variance distortionless response (MVDR) beamformer is widely used in microphone array signal processing. Ideally, the MVDR beamformer keeps the signals from the target direction (also referred to as the look direction) unchanged, while attenuating sound signals from other directions maximally. The generalized sidelobe canceller (GSC) structure is an equivalent representation of the MVDR beamformer offering computational and numerical advantages over a direct implementation in its original form.
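As a concrete, purely illustrative example of the MVDR principle mentioned above, the beamformer weights for one frequency bin can be computed from a noise covariance matrix and a look vector; the sketch below assumes NumPy, and the two-microphone numbers are made up:

```python
import numpy as np

def mvdr_weights(Rvv, d):
    """MVDR weights for one frequency bin: w = Rvv^{-1} d / (d^H Rvv^{-1} d).

    Rvv : (M, M) noise covariance matrix for the bin.
    d   : (M,) look vector (transfer functions from the target, e.g.
          normalized to a reference microphone).
    The output y = w^H x leaves the target direction undistorted while
    minimizing the noise power.
    """
    Rinv_d = np.linalg.solve(Rvv, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Made-up two-microphone example.
d = np.array([0.8 * np.exp(0.3j), 1.0])
Rvv = np.array([[1.0, 0.3 + 0.1j],
                [0.3 - 0.1j, 1.2]])       # Hermitian noise covariance
w = mvdr_weights(Rvv, d)
# For a microphone snapshot x (shape (2,)): y = np.conj(w) @ x
```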
In an embodiment, the hearing device comprises an antenna and transceiver circuitry (e.g. a wireless receiver) for wirelessly receiving a direct electric input signal from another device, e.g. from an entertainment device (e.g. a TV-set), a communication device, a wireless microphone, or another hearing device. In an embodiment, the direct electric input signal represents or comprises an audio signal and/or a control signal and/or an information signal.
In an embodiment, the hearing device is a portable device, e.g. a device comprising a local energy source, e.g. a battery, e.g. a rechargeable battery.
In an embodiment, the hearing device comprises a forward or signal path between an input unit (e.g. an input transducer, such as a microphone or a microphone system and/or direct electric input (e.g. a wireless receiver)) and an output unit, e.g. an output transducer. In an embodiment, the signal processor is located in the forward path. In an embodiment, the signal processor is adapted to provide a frequency dependent gain according to a user's particular needs. In an embodiment, the hearing device comprises an analysis path comprising functional components for analyzing the input signal (e.g. determining a level, a modulation, a type of signal, an acoustic feedback estimate, etc.). In an embodiment, some or all signal processing of the analysis path and/or the signal path is conducted in the frequency domain. In an embodiment, some or all signal processing of the analysis path and/or the signal path is conducted in the time domain.
In an embodiment, an analogue electric signal representing an acoustic signal is converted to a digital audio signal in an analogue-to-digital (AD) conversion process, where the analogue signal is sampled with a predefined sampling frequency or rate fs, fs being e.g. in the range from 8 kHz to 48 kHz (adapted to the particular needs of the application) to provide digital samples xn (or x[n]) at discrete points in time tn (or n), each audio sample representing the value of the acoustic signal at tn by a predefined number Nb of bits, Nb being e.g. in the range from 1 to 48 bits, e.g. 24 bits. Each audio sample is hence quantized using Nb bits (resulting in 2^Nb different possible values of the audio sample). A digital sample x has a length in time of 1/fs, e.g. 50 μs, for fs=20 kHz. In an embodiment, a number of audio samples are arranged in a time frame. In an embodiment, a time frame comprises 64 or 128 audio data samples. Other frame lengths may be used depending on the practical application.
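These quantities are easy to relate numerically; the lines below (with illustrative parameter choices only) simply work out the sample period, the number of quantization levels and the duration of a 64-sample time frame:

```python
# Illustrative parameter choices only; the text allows other values.
fs = 20_000        # sampling rate [Hz]
Nb = 24            # bits per audio sample
frame_len = 64     # audio samples per time frame

sample_period_us = 1e6 / fs               # 50 microseconds per sample
quantization_levels = 2 ** Nb             # 16,777,216 possible sample values
frame_duration_ms = 1e3 * frame_len / fs  # 3.2 ms per 64-sample frame
```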
In an embodiment, the hearing devices comprise an analogue-to-digital (AD) converter to digitize an analogue input (e.g. from an input transducer, such as a microphone) with a predefined sampling rate, e.g. 20 kHz. In an embodiment, the hearing devices comprise a digital-to-analogue (DA) converter to convert a digital signal to an analogue output signal, e.g. for being presented to a user via an output transducer.
In an embodiment, the hearing device, e.g. the microphone unit, and/or the transceiver unit comprise(s) a TF-conversion unit for providing a time-frequency representation of an input signal. In an embodiment, the time-frequency representation comprises an array or map of corresponding complex or real values of the signal in question in a particular time and frequency range. In an embodiment, the TF conversion unit comprises a filter bank for filtering a (time varying) input signal and providing a number of (time varying) output signals each comprising a distinct frequency range of the input signal. In an embodiment, the TF conversion unit comprises a Fourier transformation unit for converting a time variant input signal to a (time variant) signal in the (time-)frequency domain. In an embodiment, the frequency range considered by the hearing device from a minimum frequency fmin to a maximum frequency fmax comprises a part of the typical human audible frequency range from 20 Hz to 20 kHz, e.g. a part of the range from 20 Hz to 12 kHz. Typically, a sample rate fs is larger than or equal to twice the maximum frequency fmax, fs≥2fmax. In an embodiment, a signal of the forward and/or analysis path of the hearing device is split into a number NI of frequency bands (e.g. of uniform width), where NI is e.g. larger than 5, such as larger than 10, such as larger than 50, such as larger than 100, such as larger than 500, at least some of which are processed individually. In an embodiment, the hearing device is/are adapted to process a signal of the forward and/or analysis path in a number NP of different frequency channels (NP≤NI). The frequency channels may be uniform or non-uniform in width (e.g. increasing in width with frequency), overlapping or non-overlapping.
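One common way of implementing such a TF-conversion unit is an STFT-based analysis filter bank; the sketch below (the frame length, hop size, FFT size and function name are illustrative assumptions, not prescribed by the disclosure) splits a time-domain signal into complex sub-band signals:

```python
import numpy as np

def analysis_filter_bank(x, frame_len=64, hop=32, n_fft=128):
    """Minimal STFT-based analysis filter bank (one possible TF conversion).

    Returns a complex array X of shape (num_frames, n_fft // 2 + 1) with the
    time-frequency coefficients X[m, k] for frame m and frequency bin k.
    """
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        segment = x[start:start + frame_len] * window
        frames.append(np.fft.rfft(segment, n=n_fft))
    return np.array(frames)

# Illustrative use: a 1 kHz tone sampled at 20 kHz, split into 65 sub-bands.
fs = 20_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
X = analysis_filter_bank(x)   # X[m, k] is the sub-band signal for bin k
```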
In an embodiment, the hearing device comprises a number of detectors configured to provide status signals relating to a current physical environment of the hearing device (e.g. the current acoustic environment), and/or to a current state of the user wearing the hearing device, and/or to a current state or mode of operation of the hearing device. Alternatively or additionally, one or more detectors may form part of an external device in communication (e.g. wirelessly) with the hearing device. An external device may e.g. comprise another hearing device, a remote control, an audio delivery device, a telephone (e.g. a Smartphone), an external sensor, etc.
In an embodiment, one or more of the number of detectors operate(s) on the full band signal (time domain). In an embodiment, one or more of the number of detectors operate(s) on band split signals ((time-) frequency domain), e.g. in a limited number of frequency bands.
In an embodiment, the number of detectors comprises a level detector for estimating a current level of a signal of the forward path. In an embodiment, the predefined criterion comprises whether the current level of a signal of the forward path is above or below a given (L-)threshold value. In an embodiment, the level detector operates on the full band signal (time domain). In an embodiment, the level detector operates on band split signals ((time-) frequency domain).
In a particular embodiment, the hearing device comprises a voice detector (VD) for estimating whether or not (or with what probability) an input signal comprises a voice signal (at a given point in time). A voice signal is in the present context taken to include a speech signal from a human being. It may also include other forms of utterances generated by the human speech system (e.g. singing). In an embodiment, the voice detector unit is adapted to classify a current acoustic environment of the user as a VOICE or NO-VOICE environment. This has the advantage that time segments of the electric microphone signal comprising human utterances (e.g. speech) in the user's environment can be identified, and thus separated from time segments only (or mainly) comprising other sound sources (e.g. artificially generated noise). In an embodiment, the voice detector is adapted to detect as a VOICE also the user's own voice. Alternatively, the voice detector is adapted to exclude a user's own voice from the detection of a VOICE.
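Purely as an illustration of such a VOICE/NO-VOICE classification (practical detectors use considerably richer cues, e.g. modulation, onset and spectral statistics), a toy per-frame decision could look as follows; the function name and thresholds are assumptions:

```python
import numpy as np

def voice_no_voice(band_levels_db, noise_floor_db, margin_db=6.0):
    """Toy per-frame VOICE / NO-VOICE decision (illustration only).

    band_levels_db : current per-band levels of the microphone signal [dB].
    noise_floor_db : slowly tracked per-band noise-floor estimates [dB].
    The frame is flagged as VOICE when more than half of the bands exceed
    the noise floor by a margin.
    """
    active = np.asarray(band_levels_db) > np.asarray(noise_floor_db) + margin_db
    return bool(active.mean() > 0.5)
```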
In an embodiment, the hearing device comprises an own voice detector for estimating whether or not (or with what probability) a given input sound (e.g. a voice, e.g. speech) originates from the voice of the user of the system. In an embodiment, a microphone system of the hearing device is adapted to be able to differentiate between a user's own voice and another person's voice and possibly from NON-voice sounds.
Detection of a user's own voice can be done in a number of different ways, e.g. by use of sensors (e.g. an acceleration sensor, a vibration sensor, etc.), by using signals from microphones at both ears (binaural detection, cf. e.g. US2006262944A1), or by determining a direct-to-reverberant ratio between the signal energy of a direct sound part and that of a reverberant sound part of an input sound signal (cf. e.g. US2008189107A1). The detection of a user's own voice is preferably independent of the parameter(s) (e.g. αITE, αBTE, cf. e.g.
In an embodiment, the number of detectors comprises a movement detector, e.g. an acceleration sensor. In an embodiment, the movement detector is configured to detect movement of the user's facial muscles and/or bones, e.g. due to speech or chewing (e.g. jaw movement) and to provide a detector signal indicative thereof.
In an embodiment, the hearing device comprises a classification unit configured to classify the current situation based on input signals from (at least some of) the detectors, and possibly other inputs as well. In the present context ‘a current situation’ is taken to be defined by one or more of
a) the physical environment (e.g. including the current electromagnetic environment, e.g. the occurrence of electromagnetic signals (e.g. comprising audio and/or control signals) intended or not intended for reception by the hearing device, or other properties of the current environment than acoustic);
b) the current acoustic situation (input level, feedback, etc.);
c) the current mode or state of the user (movement, temperature, cognitive load, etc.); and
d) the current mode or state of the hearing device (program selected, time elapsed since last user interaction, etc.) and/or of another device in communication with the hearing device.
In an embodiment, the hearing device comprises an acoustic (and/or mechanical) feedback suppression system.
In an embodiment, the hearing device further comprises other relevant functionality for the application in question, e.g. compression, noise reduction, etc.
In an embodiment, the hearing device comprises a listening device, e.g. a hearing aid, e.g. a hearing instrument, e.g. a hearing instrument adapted for being located at the ear or fully or partially in the ear canal of a user, e.g. a headset, an earphone, an ear protection device or a combination thereof.
Use:
In an aspect, use of a hearing device as described above, in the ‘detailed description of embodiments’ and in the claims, is moreover provided. In an embodiment, use is provided in a system comprising audio processing. In an embodiment, use is provided in a system comprising one or more hearing aids (e.g. hearing instruments), headsets, ear phones, active ear protection systems, etc., e.g. in handsfree telephone systems, teleconferencing systems, public address systems, karaoke systems, classroom amplification systems, etc.
A method:
In an aspect, a method of matching input transducers of a hearing device, e.g. a hearing aid, configured to be worn by a user is furthermore provided by the present application. The hearing device comprises first and second separate parts, the first part comprising a first input transducer providing a first electric input signal representative of sound in an environment of the user, and the second part comprising a second input transducer providing a second electric input signal representative of sound in the environment of the user, wherein the first and second parts are electrically connectable with each other via a wired or wireless connection. The method comprises
It is intended that some or all of the structural features of the device described above, in the ‘detailed description of embodiments’ or in the claims can be combined with embodiments of the method, when appropriately substituted by a corresponding process and vice versa. Embodiments of the method have the same advantages as the corresponding devices.
In principle, we could update not only an own voice beamformer, but any beamformer, e.g. a target-cancelling beamformer. If the only difference between the old and new beamformer weights is the ITE microphone transfer function, this difference would apply to any beamformer.
The transfer function(s) may e.g. be represented by a corresponding look vector. The transfer functions may be relative transfer functions between the microphones of the hearing device. The look vector may comprise as its individual elements relative transfer functions of sound from the sound source to the respective input transducers of the hearing device (taking one of the input transducers as the reference). The own voice transfer function (and the updated own voice transfer function) may e.g. be a relative transfer function.
The predefined trigger may be generated via a user interface and/or by a signal from one or more sensors.
A Computer Readable Medium:
In an aspect, a tangible computer-readable medium storing a computer program comprising program code means for causing a data processing system to perform at least some (such as a majority or all) of the steps of the method described above, in the ‘detailed description of embodiments’ and in the claims, when said computer program is executed on the data processing system is furthermore provided by the present application.
By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. In addition to being stored on a tangible medium, the computer program can also be transmitted via a transmission medium such as a wired or wireless link or a network, e.g. the Internet, and loaded into a data processing system for being executed at a location different from that of the tangible medium.
A Computer Program:
A computer program (product) comprising instructions which, when the program is executed by a computer, cause the computer to carry out (steps of) the method described above, in the ‘detailed description of embodiments’ and in the claims is furthermore provided by the present application.
A Data Processing System:
In an aspect, a data processing system comprising a processor and program code means for causing the processor to perform at least some (such as a majority or all) of the steps of the method described above, in the ‘detailed description of embodiments’ and in the claims is furthermore provided by the present application.
A Hearing System:
In a further aspect, a hearing system comprising a hearing device as described above, in the ‘detailed description of embodiments’, and in the claims, AND an auxiliary device is moreover provided.
In an embodiment, the hearing system is adapted to establish a communication link between the hearing device and the auxiliary device to provide that information (e.g. control and status signals, possibly audio signals) can be exchanged or forwarded from one to the other.
In an embodiment, the hearing system comprises an auxiliary device, e.g. a remote control, a smartphone, or other portable or wearable electronic device, such as a smartwatch or the like.
In an embodiment, the auxiliary device is or comprises a remote control for controlling functionality and operation of the hearing device(s). In an embodiment, the function of a remote control is implemented in a SmartPhone, the SmartPhone possibly running an APP allowing to control the functionality of the audio processing device via the SmartPhone (the hearing device(s) comprising an appropriate wireless interface to the SmartPhone, e.g. based on Bluetooth or some other standardized or proprietary scheme).
In an embodiment, the auxiliary device is or comprises an audio gateway device adapted for receiving a multitude of audio signals (e.g. from an entertainment device, e.g. a TV or a music player, a telephone apparatus, e.g. a mobile telephone or a computer, e.g. a PC) and adapted for selecting and/or combining an appropriate one of the received audio signals (or combination of signals) for transmission to the hearing device.
In an embodiment, the auxiliary device is or comprises another hearing device. In an embodiment, the hearing system comprises two hearing devices adapted to implement a binaural hearing system, e.g. a binaural hearing aid system.
An APP:
In a further aspect, a non-transitory application, termed an APP, is furthermore provided by the present disclosure. The APP comprises executable instructions configured to be executed on an auxiliary device to implement a user interface for a hearing device or a hearing system described above in the ‘detailed description of embodiments’, and in the claims. In an embodiment, the APP is configured to run on a cellular phone, e.g. a smartphone, or on another portable device allowing communication with said hearing device or said hearing system.
The user interface may be configured to allow the user to interact with the hearing device or system and control functionality of the device or system. The user interface may allow the user to activate a calibration mode (according to the present disclosure), to initiate a calibration procedure, and/or to terminate the calibration procedure, and possibly accept the results of the calibration.
In the present context, a ‘hearing device’ refers to a device, such as a hearing aid, e.g. a hearing instrument, or an active ear-protection device, or other audio processing device, which is adapted to improve, augment and/or protect the hearing capability of a user by receiving acoustic signals from the user's surroundings, generating corresponding audio signals, possibly modifying the audio signals and providing the possibly modified audio signals as audible signals to at least one of the user's ears. A ‘hearing device’ further refers to a device such as an earphone or a headset adapted to receive audio signals electronically, possibly modifying the audio signals and providing the possibly modified audio signals as audible signals to at least one of the user's ears. Such audible signals may e.g. be provided in the form of acoustic signals radiated into the user's outer ears, acoustic signals transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear as well as electric signals transferred directly or indirectly to the cochlear nerve of the user.
The hearing device may be configured to be worn in any known way, e.g. as a unit arranged behind the ear with a tube leading radiated acoustic signals into the ear canal or with an output transducer, e.g. a loudspeaker, arranged close to or in the ear canal, as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit, e.g. a vibrator, attached to a fixture implanted into the skull bone, as an attachable, or entirely or partly implanted, unit, etc. The hearing device may comprise a single unit or several units communicating electronically with each other. The loudspeaker may be arranged in a housing together with other components of the hearing device, or may be an external unit in itself (possibly in combination with a flexible guiding element, e.g. a dome-like element).
More generally, a hearing device comprises an input transducer for receiving an acoustic signal from a user's surroundings and providing a corresponding input audio signal and/or a receiver for electronically (i.e. wired or wirelessly) receiving an input audio signal, a (typically configurable) signal processing circuit (e.g. a signal processor, e.g. comprising a configurable (programmable) processor, e.g. a digital signal processor) for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the processed audio signal. The signal processor may be adapted to process the input signal in the time domain or in a number of frequency bands. In some hearing devices, an amplifier and/or compressor may constitute the signal processing circuit. The signal processing circuit typically comprises one or more (integrated or separate) memory elements for executing programs and/or for storing parameters used (or potentially used) in the processing and/or for storing information relevant for the function of the hearing device and/or for storing information (e.g. processed information, e.g. provided by the signal processing circuit), e.g. for use in connection with an interface to a user and/or an interface to a programming device. In some hearing devices, the output unit may comprise an output transducer, such as e.g. a loudspeaker for providing an air-borne acoustic signal or a vibrator for providing a structure-borne or liquid-borne acoustic signal. In some hearing devices, the output unit may comprise one or more output electrodes for providing electric signals (e.g. a multi-electrode array for electrically stimulating the cochlear nerve).
In some hearing devices, the vibrator may be adapted to provide a structure-borne acoustic signal transcutaneously or percutaneously to the skull bone. In some hearing devices, the vibrator may be implanted in the middle ear and/or in the inner ear. In some hearing devices, the vibrator may be adapted to provide a structure-borne acoustic signal to a middle-ear bone and/or to the cochlea. In some hearing devices, the vibrator may be adapted to provide a liquid-borne acoustic signal to the cochlear liquid, e.g. through the oval window. In some hearing devices, the output electrodes may be implanted in the cochlea or on the inside of the skull bone and may be adapted to provide the electric signals to the hair cells of the cochlea, to one or more hearing nerves, to the auditory brainstem, to the auditory midbrain, to the auditory cortex and/or to other parts of the cerebral cortex.
A hearing device, e.g. a hearing aid, may be adapted to a particular user's needs, e.g. a hearing impairment. A configurable signal processing circuit of the hearing device may be adapted to apply a frequency and level dependent compressive amplification of an input signal. A customized frequency and level dependent gain (amplification or compression) may be determined in a fitting process by a fitting system based on a user's hearing data, e.g. an audiogram, using a fitting rationale (e.g. adapted to speech). The frequency and level dependent gain may e.g. be embodied in processing parameters, e.g. uploaded to the hearing device via an interface to a programming device (fitting system), and used by a processing algorithm executed by the configurable signal processing circuit of the hearing device.
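By way of a simplified illustration of such a frequency and level dependent compressive gain (the threshold, ratio and gain values below are arbitrary and do not represent any particular fitting rationale):

```python
def compressive_gain_db(input_level_db, linear_gain_db, threshold_db=50.0, ratio=2.0):
    """Simple frequency-band gain rule with level-dependent compression.

    Below the compression threshold the prescribed linear gain is applied;
    above it, each 1 dB increase of the input level only increases the
    output by 1/ratio dB (an illustrative WDRC-style rule; an actual
    fitting rationale is considerably more elaborate).
    """
    if input_level_db <= threshold_db:
        return linear_gain_db
    return linear_gain_db - (input_level_db - threshold_db) * (1.0 - 1.0 / ratio)

# Example: 20 dB prescribed gain, 65 dB SPL input -> 12.5 dB applied gain.
g = compressive_gain_db(65.0, 20.0)
```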
A ‘hearing system’ refers to a system comprising one or two hearing devices, and a ‘binaural hearing system’ refers to a system comprising two hearing devices and being adapted to cooperatively provide audible signals to both of the user's ears. Hearing systems or binaural hearing systems may further comprise one or more ‘auxiliary devices’, which communicate with the hearing device(s) and affect and/or benefit from the function of the hearing device(s). Auxiliary devices may be e.g. remote controls, audio gateway devices, mobile phones (e.g. SmartPhones), or music players. Hearing devices, hearing systems or binaural hearing systems may e.g. be used for compensating for a hearing-impaired person's loss of hearing capability, augmenting or protecting a normal-hearing person's hearing capability and/or conveying electronic audio signals to a person. Hearing devices or hearing systems may e.g. form part of or interact with public-address systems, active ear protection systems, handsfree telephone systems, car audio systems, entertainment (e.g. karaoke) systems, teleconferencing systems, classroom amplification systems, etc.
Embodiments of the disclosure may e.g. be useful in applications such as hearing aids and hearing aid systems, e.g. binaural hearing aid systems, in particular such hearing aids or hearing aid systems that comprise at least two separate parts each comprising an input transducer.
The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effect will be apparent from and elucidated with reference to the illustrations described hereinafter in which:
The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the disclosure, while other details are left out. Throughout, the same reference signs are used for identical or corresponding parts.
Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
The electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
The present application relates to the field of hearing devices, e.g. hearing aids.
This invention addresses a hearing device comprising a behind the ear (BTE) part with at least one microphone as well as an in the ear (ITE) part containing a receiver (loudspeaker) and/or at least one microphone. The ITE part may be connected to the BTE part by a connecting element, e.g. comprising a cable (e.g. including wire(s)), or the two parts may alternatively be wirelessly connected. We envision the situation where the ITE part may be physically disconnected from the BTE part, e.g. for repair or replacement in the situation that the ITE part does not work anymore.
In hearing instruments which have more than one microphone, the amplitude and/or phase characteristics typically have to be matched in order to achieve a proper directional gain in any beamforming/spatial filtering signal processing algorithm. A solution for matching the phase and/or estimating the microphone distance has previously been proposed (see e.g. US20170078805A1). Pre-matching of the microphones' amplitudes is usually done during production of the instrument. However, as the microphone in the BTE part and the microphone in the ITE part are in separate parts, matching during production of the instrument requires that the BTE part and the ITE part are paired. Even if the BTE part and the ITE part were matched in advance (which would be an expensive solution in case the BTE part has 2 microphones, because then the hearing aid consisting of a BTE and an ITE part would require 3 matched microphones), there is still an issue if the ITE part at a later stage has to be replaced: the microphone in the replaced ITE part does not match the microphone(s) in the BTE part, or the BTE part may be located at a different place due to a different wire length between the BTE and ITE parts. The present application addresses how to match the microphones in the case where the ITE part (or the BTE part, or at least one of the microphones of the BTE or ITE parts) has been replaced. Detection and correction for non-intended orientation of the microphone axis of a BTE part comprising two microphones has been dealt with in US20150230036A1.
where we in the latter expression have assumed that the second microphone (MBTE) is a reference microphone, so that the individual elements of the look vector d are normalized with the transfer function H2 from the audio source (mouth) to the second microphone (MBTE) (hence the ‘H1/H2’ and ‘1’ for the first and second elements in the expression for the look vector d). We imagine that d is estimated for each frequency channel when the hearing aid(s) is (are) mounted on the ears of the hearing aid user, using the hearing aid user's own voice as sound source. This could happen during a fitting session with a hearing care professional (HCP), who runs a calibration routine where the look vector d is estimated as the hearing aid user speaks a test sentence. Afterwards the estimated d values are stored as reference values (dref) in a memory of the hearing aid(s). The shown normalization (relative to H2) is just one example. We may as well select other normalizations, e.g. normalize with respect to H1 or normalize such that the length of d equals 1. In the following, the ‘look vector’ is termed d and the elements of the look vector (for a two-microphone case) are termed d11 and d21, such that d = [d11, d21]^T. In the case of a normalized look vector, we may just refer to the non-unit element as d, i.e. d = [1, d]^T or d = [d, 1]^T.
An advantage of using the (second) BTE-microphone as a reference microphone is that it is less likely to be exchanged during the lifetime of the hearing aid than the (first) ITE microphone.
The frequency (f) dependent transfer functions H1 and H2 for sound from the user's mouth to the respective first and second electric input signals can be considered as comprising a part Hj,ac(f) accounting for the acoustic propagation path and a part Hj,mic(f) accounting for the microphone characteristics, i.e. Hj = Hj,ac·Hj,mic, where j is a microphone index, here j = 1, 2. If the microphone characteristics are expressed relative to a reference microphone, Hj,mic = 1 for j = ref, and Hac(f) = Href,ac(f) then represents the acoustic propagation from the sound source to the reference microphone.
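With this decomposition, and under the assumption used for the re-matching below that only the microphone characteristic of the replaced (first) microphone changes while the acoustic paths remain the same, the effect of a replacement on the normalized own-voice look vector, and the scaling factor that undoes it, can be sketched as follows:

```latex
\begin{align*}
d  &= \frac{H_1}{H_2}
    = \frac{H_{1,\mathrm{ac}}\,H_{1,\mathrm{mic}}}{H_{2,\mathrm{ac}}\,H_{2,\mathrm{mic}}}
    &&\text{(before replacement)}\\[2pt]
d' &= \frac{H_{1,\mathrm{ac}}\,H'_{1,\mathrm{mic}}}{H_{2,\mathrm{ac}}\,H_{2,\mathrm{mic}}}
    = d\,\frac{H'_{1,\mathrm{mic}}}{H_{1,\mathrm{mic}}}
    &&\text{(after replacement)}\\[2pt]
\alpha &= \frac{H_{1,\mathrm{mic}}}{H'_{1,\mathrm{mic}}}
    \quad\Longrightarrow\quad \alpha\, d' = d .
\end{align*}
```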
It should be mentioned that the hearing device may comprise more than two input transducers (e.g. microphones), e.g. located in respective BTE and ITE-parts, or elsewhere on the user's body.
Now, if, for example, the ITE part (or the ITE-microphone) is replaced by another ITE part (or another ITE-microphone) (
Re-matching of the replaced microphone—and hence restoration of the beamformer performance—may be achieved as follows. Recall that the reference look vector d, which was estimated during the person's own voice (with microphone MITE before the ITE part is replaced), is stored in the memory of the hearing aid. We may estimate the characteristics of the changed ITE microphone (M′ITE) by decreasing, e.g. minimizing, the difference between the look vector estimated during the person's own voice d′ using the changed microphone (M′ITE), and the reference look vector d stored in hearing aid memory. This could, e.g., be achieved by applying a, generally complex-valued, frequency-dependent scaling factor to the replaced ITE microphone output (cf. αITE in
In other words, a microphone matching function is applied to the new (first) microphone (M′ITE), which restores the mouth-to-microphone transfer function of the old (replaced) microphone (MITE). This method assumes that the replaced microphone (as well as the other microphones) is located at the same position as before. Using the microphone output of the ITE part, matched in this way, restores (or at least increases) the beamformer/spatial filter performance.
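In the time-frequency domain this simply amounts to multiplying the replaced microphone's sub-band signals by the stored per-band factors before the (unchanged) beamformer weights are applied; a minimal sketch with hypothetical variable names (the factors obtained e.g. as in the earlier sketch) is:

```python
import numpy as np

def apply_matching(X_ite, alpha):
    """Apply per-band matching factors to the replaced ITE microphone signal.

    X_ite : complex sub-band signals of the new ITE microphone,
            shape (num_frames, num_bands).
    alpha : one complex matching factor per frequency band, e.g. obtained
            from the stored and the re-estimated own-voice look vectors.
    """
    return X_ite * np.asarray(alpha)[np.newaxis, :]

# The corrected signal is then fed to the unchanged beamformer, e.g. per band:
# X_ite_matched = apply_matching(X_ite, alpha)
# O = np.conj(W_o1) * X_bte + np.conj(W_o2) * X_ite_matched
```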
When wearing two hearing devices, and due to the symmetry of the head, the own voice look vectors d related to the left and right device, respectively, should not differ too much. In the case where the ITE microphone at one instrument must be replaced, the look vector obtained at the opposite (matched) hearing device may be used as reference own voice look vector.
Assuming that dleft and dright were similar before an ITE microphone was replaced, due to similar locations, any difference between the left and right own voice transfer functions after an ITE microphone is replaced will be due to a different microphone response (if both ITE microphones are replaced simultaneously, this is not necessarily the case, though).
Ideally, the person's own voice estimate should be independent of the ITE microphones, as the microphones may be replaced, but an own voice estimate could e.g. depend on the BTE microphones on each ear and/or on characteristics of the person's voice. During telephone conversations, the microphone matching should not adapt if the phone is near the ear as reflections from the phone may change the estimated look vector.
The advantage of this scheme is that we may calibrate the hearing device seamlessly, without any cognitive load imposed on the hearing aid user, as the system is updated while the person is talking.
The method may also be applied for matching of regular hearing devices, given that a reference own voice look vector is available. If, e.g., we have recorded a personal own voice look vector dov,ref while the microphones are matched, the own voice look vector will change over time in case the microphone responses change. We can compensate for this change, as we know what the ideal own voice transfer function looks like (dov,ref).
Adaptive beamforming in hearing instruments aims at cancelling unwanted noise under the constraint that sound from the target direction is unaltered. An example of such an adaptive system is illustrated in
The present beamformer structure (Y=C1−βC2) has the advantage that the factor β responsible for noise reduction is only multiplied on the second (target-cancelling) beam pattern C2 (so that the signal received from the target direction is not affected by any value of β). This constraint of a Minimum Variance Distortionless Response (MVDR) beamformer is a built-in feature of the generalized sidelobe canceller (GSC) structure.
The parameter β(k) is an adaptively determined, frequency dependent, complex parameter that minimizes the noise under the constraint that the signal from the target direction is unaltered, e.g. β(k) = ⟨C2*(k)·C1(k)⟩/(⟨|C2(k)|²⟩ + c), where ⟨·⟩ denotes averaging over time and c is a constant. The determination of β is performed in unit ABF.
The adaptation factor β(k) is a weight applied to the target cancelling beamformer. Hereby, we can adapt β(k) knowing that the target direction is unaltered.
O=IN′BTE·W*o1+IN′ITE·W*o2, and C=IN′BTE·W*c1+IN′ITE·W*c2,
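As an illustration of the above (the averaging choice and the value of the constant c are assumptions, not prescribed by the disclosure), the adaptive parameter and the beamformed output for one frequency band could be computed as:

```python
import numpy as np

def adaptive_beamformer(O, C, c=1e-4):
    """Adaptive two-channel beamformer Y = O - beta * C for one frequency band.

    O : complex sub-band frames of the fixed (e.g. delay-and-sum) beamformer.
    C : complex sub-band frames of the target-cancelling beamformer.
    beta is chosen to minimize the output power; since beta only weights the
    target-cancelling branch, the target component is left unaltered
    (the MVDR/GSC constraint). c is a small regularization constant.
    """
    O = np.asarray(O, dtype=complex)
    C = np.asarray(C, dtype=complex)
    beta = np.mean(np.conj(C) * O) / (np.mean(np.abs(C) ** 2) + c)
    return O - beta * C, beta
```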
FIGS. 4A, 4B, and 4C each show an exemplary hearing device according to the present disclosure. The hearing device (HD), e.g. a hearing aid, is of a particular style (sometimes termed receiver-in-the-ear, or RITE, style) comprising a BTE-part (BTE) adapted for being located at or behind an ear of a user and an ITE-part (ITE) adapted for being located in or at an ear canal of a user's ear and comprising an output transducer (SPK), e.g. a receiver (loudspeaker). The BTE-part and the ITE-part are connected (e.g. electrically connected) by a connecting element (IC) and internal wiring in the ITE- and BTE-parts (cf. e.g. schematically illustrated as wiring Wx in the BTE-part). The BTE- and ITE-parts each comprise an input transducer, e.g. a microphone (MBTE and MITE, respectively), which are used to pick up sounds from the environment of a user wearing the hearing device. In an embodiment, the ITE-part is relatively open, allowing air to pass through and/or around it, thereby minimizing the occlusion effect perceived by the user. In an embodiment, the ITE-part according to the present disclosure is less open than a typical RITE-style comprising only a loudspeaker (SPK) and a dome (DO) to position the loudspeaker in the ear canal (cf.
In the embodiments of a hearing device (HD) in
The hearing device (HD) comprises an output transducer (SPK) providing an enhanced output signal as stimuli perceivable by the user as sound based on an enhanced audio signal from the signal processor (DSP) or a signal derived therefrom. Alternatively or additionally, the enhanced audio signal from the signal processor (DSP) may be further processed and/or transmitted to another device depending on the specific application scenario.
In the embodiment of a hearing device in
In the scenario of
Each of the hearing devices (HD) exemplified in
In an embodiment, the hearing device (HD), e.g. a hearing aid (e.g. the processor (DSP)), is adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user.
The hearing device of
The embodiment of a hearing device shown in
The embodiment of
The embodiment of a hearing device shown in
In the embodiment of
The auxiliary device (AD) comprising the user interface (UI) is preferably adapted for being held in a hand of a user (U).
In an embodiment, the auxiliary device (AD) is or comprises a smartphone or similar device. In an embodiment, the auxiliary device (AD) is or comprises an audio gateway device adapted for receiving a multitude of audio signals (e.g. from an entertainment device, e.g. a TV or a music player, a telephone apparatus, e.g. a mobile telephone or a computer, e.g. a PC) and adapted for selecting and/or combining an appropriate one of the received audio signals (or combination of signals) for transmission to the hearing device. In an embodiment, the auxiliary device (AD) is or comprises a remote control for controlling functionality and operation of the hearing device(s). In an embodiment, the function of a remote control is implemented in a smartphone, the smartphone possibly running an APP allowing to control functionality of the audio processing device via the smartphone (the hearing device(s) comprising an appropriate wireless interface to the smartphone, e.g. based on Bluetooth or some other standardized or proprietary scheme).
The input unit (IU) of the embodiment of
In case only the ITE part has been replaced, and the BTE part is positioned at the same place, HBTE,ov,ref = H′BTE,ov.
The hearing system according to the present disclosure comprises a sensor integration device configured to be worn on the head of a user and comprising a head-worn carrier, here embodied in a spectacle frame.
The hearing system comprises left and right hearing devices and a number of sensors mounted on the spectacle frame. The hearing system (HS) comprises a number of sensors S1i, S2i (i=1, . . . , NS) associated with (e.g. forming part of or connected to) left and right hearing devices (HD1, HD2), respectively. NS is the number of sensors located on each side of the frame (in the example of
The BTE- and ITE parts (BTE and ITE) of the hearing devices are electrically connected, either wirelessly or wired, as indicated by the dashed connection between them in
Estimating a Look Vector or Steering Vector d:
In the case where only the target sound is present, the sound recorded at the microphones (e.g. MBTE and MITE) is the source signal s filtered by H = [H1, H2]^T, which is the vector of transfer functions between the position of the source s and the microphones. In the frequency domain, for each frequency channel k and time index m we have X(k,m) = S(k,m)·H(k), where X(k,m) contains the microphone signals.
Omitting the frequency index k, we may estimate a covariance matrix as Ĉ = (1/N)·Σm X(m)·X(m)^H, where N is a time index (e.g. a time frame index). The covariance matrix may as well be estimated recursively. If the sound from the look direction is the only sound, the covariance matrix is given by C = H·H^H (up to a scaling by the source signal power), where H is the vector H = [H1, H2]^T with the time and frequency indices omitted (actually, H does not change over time), and the steering vector is proportional to any of the columns of C; e.g. the normalized steering vector becomes d = H/H1 = [1, H2/H1]^T.
If noise is present, but known, the procedure described in EP3300078A1 can be applied. Alternatively, normalization is performed w.r.t. the second element: d = H/H2 = [H1/H2, 1]^T.
This normalization is more appropriate if the first microphone has been replaced.
In the previous examples (except
It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
Accordingly, the scope should be judged in terms of the claims that follow.