A variety of audio and/or hearing devices exist that provide a user with audio from an electronic device, such as a cell phone, with enhanced sounds and speech, such as a medical hearing aid, and/or with active noise control and/or noise cancellation. Many of these audio and hearing devices are wireless, such as wireless “ear buds.” In conventional wireless ear buds, however, each earpiece operates separately and independently of the other to perform active noise control and/or noise cancellation. As a result, such ear buds cannot effectively utilize conventional speech enhancement methods and techniques.
Various embodiments of the present technology comprise a method and system for wireless audio. In various embodiments, the system comprises a set of wirelessly connected ear buds, each ear bud suitable for placing in a human ear canal. Each ear bud comprises a microphone, an asynchronous sampling rate converter, a timer, and an audio clock. One ear bud from the set further comprises a control circuit and a synchronizer to synchronize the input of sound signals captured by the microphones and/or synchronize the processing and output of the sound signals.
A more complete understanding of the present technology may be derived by referring to the detailed description when considered in connection with the following illustrative figures. In the following figures, like reference numbers refer to similar elements and steps throughout the figures.
The present technology may be described in terms of functional block components and various processing steps. Such functional blocks may be realized by any number of components configured to perform the specified functions and achieve the various results. For example, the present technology may employ various clocks, timers, buffers, analog-to-digital converters, microphones, asynchronous sampling rate converters, and the like, which may carry out a variety of functions. In addition, the present technology may be practiced in conjunction with any number of audio systems, such as medical hearing aids, audio earpieces (i.e., ear buds), and the like, and the systems described are merely exemplary applications of the technology. Further, the present technology may employ any number of conventional techniques for exchanging data (either wirelessly or electrically), providing speech enhancement, attenuating desired frequencies, and the like.
Methods and systems for wireless audio according to various aspects of the present technology may operate in conjunction with any suitable electronic system and/or device, such as “smart devices,” wearables, consumer electronics, portable devices, audio players, and the like.
Referring to
The audio system 100 may be further configured for selective operation by the user. For example, the audio system 100 may have a manual control (not shown) that allows the user to set the operation of the audio system 100 to a desired mode. For example, the audio system 100 may comprise a listening mode, an ambient mode, and a noise cancelling mode. The listening mode may be suitable for communicating with a person standing in front of the user. In the listening mode, all sounds other than the person's speech are attenuated. The ambient mode may be suitable for providing safety and may attenuate human speech but amplify and/or pass other environmental sounds, such as car noise, train noise, and the like. The noise cancelling mode may be suitable for relaxation and may attenuate all noises. The noise cancelling mode may be activated while the audio system 100 is producing pre-recorded sound.
The audio system 100 may comprise any suitable device for manually controlling or otherwise setting the desired mode of operation. For example, the earpiece 145 and/or a communicatively coupled electronic device, such as a cell phone, may comprise a switch, dial, button, and the like, to allow the user to manually control the mode of operation.
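For illustration only, the following Python sketch shows one way such a user-selectable mode could be represented in software and dispatched to mode-specific processing; the mode names mirror the modes described above, while the processing functions are hypothetical placeholders rather than part of the described system.

```python
from enum import Enum, auto

class Mode(Enum):
    LISTENING = auto()         # enhance speech from a talker directly in front of the user
    AMBIENT = auto()           # attenuate speech, pass/amplify environmental sounds
    NOISE_CANCELLING = auto()  # attenuate all external sound

# Placeholder processing paths; a real implementation would apply the
# mode-specific filtering described in the text.
def enhance_center_speech(frame):
    return frame

def pass_environment(frame):
    return frame

def cancel_all_noise(frame):
    return [0.0] * len(frame)

DISPATCH = {
    Mode.LISTENING: enhance_center_speech,
    Mode.AMBIENT: pass_environment,
    Mode.NOISE_CANCELLING: cancel_all_noise,
}

def process_frame(frame, mode):
    """Route one block of samples to the processing path for the selected mode."""
    return DISPATCH[mode](frame)

# Example: the user selects the noise cancelling mode via a switch or a phone app.
print(process_frame([0.1, -0.2, 0.3], Mode.NOISE_CANCELLING))
```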
According to various embodiments, the audio system 100 may further employ any suitable method or technique for transmitting/receiving data, such as through a wireless communication system. For example, the audio system 100 may employ wireless communication between a master device and a slave device, such as a “Bluetooth” communication system, or a near-field magnetic induction (NFMI) communication system.
Each earpiece 145 provides various audio to the user. The set of earpieces 145(1), 145(2) operate in conjunction with each other and may be configured to synchronize with each other to provide the user with synchronized audio. The set of earpieces 145(1), 145(2) may be further configured to process sound, such as provide speech enhancement and attenuate desired frequencies. According to various embodiments, the set of earpieces 145(1), 145(2) are configured to detect sound and transmit sound.
According to various embodiments, each earpiece 145 is shaped to fit in or near a human ear canal. For example, a portion of the earpiece 145 may block the ear canal, or the earpiece 145 may be shaped to fit over the outer ear. According to an exemplary embodiment, the left and right earpieces 145(1), 145(2) communicate with each other via a wireless connection. According to various embodiments, the left and right earpieces 145(1), 145(2) may also communicate via a wireless connection with an electronic device, such as a cell phone.
Each earpiece 145 may comprise a microphone 105 to detect sound in the user's environment. For example, the left earpiece 145(1) comprises a first microphone 105(1) and the right earpiece 145(2) comprises a second microphone 105(2). The microphone 105 may be positioned on an area of the earpiece 145 that faces away from the ear canal to detect sounds in front of and/or around the user. The microphone 105 may comprise any device and/or circuit suitable for detecting a range of sound frequencies and generating an analog sound signal in response to the detected sound.
Each earpiece 145 may further comprise an analog-to-digital converter (ADC) 110 to convert an analog signal to a digital signal. For example, the left earpiece 145(1) comprises a first ADC 110(1) and the right earpiece 145(2) comprises a second ADC 110(2). The ADC 110 may be connected to the microphone 105 and configured to receive the analog sound signals from the microphone 105. For example, the first ADC 110(1) is connected to and receives sound signals from the first microphone 105(1) and the second ADC 110(2) is connected to and receives sound signals from the second microphone 105(2). The ADC 110 processes the analog sound signal from the microphone 105 and converts the analog sound signal to a digital sound signal. The ADC 110 may comprise any device and/or circuit suitable for converting an analog signal to a digital signal and may comprise any suitable ADC architecture.
Each earpiece 145 may comprise an asynchronous sampling rate converter (ASRC) 115 to change the sampling rate of a signal to obtain a new representation of the underlying signal. For example, the left earpiece 145(1) comprises a first ASRC 115(1) and the right earpiece 145(2) comprises a second ASRC 115(2). The ASRC 115 may be connected to an output terminal of the ADC 110 and configured to receive the digital sound signal. For example, the first ASRC 115(1) is connected to and receives digital sound signals from the first ADC 110(1) and the second ASRC 115(2) is connected to and receives digital sound signals from the second ADC 110(2). The ASRC 115 may comprise any device and/or circuit suitable for sampling and/or converting data according to an asynchronous, time-varying rate. According to an exemplary embodiment, each ASRC 115 is electrically connected to the respective ADC 110. Alternative embodiments may, however, employ a wireless connection.
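As a rough, hypothetical illustration of what an ASRC does, the following Python sketch resamples one block of audio by linear interpolation at a conversion ratio that can vary from block to block; a practical ASRC would typically use polyphase filtering, and the numbers and names here are assumptions for illustration only.

```python
import numpy as np

def asrc_block(samples, ratio):
    """Resample one block of audio by linear interpolation.

    `ratio` is output rate / input rate; e.g., ratio = 1.0005 produces
    slightly more output samples than input samples. An ASRC allows this
    ratio to vary over time, which is how a small clock mismatch between
    the two earpieces can be absorbed.
    """
    n_in = len(samples)
    n_out = int(np.floor(n_in * ratio))
    # Fractional read positions into the input block for each output sample.
    positions = np.arange(n_out) / ratio
    return np.interp(positions, np.arange(n_in), samples)

# Example: one 10 ms block at 16 kHz, ratio slightly above 1 to compensate a slow clock.
block = np.sin(2 * np.pi * 440 * np.arange(160) / 16000.0)
out = asrc_block(block, ratio=1.0005)
print(len(block), "->", len(out))  # the fractional extra sample carries over in a real ASRC
```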
Each earpiece 145 may further comprise an input buffer 120 to receive and hold incoming data. For example, the left earpiece 145(1) comprises a first input buffer 120(1) and the right earpiece 145(2) comprises a second input buffer 120(2). The input buffer 120 may be connected to an output terminal of the ASRC 115. For example, the first input buffer 120(1) is connected to and receives and stores an output from the first ASRC 115(1) and the second input buffer 120(2) is connected to and receives and stores an output from the second ASRC 115(2). The input buffer 120 may comprise any memory device and/or circuit suitable for temporarily storing data.
According to an exemplary embodiment, each input buffer 120 is electrically connected to the respective ASRC 115. Alternative embodiments may, however, employ a wireless connection.
Each earpiece 145 may further comprise an audio clock 130 to generate a clock signal. In various embodiments, the ADC 110 receives and operates according to the clock signal. For example, the left earpiece 145(1) comprises a first audio clock 130(1) configured to transmit a first clock signal to the first ADC 110(1) and the right earpiece 145(2) comprises a second audio clock 130(2) configured to transmit a second clock signal to the second ADC 110(2). The audio clock 130 may comprise any suitable clock generator circuit.
According to an exemplary embodiment, the first and second audio clocks 130(1), 130(2) may be configured to operate at a predetermined frequency, for example 16 kHz. While each audio clock 130 is configured to operate at the same predetermined frequency, variations between the first and second audio clocks 130(1), 130(2) may create slight differences in frequency and/or put the two clocks 130(1), 130(2) out of phase with each other. Variations between the first and second audio clocks 130(1), 130(2) may be due to manufacturing differences, variations in the components, and the like.
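To make the effect of such clock variation concrete, the following sketch (with an assumed, illustrative 50 ppm error) shows how a small frequency difference between two nominally 16 kHz clocks accumulates into a growing sample offset.

```python
# Two clocks that are both nominally 16 kHz, but one runs 50 ppm fast.
# The 50 ppm figure is an assumption for illustration, not a value from the text.
nominal_hz = 16_000.0
clock_left_hz = nominal_hz
clock_right_hz = nominal_hz * (1 + 50e-6)

for seconds in (1, 10, 60):
    offset_samples = (clock_right_hz - clock_left_hz) * seconds
    print(f"after {seconds:3d} s the right ADC has produced "
          f"{offset_samples:.1f} extra samples")

# Without resynchronization the two sample streams drift steadily apart,
# which is why the ASRC ratio and/or clock frequency is adjusted below.
```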
According to an exemplary embodiment, each audio clock 130 is electrically connected to the respective ADC 110. Alternative embodiments may, however, employ a wireless connection.
Each earpiece 145 may further comprise a timer 140 to provide time delays, operate as an oscillator, and/or operate as a flip-flop element. In various embodiments, the ADC 110 receives and operates according to the timer 140 and in conjunction with the audio clock 130. For example, the left earpiece 145(1) comprises a first timer 140(1) configured to transmit a first timer signal to the first ADC 110(1) and the right earpiece 145(2) comprises a second timer 140(2) configured to transmit a second timer signal to the second ADC 110(2).
According to an exemplary embodiment, each timer 140 is electrically connected to the respective ADC 110. Alternative embodiments may, however, employ a wireless connection.
The audio system 100 may further comprise a control circuit 125 configured to generate and transmit various control signals to the ASRC 115 and the audio clock 130. For example, the control circuit 125 may be communicatively coupled to the first and second ASRCs 115(1), 115(2) and configured to generate and transmit an ASRC control signal to each ASRC substantially simultaneously. The control circuit 125 may be implemented in either the left earpiece 145(1) or the right earpiece 145(2). According to an exemplary embodiment, the control circuit 125 is implemented in the left earpiece 145(1), and therefore the ASRC control signal may reach the first ASRC 115(1) slightly sooner (e.g., by 1 millisecond) than it reaches the second ASRC 115(2), due to the longer distance the signal must travel to the right earpiece 145(2).
Similarly, the control circuit 125 may be configured to generate and transmit a clock control signal to the audio clock 130. For example, the control circuit 125 may be communicatively coupled to the first and second audio clocks 130(1), 130(2) and configured to transmit the clock control signal to each clock substantially simultaneously.
According to an exemplary embodiment where the control circuit 125 is implemented in the left earpiece 145(1), the control circuit 125 is electrically connected to the first input buffer 120(1), the first ASRC 115(1), and the first audio clock 130(1). Further, the control circuit 125 is wirelessly connected to the second input buffer 120(2), the second ASRC 115(2), and the second audio clock 130(2).
However, in an alternative embodiment, the control circuit 125 may be implemented in the right earpiece 145(2) and is electrically connected to second input buffer 120(2), the second ASRC 115(2), and the second audio clock 130(2). In the present embodiment, the control circuit 125 is wirelessly connected to the first input buffer 120(1), the first ASRC 115(1), and the first audio clock 130(1).
The audio system 100 may further comprise a synchronizer circuit 135 configured to synchronize a start time for operating the first and second ADCs 110(1), 110(2). For example, the synchronizer circuit 135 may generate a timer signal and transmit the timer signal to each of the first and second timers 140(1), 140(2) substantially simultaneously. The synchronizer circuit 135 may be implemented in either the left earpiece 145(1) or the right earpiece 145(2). According to an exemplary embodiment, the synchronizer circuit 135 is implemented in the left earpiece 145(1), and therefore the timer signal may reach the first timer 140(1) slightly sooner (e.g., by 1 millisecond) than it reaches the second timer 140(2), due to the longer distance the signal must travel to the right earpiece 145(2).
According to an exemplary embodiment where the synchronizer circuit 135 is implemented in the left earpiece 145(1), the synchronizer circuit 135 is electrically connected to the first timer 140(1) and wirelessly connected to the second timer 140(2). However, in an alternative embodiment, the synchronizer circuit 135 may be implemented in the right earpiece 145(2) and electrically connected to the second timer 140(2) and wirelessly connected to the first timer 140(1).
According to various embodiments, the control circuit 125 and the synchronizer circuit 135 operate in conjunction with each other to synchronize an operation start time for operating the first and second ADCs 110(1), 110(2), which in turn synchronizes the operation of the first and second ASRCs 115(1), 115(2) and the first and second input buffers 120(1), 120(2). Accordingly, the left and right earpieces 145(1), 145(2) are synchronized with each other and generate output signals, such as a left channel signal and right channel signal, simultaneously.
Referring to
According to an alternative communication method, and referring to
According to various embodiments, the audio system 100 may further comprise a signal processor 400 configured to process the sound data and generate the output signals, such as the left channel signal and the right channel signal, and transmit the output signals to a respective speaker 410. For example, the left earpiece 145(1) may further comprise a first speaker 410(1) to receive the left channel signal and the right earpiece 145(2) may further comprise a second speaker 410(2) to receive the right channel signal.
In one embodiment, and referring to
In an alternative embodiment, and referring to
According to various embodiments, the signal processor 400 may be configured to process the sound data according to the desired mode of operation, such as the listening mode, the ambient mode, and the noise cancelling mode. For example, the signal processor 400 may be configured to perform multiple data processing methods to accommodate each mode of operation, since each mode of operation may require different signal processing methods.
The audio system 100 may be configured to distinguish the location of a sound source. For example, the audio system 100 may determine whether the sound is coming from a source located directly in front of the user (i.e., the sound source is located substantially the same distance from the first microphone 105(1) and the second microphone 105(2)). According to the present embodiment, the audio system 100 uses phase information and/or signal power from the first and second microphones 105(1), 105(2) to determine the location of the sound source. For example, the audio system 100 may be configured to compare the phase information from the first and second microphones 105(1), 105(2). In general, when the sound comes from a central location, the phase and power of the audio signals from the first and second microphones 105(1), 105(2) are substantially the same. However, when the sound comes from some other direction, the phase and power of the audio signals will differ. This method of signal processing may be referred to as “center channel focus” and may be utilized during the listening mode.
According to an exemplary embodiment, and referring to
According to an exemplary embodiment, and referring to the left earpiece 145(1), the first FFT circuit 600 transforms the signal from the right earpiece 145(2), via the second and third input buffers 120(2), 405(1), and the second FFT circuit 601 transforms the signal of the left earpiece 145(1) via the first input buffer 120(1). The first and second FFT circuits 600, 601 each output a transformed signal and transmit the transformed signal to the phase detector circuit 615. Each phase detector circuit 615 receives and analyzes data from the first and second microphones 105(1), 105(2), via the first and second FFT circuits 600, 601. Each phase detector circuit 615 compares the phases of data from each microphone 105(1), 105(2), determines which frequency bins contain the sound from the central location, and attenuates the frequency bins that contain sound from non-central locations (locations outside the central location).
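A minimal sketch of the center channel focus idea is shown below, assuming synchronized blocks of left and right microphone samples are available as NumPy arrays: each block is transformed with an FFT, the per-bin phase difference between the channels is computed, and bins whose phase difference indicates an off-center source are attenuated. The threshold and attenuation factor are illustrative assumptions, not values specified in the text.

```python
import numpy as np

def center_channel_focus(left_block, right_block,
                         phase_tol_rad=0.3, attenuation=0.1):
    """Attenuate FFT bins whose inter-microphone phase difference is large.

    Sound arriving from directly in front of the user reaches both
    microphones at nearly the same time, so its per-bin phase difference
    is near zero; off-center sound shows a larger difference and is scaled
    down by `attenuation`. Returns the enhanced left and right blocks.
    """
    L = np.fft.rfft(left_block)
    R = np.fft.rfft(right_block)
    phase_diff = np.angle(L * np.conj(R))            # per-bin phase difference
    off_center = np.abs(phase_diff) > phase_tol_rad  # bins dominated by off-center sound
    gain = np.where(off_center, attenuation, 1.0)
    left_out = np.fft.irfft(L * gain, n=len(left_block))
    right_out = np.fft.irfft(R * gain, n=len(right_block))
    return left_out, right_out

# Example with a synthetic 256-sample block (a centered tone plus uncorrelated noise).
t = np.arange(256) / 16000.0
tone = np.sin(2 * np.pi * 500 * t)
rng = np.random.default_rng(0)
left = tone + 0.1 * rng.standard_normal(256)
right = tone + 0.1 * rng.standard_normal(256)
enh_left, enh_right = center_channel_focus(left, right)
```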
The center channel focus method may be implemented in conjunction with any suitable wireless communication system. For example, the center channel focus method may be implemented in conjunction with the Bluetooth wireless communication system and the NFMI wireless communication system.
According to various and/or alternative embodiments, the signal processor 400 may be further configured to perform other methods of speech enhancement and/or attenuation. For example, the audio system 100 and/or the signal processor 400 may comprise various circuits and perform various signal processing methods to attenuate sound during the noise cancelling mode and the ambient mode.
In operation, and referring to the figures, the synchronizer circuit 135 may first determine an average travel time Ttimer for a signal traveling between the left and right earpieces 145(1), 145(2). For example, the synchronizer circuit 135 may transmit a “send value of timer 2” signal to the second timer 140(2), measure a first travel time T1 from the release of the signal to the receipt of a response, and derive the average (one-way) travel time Ttimer from the measured round trip (e.g., Ttimer=T1/2).
The synchronizer circuit 135 may then set the first timer 140(1) to a value equal to twice the average travel time Ttimer (i.e., timer_1=2*Ttimer) and set the second timer 140(2) to a value equal to the average travel time Ttimer (i.e., timer_2=Ttimer). The synchronizer circuit 135 then receives an acknowledgment signal Ack from the second timer 140(2) and determines a second travel time T2, measured from the release of the “send value of timer 2” signal to the receipt of the acknowledgment signal Ack. Ideally, the second travel time T2 is equal to the value of the first timer 140(1) (i.e., T2=2*Ttimer). If the second travel time T2 is within a predetermined tolerance value Δ of the timer 1 value, then the timing is synchronized and the first and second timers 140(1), 140(2) activate operation of the first and second ADCs 110(1), 110(2), respectively. If the second travel time T2 is greater than the timer 1 value plus the tolerance (T2>timer_1+Δ) or less than the timer 1 value minus the tolerance (T2<timer_1−Δ), then the synchronizer circuit 135 rechecks the second travel time T2 by sending a new “send value of timer 2” signal and waiting for a new acknowledgment signal to acquire a new second travel time. If the new second travel time is still not within the predetermined tolerance after a predetermined number of cycles, then the synchronizer circuit 135 starts over, generating a new travel time value and new values for the first and second timers 140(1), 140(2) (e.g., timer_1, timer_2) according to the same process described above.
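Under the assumption of a simple message-passing link with measurable round-trip delay (the link model, function names, and tolerance values below are illustrative, not the actual wireless protocol), the handshake described above might be sketched in Python as follows.

```python
import random

# Assumed one-way wireless latency between the earpieces (illustrative value).
LINK_LATENCY_S = 0.001

def measure_round_trip():
    """Simulate the 'send value of timer 2' request and its reply/Ack.

    Returns the simulated out-and-back travel time, with a little jitter.
    """
    return 2 * LINK_LATENCY_S + random.uniform(-0.0001, 0.0001)

def synchronize_timers(tolerance_s=0.0005, max_rechecks=3, max_restarts=3):
    """Sketch of the synchronizer handshake described in the text.

    1. Measure a round trip to the remote timer and derive the average
       (one-way) travel time Ttimer.
    2. Set the local timer_1 = 2*Ttimer and the remote timer_2 = Ttimer.
    3. Measure a second travel time T2; if |T2 - timer_1| <= tolerance the
       timers are considered synchronized and the ADCs can start together;
       otherwise recheck, and after too many failed rechecks start over
       with a fresh Ttimer measurement.
    """
    for _ in range(max_restarts):
        t_timer = measure_round_trip() / 2.0         # average one-way travel time
        timer_1, timer_2 = 2.0 * t_timer, t_timer    # values written to the two timers

        for _ in range(max_rechecks):
            t2 = measure_round_trip()                # new "send value of timer 2" + Ack
            if abs(t2 - timer_1) <= tolerance_s:
                return timer_1, timer_2              # timers armed; ADCs start together
        # Tolerance never met: fall through and re-measure Ttimer from scratch.
    raise RuntimeError("could not synchronize within the allowed attempts")

print(synchronize_timers())
```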
Referring again to the figures, once the first and second ADCs 110(1), 110(2) are operating, the control circuit 125 may monitor the number of data samples stored in the first and second input buffers 120(1), 120(2) at successive times N=1, N=2 and verify that the following relationship holds:

d2_cnt1=(d1_cnt1+d1_cnt2)/2
In the above equation, d1_cnt1 is the number of data samples from the first input buffer 120(1) at time N=1, d2_cnt1 is the number of data samples from the second input buffer 120(2) at time N=1, and d1_cnt2 is a number of data samples from the first input buffer 120(1) at time N=2. If the audio system 100 is synchronized, then the equation above holds true. However, if d2_cnt1 is not equal to the expression (d1_cnt1+d1_cnt2)/2, then the audio system 100 may adjust a conversion ratio of the first ASRC 115(1) or the second ASRC 115(2). Alternatively, the audio system 100 may adjust the frequency of the first audio clock 130(1) or the second audio clock 130(2).
For example, if d2_cnt1 is greater than the expression (d1_cnt1+d1_cnt2)/2, then the control circuit 125 may increase the conversion ratio of the first ASRC 115(1) or decrease the conversion ratio of the second ASRC 115(2). Alternatively, the control circuit 125 may increase the frequency of the first audio clock 130(1) or decrease the frequency of the second audio clock 130(2).
If d2_cnt1 is less than the expression (d1_cnt1+d1_cnt2)/2, then the control circuit 125 may decrease the conversion ratio of the first ASRC 115(1) or increase the conversion ratio of the second ASRC 115(2). Alternatively, the control circuit 125 may decrease the frequency of the first audio clock 130(1) or increase the frequency of the second audio clock 130(2).
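The sample-count check and the corrective actions described above can be summarized in a short sketch; the counts, tolerance, and adjustment step below are assumptions for illustration, and a real implementation would read the counts from the input buffers and write the updated ratio or frequency to the ASRC or audio clock hardware.

```python
def check_and_adjust(d1_cnt1, d2_cnt1, d1_cnt2,
                     ratio_1, ratio_2, step=1e-5, tolerance=0.5):
    """Compare buffer sample counts and nudge the ASRC conversion ratios.

    If the earpieces are synchronized, d2_cnt1 == (d1_cnt1 + d1_cnt2) / 2.
    Otherwise the conversion ratio of one ASRC is nudged up or down;
    adjusting an audio-clock frequency instead would follow the same logic.
    """
    expected = (d1_cnt1 + d1_cnt2) / 2.0
    if abs(d2_cnt1 - expected) <= tolerance:
        return ratio_1, ratio_2                 # already synchronized
    if d2_cnt1 > expected:
        return ratio_1 + step, ratio_2          # or equivalently: decrease ratio_2
    return ratio_1 - step, ratio_2              # or equivalently: increase ratio_2

# Example: the right buffer has accumulated one extra sample.
print(check_and_adjust(d1_cnt1=160, d2_cnt1=161, d1_cnt2=160,
                       ratio_1=1.0, ratio_2=1.0))
```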
The audio system 100 may then perform various speech enhancement processes, such as the center channel focus process described above, or provide other noise-cancelling or noise-attenuating processes based on the user's desired mode of operation, such as the noise cancelling mode or the ambient mode. The audio system 100 may be configured to continuously control the ASRC 115 and/or the audio clock 130 and to update the signal processing methods as the user changes the mode of operation.
In the foregoing description, the technology has been described with reference to specific exemplary embodiments. The particular implementations shown and described are illustrative of the technology and its best mode and are not intended to otherwise limit the scope of the present technology in any way. Indeed, for the sake of brevity, conventional manufacturing, connection, preparation, and other functional aspects of the method and system may not be described in detail. Furthermore, the connecting lines shown in the various figures are intended to represent exemplary functional relationships and/or steps between the various elements. Many alternative or additional functional relationships or physical connections may be present in a practical system.
The technology has been described with reference to specific exemplary embodiments. Various modifications and changes, however, may be made without departing from the scope of the present technology. The description and figures are to be regarded in an illustrative manner, rather than a restrictive one and all such modifications are intended to be included within the scope of the present technology. Accordingly, the scope of the technology should be determined by the generic embodiments described and their legal equivalents rather than by merely the specific examples described above. For example, the steps recited in any method or process embodiment may be executed in any order, unless otherwise expressly specified, and are not limited to the explicit order presented in the specific examples. Additionally, the components and/or elements recited in any apparatus embodiment may be assembled or otherwise operationally configured in a variety of permutations to produce substantially the same result as the present technology and are accordingly not limited to the specific configuration recited in the specific examples.
Benefits, other advantages and solutions to problems have been described above with regard to particular embodiments. Any benefit, advantage, solution to problems or any element that may cause any particular benefit, advantage or solution to occur or to become more pronounced, however, is not to be construed as a critical, required or essential feature or component.
The terms “comprises”, “comprising”, or any variation thereof, are intended to reference a non-exclusive inclusion, such that a process, method, article, composition or apparatus that comprises a list of elements does not include only those elements recited, but may also include other elements not expressly listed or inherent to such process, method, article, composition or apparatus. Other combinations and/or modifications of the above-described structures, arrangements, applications, proportions, elements, materials or components used in the practice of the present technology, in addition to those not specifically recited, may be varied or otherwise particularly adapted to specific environments, manufacturing specifications, design parameters or other operating requirements without departing from the general principles of the same.
The present technology has been described above with reference to an exemplary embodiment. However, changes and modifications may be made to the exemplary embodiment without departing from the scope of the present technology. These and other changes or modifications are intended to be included within the scope of the present technology, as expressed in the following claims.