The disclosures herein relate in general to audio processing, and in particular to a method and system for configuring an active noise cancellation unit.
Conventionally, active noise cancellation (“ANC”) properties of an audio headset are configurable by manual operation of physical switches (e.g., push buttons) on the headset and/or by the headset receiving configuration information through a universal serial bus (“USB”) connection. The physical switches are potentially cumbersome, inflexible and/or confusing to operate. The USB connection relies on a separate USB cable, which is potentially inconvenient.
An active noise cancellation (“ANC”) unit receives audio signals from a user-operated device through a connection. In response to the audio signals, the ANC unit causes at least one speaker to generate sound waves. The ANC unit receives a set of parameters from the user-operated device through the connection. The connection is at least one of: an audio cable; and a wireless connection. The set of parameters represents a user-specified combination of ANC properties. The ANC unit automatically adapts itself to implement the set of parameters for substantially achieving the user-specified combination of ANC properties in operations of the ANC unit.
In the example of
As shown in
The system 100 operates in association with the user 212. In response to signals from the processor 202, the screen of the display device 210 displays visual images, which represent information, so that the user 212 is thereby enabled to view the visual images on the screen of the display device 210. In one embodiment, the display device 210 is a touchscreen (e.g., the touchscreen 102) that includes: (a) a liquid crystal display (“LCD”) device; and (b) touch-sensitive circuitry of such LCD device, so that the touch-sensitive circuitry is integral with such LCD device. Accordingly, the user 212 operates the touchscreen 102 (e.g., virtual keys thereof, such as a virtual keyboard and/or virtual keypad) for specifying information (e.g., alphanumeric text information) to the processor 202, which receives such information from the touchscreen 102.
For example, the touchscreen 102: (a) detects presence and location of a physical touch (e.g., by a finger of the user 212, and/or by a passive stylus object) within a display area of the touchscreen 102; and (b) in response thereto, outputs signals (indicative of such detected presence and location) to the processor 202. In that manner, the user 212 can touch (e.g., single tap and/or double tap) the touchscreen 102 to: (a) select a portion (e.g., region) of a visual image that is then-currently displayed by the touchscreen 102; and/or (b) cause the touchscreen 102 to output various information to the processor 202.
In the example of
Similarly: (a) an error microphone 306 is located within the right ear region; and (b) a reference microphone 308 is located outside the right ear region (e.g., on an exterior side of the right earset of the headset 114). The error microphone 306: (a) converts, into signals, sound waves from the right ear region (e.g., including sound waves from the right speaker 112); and (b) outputs those signals. The reference microphone 308: (a) converts, into signals, sound waves from outside the right ear region (e.g., ambient noise around the reference microphone 308); and (b) outputs those signals. Accordingly, the signals from the error microphone 306 and the reference microphone 308 represent various sound waves (collectively “right sounds”).
Also, the headset 114 includes an active noise cancellation (“ANC”) unit 310. The ANC unit 310: (a) receives and processes the signals from the error microphone 302 and the reference microphone 304; and (b) in response thereto, outputs signals for causing the left speaker 110 to generate first additional sound waves that cancel at least some noise in the left sounds. Similarly, the ANC unit 310: (a) receives and processes the signals from the error microphone 306 and the reference microphone 308; and (b) in response thereto, outputs signals for causing the right speaker 112 to generate second additional sound waves that cancel at least some noise in the right sounds.
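The noise-cancelling operation of the ANC unit 310 can be illustrated with a simplified adaptive filter. The following sketch uses a basic least-mean-squares (LMS) update on the reference-microphone signal to cancel noise observed at the error microphone; a practical ANC unit would typically use a filtered-x LMS variant with a secondary-path model, which is omitted here for brevity. All names, tap counts, and step sizes are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def lms_anc(reference, primary, taps=16, mu=0.01):
    """Feedforward ANC sketch: adapt FIR weights w so that the
    anti-noise w @ x_buf cancels the primary noise observed at the
    error microphone.  Returns the residual (error-mic) signal."""
    w = np.zeros(taps)          # adaptive filter weights
    x_buf = np.zeros(taps)      # recent reference-mic samples
    residual = np.empty_like(primary)
    for n in range(len(primary)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = reference[n]
        anti_noise = w @ x_buf              # speaker's cancelling output
        e = primary[n] - anti_noise         # what the error mic hears
        w += mu * e * x_buf                 # LMS weight update
        residual[n] = e
    return residual

# Demo: the primary noise is the reference passed through an unknown
# 3-tap acoustic path (made up for this sketch).
rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
path = np.array([0.6, -0.3, 0.1])
primary = np.convolve(ref, path)[:len(ref)]
res = lms_anc(ref, primary)
# After convergence, the residual energy is far below the noise energy.
print(np.mean(primary[-500:]**2) / np.mean(res[-500:]**2) > 10)  # → True
```

The same loop runs once per ear region: the error microphone 302 and reference microphone 304 feed one instance for the left speaker 110, and the error microphone 306 and reference microphone 308 feed another for the right speaker 112.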
In one example, the ANC unit 310 optionally: (a) receives a left channel of the analog audio signals from the processor 202 (“left audio”) through the cable 108 and/or a wireless (e.g., BLUETOOTH) interface unit; and (b) combines the left audio into the signals that the ANC unit 310 outputs to the left speaker 110 (collectively “left speaker signals”). Accordingly, in this example: (a) the left speaker 110 generates the first additional sound waves to also represent the left audio's information (e.g., music and/or speech), which is audible to a left ear of the user 212; and (b) the ANC unit 310 suitably accounts for the left audio in its further processing (e.g., estimating noise) of the signals from the error microphone 302 for cancelling at least some noise in the left sounds.
Similarly, the ANC unit 310 optionally: (a) receives a right channel of the analog audio signals from the processor 202 (“right audio”) through the cable 108 and/or the wireless interface unit; and (b) combines the right audio into the signals that the ANC unit 310 outputs to the right speaker 112 (collectively “right speaker signals”). Accordingly, in this example: (a) the right speaker 112 generates the second additional sound waves to also represent the right audio's information (e.g., music and/or speech), which is audible to a right ear of the user 212; and (b) the ANC unit 310 suitably accounts for the right audio in its further processing (e.g., estimating noise) of the signals from the error microphone 306 for cancelling at least some noise in the right sounds.
As shown in
Accordingly, digital-to-analog converters (“DACs”) receive digital versions of the left speaker signals and the right speaker signals from the DSP. The DACs convert those digital versions into analog versions thereof, which the DACs output to an amplifier (“Amp”). The Amp: (a) receives and amplifies those analog versions from the DACs; and (b) outputs such amplified versions to the speakers 110 and 112.
Also, the ANC unit 310 includes a microcontroller (“MCU”) for configuring the DSP and various other components of the ANC unit 310. For clarity, although
By suitably operating the menu 402 through the display device 210 (e.g., by selecting from among predefined equalization profiles within the menu 402), the user 212 specifies its preferred equalization profile for sound waves from the speakers 110 and 112. Also, by suitably operating the menu 404 through the display device 210 (e.g., by selecting from among predefined ANC profiles within the menu 404), the user 212 specifies its preferred ANC profile for those sound waves. Further, by suitably operating the menu 406 through the display device 210 (e.g., by selecting from among predefined ANC effects within the menu 406), the user 212 specifies its preferred ANC effect(s) for those sound waves.
In response to a combination of those specifications by the user 212 (e.g., the user 212's preferred equalization profile via the menu 402, combined with the user 212's preferred ANC profile via the menu 404, combined with the user 212's preferred ANC effect(s) via the menu 406), the processor 202 causes the window 408 to show an example graphical representation of how those sound waves could be affected by such combination. Accordingly, such combination is a user-specified combination of ANC properties, including the user-specified equalization profile, ANC profile and ANC effect(s). After the user 212 is satisfied with such combination of ANC properties, the user 212 informs the processor 202 of such fact by suitably operating (e.g., touching) the download button 410, as discussed hereinbelow in connection with
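One simple way to represent the user-specified combination of ANC properties as a downloadable set of component parameters is sketched below. The field names, profile values, and length-prefixed encoding are purely illustrative assumptions for this sketch; the disclosure does not prescribe any particular encoding.

```python
import json
import struct

# Hypothetical combination of ANC properties; all names are illustrative.
combination = {
    "eq_profile": "bass_boost",      # selected via the menu 402
    "anc_profile": "airplane",       # selected via the menu 404
    "anc_effects": ["wind_reduce"],  # selected via the menu 406
}

def to_component_parameters(combo):
    """Serialize the combination into a compact byte payload that the
    processor could transmit to the headset (a 2-byte big-endian
    length prefix followed by a JSON body)."""
    body = json.dumps(combo, separators=(",", ":")).encode("ascii")
    return struct.pack(">H", len(body)) + body

payload = to_component_parameters(combination)
length, = struct.unpack(">H", payload[:2])
print(length == len(payload) - 2)   # → True
```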
At a next step 504, the user 212 suitably operates the download button 410 (
At a next step 506, the processor 202 determines whether the headset 114 acknowledges its receipt of the initiate download message. In one example, the headset 114 outputs such acknowledgement to the processor 202 through the cable 108 and/or the wireless (e.g., BLUETOOTH) connection. In response to the processor 202 receiving such acknowledgement from the headset 114 within a predetermined window of time after the initiate download message, the operation continues from the step 506 to a step 508.
At the step 508, the processor 202 transmits such combination's respective set of component parameters to the headset 114 through the cable 108 and/or the wireless (e.g., BLUETOOTH) interface unit (
Referring again to the step 506, if the processor 202 does not receive the headset 114 acknowledgement within the predetermined window of time after the initiate download message, then the operation continues from the step 506 to a step 512. Similarly, if the processor 202 does not receive the headset 114 acknowledgement within a predetermined window of time after such transmission of those component parameters, then the operation continues from the step 510 to the step 512. At the step 512, the processor 202 executes a suitable error handler program, and the operation returns to the step 502.
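The host-side flow of steps 504 through 512 can be sketched as a small state machine: send the initiate download message, await an acknowledgement within the predetermined window of time, send the component parameters, and await a second acknowledgement, falling through to the error handler on any timeout. The transport object, message bytes, and timeout value below are assumptions of this sketch, not part of the disclosure.

```python
import queue

ACK_TIMEOUT = 1.0  # predetermined window of time, in seconds (illustrative)

def host_download(link, params):
    """Host-side sketch of steps 504-512.  `link` is any object with a
    send(bytes) method and an `acks` receive queue; both are
    assumptions of this sketch."""
    link.send(b"INIT_DOWNLOAD")                          # step 504
    try:
        if link.acks.get(timeout=ACK_TIMEOUT) != b"ACK":  # step 506
            return "error_handler"                        # step 512
        link.send(params)                                 # step 508
        if link.acks.get(timeout=ACK_TIMEOUT) != b"ACK":  # step 510
            return "error_handler"                        # step 512
        return "done"
    except queue.Empty:        # no acknowledgement within the window
        return "error_handler"                            # step 512

class LoopbackLink:
    """Trivial stand-in for the cable 108 or the wireless connection
    that acknowledges every message immediately."""
    def __init__(self):
        self.acks = queue.Queue()
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)
        self.acks.put(b"ACK")

link = LoopbackLink()
print(host_download(link, b"\x01\x02"))   # → done
```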
In response to the headset 114 determining that it is not receiving an initiate download message from the processor 202, the operation returns from the step 604 to the step 602. Conversely, in response to the headset 114 determining that it is receiving an initiate download message from the processor 202, the operation continues from the step 604 to a step 606. At the step 606, the headset 114: (a) outputs an acknowledgement (acknowledging its receipt of the initiate download message) to the processor 202 through the cable 108 and/or the wireless (e.g., BLUETOOTH) connection; (b) receives a combination's respective set of component parameters (step 508 of
At a next step 608, in response to those component parameters, the headset 114 automatically adapts itself (e.g., configures software and/or hardware of its MCU, DSP and/or various other components of the ANC unit 310) to implement those component parameters for substantially achieving the user-specified combination of ANC properties in the headset 114 operations (discussed hereinabove in connection with
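The headset-side adaptation of step 608 can be sketched as the MCU translating the received component parameters into configuration writes for the DSP. The register map, addresses, and coefficient values below are invented for illustration only; the disclosure does not specify any particular register layout.

```python
# Hypothetical DSP register map; addresses and values are illustrative.
DSP_REGISTERS = {}

PROFILE_COEFFS = {
    "airplane": [0x20, 0x11],   # made-up ANC filter coefficients
    "office":   [0x08, 0x05],
}

def apply_component_parameters(params):
    """Step 608 sketch: the MCU configures the DSP (and, by extension,
    the ANC unit 310) to implement the received component parameters."""
    coeffs = PROFILE_COEFFS[params["anc_profile"]]
    for i, c in enumerate(coeffs):
        DSP_REGISTERS[0x100 + i] = c   # write filter coefficients
    DSP_REGISTERS[0x000] = 1           # enable ANC processing
    return DSP_REGISTERS

regs = apply_component_parameters({"anc_profile": "airplane"})
print(regs[0x000] == 1 and regs[0x100] == 0x20)   # → True
```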
Accordingly, the processor 202 and the headset 114 communicate the following types of information to and from one another through the cable 108 and/or the wireless (e.g., BLUETOOTH) connection:
In one embodiment, all such information is communicated through the same connection, namely either: (a) the cable 108, which is a wired connection; or (b) the wireless (e.g., BLUETOOTH) connection. In such embodiment, the initiate download message, the component parameters, and the acknowledgements thereof (and information represented by such message, parameters and acknowledgements) are inaudible to ears of the user 212, even if the user 212 listens to the sound waves from the speakers 110 and 112, and even if the conventional audio signals (and/or information represented by those signals) are audible to such ears.
In one example, for inaudible communication through the cable 108 (e.g., a conventional three-conductor stereo cable), the transmitting device (e.g., processor 202 or headset 114) generates and outputs two types of inaudible tones, namely: (a) a clock tone through a first conductor of such cable; and (b) a data tone through a second conductor of such cable. With a sharp bandpass filter or a fast Fourier transform (“FFT”), the receiving device (e.g., headset 114 or processor 202) monitors magnitudes of those tones. In such monitoring, the receiving device applies a threshold to quantize each tone as being either a binary logic “1” signal or a binary logic “0” signal.
To start a particular communication, the transmitting device generates and outputs a first predefined sequence of tones for sending a header (e.g., preamble) of such communication to the receiving device. After such header, the transmitting device generates and outputs suitable tones for sending: (a) respective addresses of the transmitting and receiving devices; and (b) payload data of such communication to the receiving device. To end the particular communication, the transmitting device generates and outputs a second predefined sequence of tones for sending a footer of such communication to the receiving device. In this example, each byte has a 1-bit cyclic redundancy check (“CRC”). Accordingly, the processor 202 and the headset 114 are suitable for operating the audio cable 108 (and, similarly, operating the wireless connection) as a binary interface for ultrasonically communicating information with a serial communications protocol.
In the illustrative embodiments, a computer program product is an article of manufacture that has: (a) a computer-readable medium; and (b) a computer-readable program that is stored on such medium. Such program is processable by an instruction execution apparatus (e.g., system or device) for causing the apparatus to perform various operations discussed hereinabove (e.g., discussed in connection with a block diagram). For example, in response to processing (e.g., executing) such program's instructions, the apparatus (e.g., programmable information handling system) performs various operations discussed hereinabove. Accordingly, such operations are computer-implemented.
Such program (e.g., software, firmware, and/or microcode) is written in one or more programming languages, such as: an object-oriented programming language (e.g., C++); a procedural programming language (e.g., C); and/or any suitable combination thereof. In a first example, the computer-readable medium is a computer-readable storage medium. In a second example, the computer-readable medium is a computer-readable signal medium.
A computer-readable storage medium includes any system, device and/or other non-transitory tangible apparatus (e.g., electronic, magnetic, optical, electromagnetic, infrared, semiconductor, and/or any suitable combination thereof) that is suitable for storing a program, so that such program is processable by an instruction execution apparatus for causing the apparatus to perform various operations discussed hereinabove. Examples of a computer-readable storage medium include, but are not limited to: an electrical connection having one or more wires; a portable computer diskette; a hard disk; a random access memory (“RAM”); a read-only memory (“ROM”); an erasable programmable read-only memory (“EPROM” or flash memory); an optical fiber; a portable compact disc read-only memory (“CD-ROM”); an optical storage device; a magnetic storage device; and/or any suitable combination thereof.
A computer-readable signal medium includes any computer-readable medium (other than a computer-readable storage medium) that is suitable for communicating (e.g., propagating or transmitting) a program, so that such program is processable by an instruction execution apparatus for causing the apparatus to perform various operations discussed hereinabove. In one example, a computer-readable signal medium includes a data signal having computer-readable program code embodied therein (e.g., in baseband or as part of a carrier wave), which is communicated (e.g., electronically, electromagnetically, and/or optically) via wireline, wireless, optical fiber cable, and/or any suitable combination thereof.
Although illustrative embodiments have been shown and described by way of example, a wide range of alternative embodiments is possible within the scope of the foregoing disclosure.