The present embodiments relate generally to wireless devices, and specifically to reducing power consumption in wireless devices.
Wireless Personal Area Network (PAN) communications such as Bluetooth communications allow for short range wireless connections between two or more paired wireless devices (e.g., that have established a wireless communication channel or link). Many mobile devices such as cellular phones utilize wireless PAN communications to exchange data such as audio signals with wireless headsets. Because wireless headsets are typically powered by batteries that may be inconvenient to charge during use, it is desirable to minimize power consumption of such wireless headsets.
The present embodiments are illustrated by way of example and are not intended to be limited by the figures of the accompanying drawings.
The present embodiments are described below in the context of reducing power consumption in Bluetooth-enabled devices for simplicity only. It is to be understood that the present embodiments are equally applicable for reducing power consumption in devices that communicate with each other using signals of other various wireless standards or protocols used for Personal Area Networks (PANs). As used herein, the term “wireless communication medium” can include communications governed by the IEEE 802.11 standards, Bluetooth, HiperLAN (a set of wireless standards, comparable to the IEEE 802.11 standards, used primarily in Europe), and other technologies used in wireless communications. Further, the term “mobile device” refers to a wireless communication device capable of wirelessly exchanging data signals with another device, and the term “wireless headset” refers to a short-range wireless device capable of exchanging data signals with the mobile device (e.g., using Bluetooth communication protocols). The terms “wireless headset” and “headset” may be used herein interchangeably.
In the following description, numerous specific details are set forth such as examples of specific components, circuits, and processes to provide a thorough understanding of the present disclosure. The term “coupled” as used herein means connected directly to or connected through one or more intervening components or circuits. Also, in the following description and for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the present embodiments. However, it will be apparent to one skilled in the art that these specific details may not be required to practice the present embodiments. In other instances, well-known circuits and devices are shown in block diagram form to avoid obscuring the present disclosure. Any of the signals provided over various buses described herein may be time-multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit elements or software blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be a single signal line, and each of the single signal lines may alternatively be buses, and a single line or bus might represent any one or more of a myriad of physical or logical mechanisms for communication between components.
Headset 120, which may be any suitable wireless headset (e.g., in-ear headset, headphones, or other suitable paired device), includes a built-in speaker 122, a built-in microphone (MIC) 124, a processor 126, and a transceiver 128. Processor 126 is coupled to and may control the operation of speaker 122, microphone 124, and/or transceiver 128. Headset 120 facilitates the exchange of data signals (e.g., audio signals) between user 110 and mobile device 130. More specifically, headset speaker 122 outputs audio signals received from mobile device 130 to user 110, and headset microphone 124 detects and receives, as input, audio signals 125 generated by user 110 (e.g., voice data) for transmission to mobile device 130 (e.g., using transceiver 128). Transceiver 128 facilitates the exchange of audio signals A_IN and A_OUT between headset 120 and mobile device 130. Thus, for some embodiments, headset 120 receives audio signals 125 generated (e.g., spoken) by user 110 and transmits audio signals 125 as audio signals A_IN to mobile device 130, and headset 120 receives audio signals A_OUT (e.g., corresponding to voice data of another user) from mobile device 130 and outputs the audio signals to user 110 via its speaker 122.
Mobile device 130, which may be any suitable mobile communication device (e.g., cellular phone, cordless phone, tablet computer, laptop, or other portable communication device), includes a built-in speaker 132, a built-in microphone 134, a processor 136, and a transceiver 138. Processor 136 is coupled to and may control the operation of speaker 132, microphone 134, and/or transceiver 138. More specifically, device speaker 132 outputs audio signals received by mobile device 130 from another user to user 110, and device microphone 134 detects and receives, as input, audio signals 135 generated (e.g., spoken) by user 110. Transceiver 138 facilitates the exchange of audio signals A_IN and A_OUT between headset 120 and mobile device 130. In addition, transceiver 138 may also facilitate the exchange of audio signals and/or other data signals between mobile device 130 and another user of another mobile device via a suitable cellular network (not shown for simplicity). Thus, for the exemplary embodiment of
During operation of system 100, mobile device 130 receives audio output (A_OUT) signals transmitted from another mobile device (via the cellular network), and then re-transmits the A_OUT signals to wireless headset 120 using transceiver 138. Headset 120 receives the A_OUT signals using its transceiver 128, and then outputs the received A_OUT signals to user 110 via its speaker 122. Headset 120 receives audio signals 125 from user 110 via its microphone 124, and transmits the audio signals 125 as audio signals A_IN to mobile device 130 using its transceiver 128. Mobile device 130 receives the A_IN signals transmitted from headset 120, and then transmits the A_IN signals to another mobile phone using its transceiver 138 (via the cellular network). Mobile device 130 may also receive audio signals 135 from user 110 using its built-in microphone 134, and then transmits the audio signals 135 to another mobile phone using its transceiver 138 (via the cellular network).
Memory 210 may include a parameters table 211 that stores a number of contextual power saving parameters including, for example, one or more audio quality threshold values, one or more audio proximity threshold values, one or more noise threshold values, and/or one or more silent interval threshold values.
Memory 210 may also include a non-transitory computer-readable storage medium (e.g., one or more nonvolatile memory elements, such as EPROM, EEPROM, Flash memory, a hard drive, and so on) that can store software modules such as a power reduction software module 213, a proximity software module 214, a privacy software module 215, a noise cancellation software module 216, and a PLC frame software module 217.
Processor 136, which is coupled to speaker 132, microphone 134, transceiver 138, and memory 210, can be any suitable processor capable of executing scripts or instructions of one or more software programs stored in mobile device 200 (e.g., within memory 210). For example, processor 136 may execute power reduction software module 213 to process audio signals received from user 110 via device microphone 134 and/or headset microphone 124 to selectively disable one or more components of mobile device 200 and/or headset 120.
More specifically, power reduction software module 213 may analyze audio signals 135 received from the device microphone 134 to determine whether to “deactivate” the headset microphone 124 and/or the headset speaker 122 based upon a quality level of the received audio signals 135. For example, upon establishing a connection with mobile device 200, the headset 120 may initially operate in a full-duplex communication mode with mobile device 200. In this mode, mobile device 200 may receive audio signals 135 from user 110 via its built-in microphone 134 while also receiving audio signals 125 from user 110 via headset 120. Subsequently, power reduction software module 213 may deactivate the headset microphone 124 and/or the headset speaker 122 by (i) terminating the wireless link with headset 120, (ii) sending one or more control signals (CTRL) instructing headset 120 to disable its microphone 124 and/or speaker 122 or to power down, or (iii) ceasing to transmit signals to headset 120, which headset 120 may in turn interpret as an instruction to disable its components and/or to power down.
For some embodiments, power reduction software module 213 may determine whether audio signals 135 received from user 110 via device microphone 134 are of an “acceptable” quality that allows for deactivation of headset microphone 124 and/or headset speaker 122, or that alternatively allows for a power-down of headset 120. For example, power reduction software module 213 may compare audio signal 135 with a quality threshold value (QT) to determine whether the quality of audio signal 135 is acceptable (e.g., such that the user's voice is perceptible). If the quality of audio signal 135 is acceptable, then power reduction software module 213 may determine that the audio signal 125 (e.g., received by headset microphone 124 and transmitted to mobile device 200 as signal A_IN) is unnecessary and, in response thereto, deactivate or disable headset microphone 124 and/or power down headset 120. In this manner, power consumption may be reduced in headset 120. For some embodiments, power reduction software module 213 may terminate reception of A_IN signals from headset 120 while continuing to transmit A_OUT signals to headset 120 (e.g., thereby operating the link between mobile device 130 and headset 120 in a half-duplex or simplex mode).
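For purposes of illustration only, the quality check and the resulting deactivation decision described above can be summarized with a short Python sketch. The RMS-based quality metric, the threshold value, and the returned action names are assumptions introduced for this example and are not features prescribed by the embodiments.

```python
import numpy as np

def decide_headset_mic(device_samples, quality_threshold_qt):
    """Illustrative sketch: derive an audio quality value QA from device-microphone
    samples and decide whether the headset microphone is still needed.

    QA is approximated here by the RMS level of the samples; the embodiments above
    leave the exact quality metric open.
    """
    qa = float(np.sqrt(np.mean(np.square(device_samples))))  # simple loudness proxy

    if qa >= quality_threshold_qt:
        # Device microphone alone is acceptable: headset audio input is redundant.
        return {"use": "device_mic", "deactivate_headset_mic": True}
    # Otherwise keep receiving A_IN from the headset.
    return {"use": "headset_mic", "deactivate_headset_mic": False}


# Example usage with synthetic samples and a hypothetical threshold value:
samples = np.random.default_rng(0).normal(scale=0.2, size=16000)
print(decide_headset_mic(samples, quality_threshold_qt=0.05))
```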
For other embodiments, power reduction software module 213 and/or privacy software module 215 may determine whether the ambience of user 110 is sufficiently private so that incoming audio signals received by mobile device 200 from another mobile device (via the cellular network) can be output via device speaker 132 instead of being transmitted to headset 120 as A_OUT and output by headset speaker 122. If the incoming audio signals can be output by device speaker 132, then headset speaker 122 may be deactivated.
Then, mobile device 130 receives audio input signal 135 via its microphone 134 (320). Thus, device microphone 134 may remain active even after mobile device 130 establishes a connection with headset 120. For some embodiments, mobile device 130 also receives audio signal A_IN from headset 120, wherein audio signal 125 is forwarded from headset 120 to mobile device 130 as the audio signal A_IN.
Next, the power reduction software module 213 determines an audio quality (QA) of the audio signal 135 received by device microphone 134 (330), and compares the audio quality QA with a quality threshold value QT (340). For example, the audio quality QA may indicate an amplitude or overall “loudness” of the audio signal 135, wherein louder audio signals correlate with higher QA values. In some environments, the audio signal 135 may satisfy the quality threshold QT but contain mostly ambient or background noise. Thus, for some embodiments, a more accurate audio quality QA may be determined by comparing the audio signal 135 detected by the device microphone 134 with the audio signal 125 detected by the headset microphone 124 (and transmitted to mobile device 130 as audio signals A_IN).
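One way to realize the comparison just described, offered purely as an illustration, is a normalized cross-correlation between the device-microphone signal 135 and the headset signal A_IN: strong correlation suggests that the device microphone is capturing the user's voice rather than mostly ambient noise. The lag search, normalization, and function below are simplifying assumptions rather than the embodiments' prescribed computation.

```python
import numpy as np

def quality_from_similarity(device_sig, headset_sig, max_lag=480):
    """Estimate audio quality QA of the device-microphone signal as its peak
    normalized cross-correlation with the headset signal A_IN.

    A small lag search compensates for the different acoustic and link delays of
    the two capture paths. Returns a value in [0, 1].
    """
    d = np.asarray(device_sig, dtype=float)
    h = np.asarray(headset_sig, dtype=float)
    d = d - d.mean()
    h = h - h.mean()

    denom = np.linalg.norm(d) * np.linalg.norm(h)
    if denom == 0.0:
        return 0.0

    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = d[lag:], h[:len(h) - lag]
        else:
            a, b = d[:lag], h[-lag:]
        n = min(len(a), len(b))
        if n:
            best = max(best, abs(float(np.dot(a[:n], b[:n]))) / denom)
    return best
```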
For some embodiments, power reduction software module 213 may initially assume that the audio signal 125 detected by headset microphone 124 is of a higher quality than the audio signal 135 detected by device microphone 134 (e.g., because headset 120 is typically closer to the user's face than is mobile device 130). For such embodiments, power reduction software module 213 may determine the quality QA of audio signal 135 based upon its similarity with the audio signal A_IN transmitted from headset 120. For one example,
Referring again to
Conversely, if power reduction software module 213 determines that the audio quality QA is below the quality threshold value QT (e.g., as depicted in
The operation 300 may be performed first upon establishing an initial connection between the headset 120 and mobile device 130, and periodically thereafter. For example, because the user 110 is prone to move around, the environment and/or operating conditions of wireless system 100 are likely to change. Accordingly, mobile device 130 may be configured to periodically monitor audio signals 125 received by the headset 120 and/or audio signals 135 received by mobile device 130 to ensure that appropriate power saving techniques are implemented. Note that unless headset 120 is completely disconnected from mobile device 130, subsequent operations 300 may begin at step 320.
Referring again to
For some embodiments, mobile device 130 may determine whether mobile device 130 is within a threshold distance (DT) of headset 120 (e.g., by executing proximity software module 214), and then selectively de-activate one or more components of headset 120. For example, if mobile device 130 is within the threshold distance DT of headset 120 (as depicted in
For at least one embodiment, mobile device 130 may choose not to execute operation 300 if the distance DHM between mobile device 130 and headset 120 is greater than the threshold distance DT. The mobile device 130 may estimate the distance DHM using, for example, the received signal strength indicator (RSSI) of signals received from headset 120. For at least another embodiment, mobile device 130 may choose to execute a portion of operation 300 (e.g., beginning at step 320) only if it determines that mobile device 130 is sufficiently close to headset 120 (e.g., and thus sufficiently close to user 110) such that the audio signal 135 received by mobile device 130 from user 110 is of acceptable quality. In this manner, the proximity information may be used in conjunction with the audio quality information to determine whether to select audio signal 125 received by headset microphone 124 or audio signal 135 received by device microphone 134.
The mobile device 130 estimates the proximity of headset 120 to mobile device 130 (e.g., as indicated by the distance value DHM), and then compares the proximity (or distance value DHM) with the threshold distance value DT (620). The distance between headset 120 and mobile device 130 may be determined in any suitable manner. For some embodiments, the distance DHM may be determined using suitable ranging techniques such as, for example, received signal strength indicator (RSSI) ranging techniques and/or round trip time (RTT) ranging techniques. For some embodiments, the audio quality QA of audio signals received by device microphone 134 may be derived in response to the proximity of headset 120 to mobile device 130 (e.g., the distance between headset 120 and mobile device 130) (625).
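As an illustrative sketch of the RSSI-based variant, a log-distance path-loss model can map a measured RSSI to an estimated distance DHM, which is then compared with the threshold DT. The reference power, path-loss exponent, and threshold below are hypothetical calibration values assumed for this example and are not specified by the embodiments.

```python
def estimate_distance_from_rssi(rssi_dbm, rssi_at_1m_dbm=-45.0, path_loss_exp=2.5):
    """Estimate the headset-to-device distance DHM (in meters) from a measured RSSI
    using a log-distance path-loss model:

        rssi = rssi_at_1m - 10 * n * log10(d)  =>  d = 10 ** ((rssi_at_1m - rssi) / (10 * n))
    """
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))


def within_threshold(rssi_dbm, threshold_dt_m=1.0):
    """Return True if the estimated distance DHM is within the threshold DT."""
    return estimate_distance_from_rssi(rssi_dbm) <= threshold_dt_m


# Example: a moderately strong RSSI maps to roughly 1.6 m with the assumed calibration.
print(estimate_distance_from_rssi(-50.0))   # ~1.6 m
print(within_threshold(-50.0))              # False for DT = 1.0 m
```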
If mobile device 130 is within the threshold distance DT of headset 120, as tested at 630, then mobile device 130 may enable (e.g., re-activate) its microphone 134 so that audio signals 135 may be received directly from user 110 (640). Further, to reduce power consumption in headset 120 (and/or to eliminate the reception of redundant audio signals from user 110), mobile device 130 may also deactivate the headset microphone 124 (and also headset speaker 122), and/or may partially or completely terminate the communication link between headset 120 and mobile device 130 (650). Also, for some embodiments, power reduction software module 213 may partially or completely terminate the wireless connection between mobile device 130 and headset 120 (655). For one example, the reception link from headset 120 may be terminated while continuing the transmission link to headset 120, thereby changing the wireless connection from a full-duplex connection to a half-duplex connection. For another example, the headset 120 may be powered down.
Thereafter, mobile device 130 may transmit the audio signals 135 detected by device microphone 134 to another device (e.g., via the cellular network).
Conversely, if mobile device 130 is beyond the threshold distance value DT of headset 120, as tested at 630, then mobile device 130 may maintain headset microphone 124 in its enabled state and therefore receive audio signals 125 detected by headset microphone 124 and transmitted to mobile device 130 from headset 120 (i.e., as audio signals A_IN) (660). For example, the mobile device 130 may receive the A_IN signals from headset 120 without activating (or reactivating) the device microphone 134. Thereafter, mobile device 130 may transmit the audio signals 125 detected by headset microphone 124 and received by mobile device 130 as A_IN to another device (e.g., via the cellular network). For some embodiments, mobile device 130 may also deactivate its own microphone 134 (670).
The operation 600 may be performed first upon establishing an initial connection between the headset 120 and mobile device 130, and periodically thereafter. For example, because user 110 is prone to move around, the environment and/or operating conditions of wireless system 100 are likely to change. Accordingly, mobile device 130 may be configured to periodically monitor the distance between mobile device 130 and headset 120 to ensure that appropriate power saving techniques are implemented. Note that unless headset 120 is completely disconnected from mobile device 130, subsequent operations 600 may begin at step 620.
As mentioned above, the proximity information determined by operation 600 may be used in conjunction with the audio quality information determined by operation 300 of
For some embodiments, mobile device 130 may determine whether user 110 and/or mobile device 130 are in a sufficiently “private” environment so that audio signals can be output to user 110 from the device speaker 132 (e.g., rather than from headset speaker 122). The privacy determination may be made, for example, by executing privacy software module 215 of
Mobile device 130 may also execute privacy software module 215 to detect the presence of multiple human voices in the audio signal A_IN received from headset 120. For example, the presence of other human voices may indicate that persons other than user 110 are able to hear audio signals output by device speaker 132. Accordingly, mobile device 130 may deactivate its speaker 132 in favor of headset speaker 122 to ensure and/or maintain a desired level of privacy for communications intended for user 110. In addition, upon detecting a low privacy level, mobile device 130 may also prevent audio signals from being transmitted or otherwise routed to devices other than headset 120 (e.g., an in-vehicle telephone communication system). For some embodiments, the desired privacy level may be dynamically determined (e.g., by user 110 in response to user input and/or by mobile device 130 in response to various environmental factors). For such embodiments, the desired privacy level may be stored in suitable memory (e.g., memory 210 of mobile device 200 of
For other embodiments, a more accurate estimate of the background noise (which may contain human voices other than that of the user) may be determined using the two available representations (e.g., superimpositions) of the “User Voice+Background Noise” as obtained from headset microphone 124 and from mobile device microphone 134, respectively. The mobile device 130 may analyze this more accurate estimate of background noise to determine whether voices other than that of user 110 are present in the background noise. Thereafter, the privacy level may be determined in response to this qualitative assessment of the background noise.
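A minimal sketch of deriving such an estimate from the two superimposed captures is shown below: the headset capture (voice-dominant) is scaled and subtracted from the device capture so that the shared voice component is largely removed, leaving a residual that approximates the background noise picked up by the device microphone. The single least-squares gain used here is a simplifying assumption; a practical implementation would also need delay alignment and a more capable adaptive filter.

```python
import numpy as np

def estimate_background_noise(device_sig, headset_sig):
    """Rough background-noise estimate from the two available representations of
    'user voice + background noise'.

    A single least-squares gain g minimizing ||device - g * headset||^2 removes
    the shared voice component from the device-microphone capture; the residual
    is returned together with its RMS level as a simple noise metric.
    """
    d = np.asarray(device_sig, dtype=float)
    h = np.asarray(headset_sig, dtype=float)
    n = min(len(d), len(h))
    d, h = d[:n], h[:n]

    g = float(np.dot(d, h) / (np.dot(h, h) + 1e-12))  # least-squares gain
    residual = d - g * h                              # approximate background noise
    noise_level = float(np.sqrt(np.mean(residual ** 2)))
    return residual, noise_level
```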
Note that mobile device 130 may terminate transmission of audio signals A_OUT from itself while continuing to receive audio signals A_IN received from headset 120 in response to audio signals 125 detected by the headset microphone 124, or may terminate the connection with headset 120. Thus, for some embodiments, mobile device 130 may terminate only the headset 120 to mobile device 130 link while keeping the mobile device 130 to headset 120 link active, or alternatively may terminate both links to completely disconnect headset 120, if mobile device 130 determines that (i) the audio quality of signals 135 received by device microphone 134 is greater than the quality threshold level QT and (ii) the ambience of user 110 is sufficiently private so that user 110 is able to use the device speaker 132 instead of the headset speaker 122.
Headset 120 receives audio signal 125 from user 110, and transmits audio signal 125 as audio signal A_IN to mobile device 130. Mobile device 130 receives audio input signal A_IN from headset 120 (720). For some embodiments, the device speaker 132 and device microphone 134 may be deactivated upon establishing the connection between headset 120 and mobile device 130. For other embodiments, mobile device 130 may also receive audio signals 135 from user 110 via its own microphone 134.
Mobile device 130 determines a privacy level (PL) based on the received audio signal A_IN (730), and then compares the privacy level PL with a privacy threshold value PT (740). For some embodiments, privacy software module 215 (see also
For another embodiment, privacy software module 215 may compare the audio signal A_IN received from headset 120 with the audio signal 135 received by the device microphone 134 to determine the volume and/or frequency of background noise components in the received audio signal A_IN. For yet another embodiment, privacy software module 215 may determine the privacy level PL by heuristically combining a number of different factors such as, for example, information indicating a number of occupants in a car as obtained from a car's infotainment system or information indicating a number of nearby wireless devices in the vicinity of mobile device 130, and so on.
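A purely illustrative heuristic combining such factors into a single privacy level PL is sketched below. The weights, the chosen inputs, and the threshold PT are assumptions made for this example; in practice they would be tuned or supplied by the modules described above.

```python
def privacy_level(background_noise_level, num_vehicle_occupants=0,
                  num_nearby_devices=0, noise_floor=0.02):
    """Heuristic privacy level PL in [0, 1]; higher means a more private ambience.

    Each indicator of other people nearby (noise well above an assumed quiet floor,
    additional vehicle occupants, many nearby wireless devices) lowers the score.
    """
    pl = 1.0
    if background_noise_level > noise_floor:
        pl -= 0.4
    pl -= 0.3 * min(num_vehicle_occupants, 1)   # any other occupant hurts privacy
    pl -= 0.1 * min(num_nearby_devices, 3)      # cap the influence of device count
    return max(pl, 0.0)


def use_device_speaker(pl, privacy_threshold_pt=0.6):
    """Route audio to the device speaker only if PL exceeds the threshold PT."""
    return pl > privacy_threshold_pt


# Example: quiet environment, empty car, no nearby devices -> private enough.
pl = privacy_level(background_noise_level=0.01)
print(pl, use_device_speaker(pl))   # 1.0 True
```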
Referring again to
Conversely, if privacy software module 215 determines that the privacy level PL is not greater than the threshold value PT, as tested at 740, then mobile device 130 outputs audio signals to the headset speaker 122 (770), and may also deactivate the device speaker 132 to reduce power consumption and/or eliminate duplicative audio signals provided to the user 110 (780). For at least one embodiment, mobile device 130 may also prevent audio signals intended for user 110 from being transmitted to other external audio systems (e.g., an in-vehicle audio system) to maintain privacy of the user's conversation (790).
For example, a user who is actively participating in a conversation using headset 120 may be approaching his car or other vehicle that may contain other persons. Conventional mobile devices typically employ a hand-off procedure that allows an in-car infotainment system to take over functions of headset 120 when the user approaches the car (e.g., to reduce power consumption of headset 120). However, if the car is already occupied by other passengers when the user approaches, then an automatic hand-off procedure may not be desirable because the conversation will be audible to everyone in the car (or other persons close enough to hear sounds output by the in-car infotainment system). Thus, in accordance with the present embodiments, mobile device 130 may determine the user's privacy level and, in response thereto, selectively prevent a hand-off from headset 120 to the in-car infotainment system. In this manner, if the user's car is occupied by other people as the user approaches, mobile device 130 may decide to continue using headset 120 rather than transferring audio functions to the in-car infotainment system.
The exemplary operation 700 of
By selectively deactivating unnecessary (e.g., redundant or duplicative) microphones 124 and 134 and speakers 122 and 132 in the wireless headset 120 and mobile device 130, respectively, the present embodiments may not only reduce power consumption in wireless headset 120 and/or mobile device 130 but also improve the sound quality of conversations facilitated by wireless headset 120 and mobile device 130. In addition, the present embodiments may also be used to ensure and/or maintain a desired level of privacy for user 110, as described above.
As mentioned above with respect to
More specifically, for some embodiments, noise cancellation software module 216 may use audio signals 135 received by the device microphone 134 to filter (e.g., remove) ambient or background noise components 825 in the audio signals 125 detected by headset microphone 124. For example, because the distance (DH) between user 110 and headset 120 may be different from the distance (DM) between user 110 and mobile device 130, audio signals 125 detected by headset microphone 124 may be different from audio signals 135 detected by device microphone 134 (and noise components 825 in audio signals 125 may be different than noise components 835 in audio signals 135). Thus, for some embodiments, noise cancellation software module 216 may detect differences between the audio signals 125 and audio signals 135 to filter unwanted noise components 825 and/or unwanted noise components 835.
More specifically, noise cancellation techniques are typically based upon a determination of background noise, which in turn may be performed using multiple microphones physically spaced apart. Greater distances between the microphones allow suitable signal processing techniques to be more effective in separating and attenuating background noise components. Although conventional noise-cancelling wireless headsets may employ multiple microphones to obtain different audio samples, the physical separation of microphones on such headsets is limited by the small form factor of such headsets. Accordingly, the present embodiments may allow for more effective noise cancellation operations than conventional techniques by using both the headset microphone(s) 124 and the mobile device microphone(s) 134 to obtain multiple audio samples of the background noise, wherein the amount of physical separation between the headset microphone(s) 124 and the mobile device microphone(s) 134 may be much greater than the physical dimensions of headset 120. Note that estimation of the background noise may be performed periodically or may be triggered whenever an audio quality level drops below a certain threshold value (e.g., below the quality threshold value QT).
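The following is a simplified adaptive-filter (NLMS) sketch of such a two-microphone arrangement, with the headset microphone as the voice-dominant primary input and the device microphone as the more distant reference. It is an assumed realization rather than the embodiments' prescribed algorithm; in particular, because the reference here also contains some of the user's voice, a practical system would typically adapt the filter only during detected non-speech intervals.

```python
import numpy as np

def nlms_noise_cancel(primary, reference, filter_len=64, mu=0.1, eps=1e-8):
    """Normalized LMS noise canceller.

    primary   -- samples from the headset microphone (voice + noise)
    reference -- samples from the device microphone (noise-dominant reference)
    Returns the primary signal with the noise component predicted from the
    reference subtracted out.
    """
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(filter_len)          # adaptive filter taps
    buf = np.zeros(filter_len)        # most recent reference samples
    out = np.zeros(len(primary))

    for i in range(len(primary)):
        buf = np.roll(buf, 1)         # shift the delay line by one sample
        buf[0] = reference[i] if i < len(reference) else 0.0
        noise_est = float(w @ buf)    # noise predicted from the reference mic
        err = primary[i] - noise_est  # cleaned (voice-dominant) output sample
        w += (mu / (eps + float(buf @ buf))) * err * buf   # NLMS weight update
        out[i] = err
    return out
```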
Thus, for some embodiments, the relative proximity of headset 120 to user 110 (as compared to the proximity of mobile device 130 to user 110) may also be used as an indication of the differences in audio signals 125 detected by headset microphone 124 and audio signals 135 detected by device microphone 134. The effectiveness of the noise cancellation operation 900 of
Referring again to
Accordingly, for some embodiments, mobile device 130 may employ packet loss concealment techniques during time intervals in which mobile device 130 either (i) does not receive packets or frames or (ii) receives packets containing errors from headset 120. During such intervals, it may be desirable to transmit local samples of audio signals (e.g., received by mobile device microphone 134) to the other mobile device (via the cellular network) rather than transmitting silent or interpolated packets because the local samples may contain components of the user 110's voice. More specifically, although components of user 110's voice contained in the local samples received by device microphone 134 may not be as strong as components of user 110's voice contained in audio signals 125 received by headset microphone 124, the local samples may provide a better estimate of user 110's voice than audio signals 125 during the packet loss periods. Thus, for some embodiments, the local samples received by device microphone 134 may be used to perform packet loss concealment operations (e.g., especially when synchronous connections with zero or limited retransmissions are used). Further, for some embodiments, upon detecting RF interference resulting in high packet error rates, mobile device 130 may employ packet loss concealment operations described herein to avoid re-transmissions in synchronous connections without adversely affecting audio quality.
Then, PLC frame software module 217 generates PLC frames based on audio signal 135 received from device microphone 134 (1120). For some embodiments, PLC frame software module 217 generates PLC frames for the entire duration of audio signal 135. For example, referring also to
Next, PLC frame software module 217 detects whether there is a packet loss period (1130). As mentioned above, the packet loss period may correspond to actual packet loss on the link between headset 120 and mobile device 130 or to a silent period in user 110's voice. As long as headset 120 remains connected to mobile device 130, mobile device 130 may expect to receive continuous streams of A_IN signals from headset 120. However, as discussed above, headset 120 may not transmit A_IN signals to mobile device 130 during time periods that user 110 is not speaking (e.g., to save power), thereby causing packet loss on the link between headset 120 and mobile device 130. Furthermore, even if headset 120 transmits A_IN signals continuously, various external sources of interference may prevent the A_IN signals from reaching mobile device 130. Thus, as depicted in
If PLC frame software module 217 does not detect a packet loss period, as tested at 1130, then mobile device 130 may continue transmitting data frames corresponding to the received A_IN signals to the other receiving device (via the cellular network) (1140). For some embodiments, PLC frame software module 217 may continue generating PLC frames in parallel with generating the data frames representing the received A_IN signals.
Conversely, if PLC frame software module 217 detects a packet loss period, as tested at 1130, then the PLC frame software module 217 may replace missing data frames corresponding to the A_IN signal with one or more PLC frames (1150). For example, as depicted in
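The substitution step can be sketched as follows. The frame-selection function below is an illustrative assumption of how locally generated PLC frames (derived from the device-microphone signal 135) might replace missing or corrupt A_IN frames before transmission over the cellular link; the frame representation and error flags are hypothetical.

```python
def conceal_packet_loss(a_in_frames, plc_frames):
    """Build the outgoing frame stream for the cellular link.

    a_in_frames -- per-interval frames received from the headset; None marks a
                   missing frame and a frame's 'corrupt' flag marks an errored one
    plc_frames  -- frames generated in parallel from the device microphone (135)
    """
    out = []
    for idx, frame in enumerate(a_in_frames):
        lost = frame is None or frame.get("corrupt", False)
        if lost and idx < len(plc_frames):
            out.append(plc_frames[idx])          # substitute the locally captured frame
        elif not lost:
            out.append(frame)                    # forward the headset frame unchanged
        else:
            out.append({"samples": [0] * 160})   # last resort: a silent frame
    return out


# Example: frame 1 is lost and frame 2 arrives corrupted, so both are concealed.
a_in = [{"samples": [3] * 160}, None, {"samples": [5] * 160, "corrupt": True}]
plc = [{"samples": [2] * 160}, {"samples": [2] * 160}, {"samples": [2] * 160}]
print([f is p for f, p in zip(conceal_packet_loss(a_in, plc), plc)])  # [False, True, True]
```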
In some instances, the PLC frames transmitted during silent interval 1210 may contain primarily background noise. However, because the background noise detected by device microphone 134 may be substantially similar to the background noise detected by headset microphone 124, the PLC frames transmitted to the other receiving device may be incorporated seamlessly with adjacent data frames corresponding to the A_IN signal. In other instances (e.g., where the packet loss results from RF interference and not from an absence of the user's voice), the PLC frames may contain one or more portions of an intended audio input (e.g., the user's voice). Although there may be differences (e.g., in loudness and/or clarity) between the intended audio components of audio signal 135 and audio signal 125, the PLC frames sent to the other receiving device may sound much more “natural” to a user of the other receiving device than a silent interval would.
It will be appreciated that all of the embodiments described herein may be implemented within mobile device 130. Accordingly, the power saving techniques, privacy techniques, noise cancellation techniques, and/or packet loss concealment techniques described herein may be performed with existing wireless headsets.
In the foregoing specification, the present embodiments have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. For example, the method steps depicted in the flow charts of