The present application relates to proximity detection and, more particularly, to proximity detection based on audio signals.
Proximity detection can be useful in a variety of contexts. For example, proximity detection may be utilized to activate certain devices and/or features of those devices. In particular, a speaker volume and/or display activation may be operatively related to detection of the speaker or display being proximately located with an object. Generally, proximity detection in computing devices and mobile electronic devices has been implemented utilizing infrared (IR) sensors, light sensors, or even active sonar-based sensing. However, each of these techniques may require dedicated components, which may increase the size, weight and/or cost of manufacture of the devices in which they are implemented.
Techniques and devices are disclosed for utilizing passively received audio signals to determine proximity of devices to other objects. One embodiment may take the form of a method of passive proximity detection that includes sensing a first sound wave using a first audio transducer and generating a first signal representative of the first sound wave. The method also includes sensing a second sound wave and generating a second signal. The method further includes comparing characteristics of the first and second signals using a processor to determine if differences between the first and second signals indicate that the first audio transducer is proximately located to another object.
Another embodiment takes the form of a computing device that includes a processor and a storage device coupled to the processor. The storage device stores instructions executable by the processor to determine proximity of the device to another object based on audio signals. A first audio transducer of the device is configured to sense ambient sound waves and convert the sensed sound waves into an electrical data signal. The device is configured to store a first electrical data signal from the first audio transducer and a second electrical data signal and determine if differences between the first and second electrical data signals indicate that the first audio transducer is proximately located to an object external to the device.
One embodiment takes the form of a communication device configured to passively determine proximity to other objects. The device includes a multi-sided housing having an aperture to allow for receipt and transmission of sound waves therethrough and an audio transducer located within the housing adjacent to the aperture. The audio transducer is configured to sense ambient sound waves. The device includes a processor configured to evaluate signals generated by the audio transducer to determine if the device is in proximity to other objects based on at least one of: a narrowing of a spike about a frequency, an amplification of a spike, an increase in low frequency signals, and a diminution of received sound signals indicating muffling.
While multiple embodiments are disclosed, still other embodiments of the present invention will become apparent to those skilled in the art from the following Detailed Description. As will be realized, the embodiments are capable of modifications in various aspects, all without departing from the spirit and scope of the embodiments. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive.
The use of passively received audio signals to determine proximity of objects to devices such as mobile devices (e.g., media players, phones, and the like) is provided. Generally, audio transducers (e.g., microphones) and each transducer's geometry within the device imprint an equalization curve upon the broad spectrum of audio signals received. That is, audio transducers are subject to a specific set of attenuations based on each transducer's composition and its orientation in space relative to other objects. An equalization curve generally illustrates a relative amplitude of particular frequencies within the spectrum of received sounds.
A given audio transducer's signal equalization curve resulting from a given audio source is generally modified as the microphone is brought near an object or surface, because objects/surfaces variably reflect elements of the sound wave, thereby changing the equalization curve. This effect may be noticed when sound is reflected by soft material as opposed to a hard surface. Generally, sound reflected off the soft surface will seem muted when compared to the same sound reflected off a hard surface located at the same distance and angle from an audio transducer and a sound source. Additionally, a temporary resonant chamber may be created with the device. This may be noticed, for example, by placing a hand over an ear at some angle to simulate a resonant chamber that a device may create. Further, muffling of the incoming sound will result as the surfaces are brought into close proximity such that sound waves from the audio source are obscured or blocked out.
In some embodiments, a comparison of differences in received audio signals at one or more audio transducers, located on different planes of the device or at some distance from each other along the device, is used to detect proximity of objects. Specifically, per acoustic principles, sound-wave propagation and interference by nearby objects cause detectable shifts in the broad-spectrum response to ambient sound, which can indicate user presence in close proximity to the device. To relate this to a common phenomenon, when a seashell is held up to one's ear, a resonant cavity is formed that amplifies ambient sounds. This hi-Q filtering results in the ocean-like sounds one hears.
This phenomenon can be demonstrated by analysis of real-time differences between two transducers' signals. For example, if transducers are located on opposite sides of a cell phone, one transducer will produce a relatively hi-Q peak in its audio-spectrum response relative to the other transducer when it is brought into close proximity with a surface (e.g., the user's face). In some instances, a comparison may be made based on changes in the response of a single transducer over time. That is, the transducer's response may be monitored when not pressed against the user's face and compared with its response when it is pressed against or close to the user's face. When the transducer is near the user's face it may provide a relatively hi-Q signal (e.g., peaked in a tight region) or may provide a variable but characteristic response in a particular region of the spectrum. The differences in the responses can be amplified by electronic and/or software techniques to increase proximity detection sensitivity.
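The two-transducer comparison described above can be sketched in a few lines of Python. This is a minimal illustration only, not the application's implementation: the `peakedness` measure (strongest bin over average bin) is a hypothetical stand-in for a true Q-factor estimate, and the threshold is an arbitrary tuning value.

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitudes for the positive-frequency bins (fine for short windows)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def peakedness(spectrum):
    """Crude hi-Q indicator: the strongest bin relative to the average bin."""
    mean = sum(spectrum) / len(spectrum)
    return max(spectrum) / mean if mean else 0.0

def proximity_from_two_mics(front_samples, back_samples, q_ratio_threshold=2.0):
    """Flag proximity when the front mic's response is far more peaked than the back's."""
    q_front = peakedness(magnitude_spectrum(front_samples))
    q_back = peakedness(magnitude_spectrum(back_samples))
    return q_front / q_back > q_ratio_threshold
```

A front transducer against a surface would show a tight resonant peak while the back transducer still hears broadband ambient sound, driving the ratio above the threshold.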
In other embodiments, a single transducer (such as a microphone) may be used to accomplish proximity detection by comparing before and after signals. In particular, a first sample is taken and used as a baseline for the proximity determination. This may be referred to as the “before” signal. A second sample (e.g., an “after” signal) is taken and compared with the first sample to determine if differences between the before signal and after signal indicate a change in proximity.
In some embodiments, the timing of the sampling may be triggered by a user's action. For example, the first sample may be taken soon after a number has been dialed into a cell-phone. The second sample may be taken at some point thereafter (e.g., when the user may be expected to have the device in close proximity to his/her face).
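The before/after scheme of the two preceding paragraphs can be sketched as follows. This is a hypothetical sketch assuming a simple level-drop (muffling) criterion; the class name, the `drop_ratio` value, and the dial-event hook are illustrative, not drawn from the application.

```python
import math

def rms(samples):
    """Root-mean-square level of a sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

class BeforeAfterDetector:
    """Captures a "before" baseline when dialing starts, then reads a sharp
    level drop in a later "after" window as muffling, i.e., proximity."""

    def __init__(self, drop_ratio=0.5):
        self.drop_ratio = drop_ratio
        self.baseline_rms = None

    def on_dial(self, samples):
        # First sample, taken soon after a number has been dialed.
        self.baseline_rms = rms(samples)

    def is_proximate(self, samples):
        # Second sample, taken when the device may be at the user's face.
        if self.baseline_rms is None:
            return False
        return rms(samples) < self.drop_ratio * self.baseline_rms
```

Tying `on_dial` to the dialing event gives the detector a baseline captured while the device is known to be away from the face.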
In still other embodiments, live input to a transducer may be utilized to determine when the device is held against a surface (e.g., against a user's face). This input will have relatively high-amplitude low-end signals with attenuated high-end signals. This is generally a tell-tale response of a transducer in close proximity to a surface and, accordingly, can be interpreted as an indication of proximity. As such, the user's voice may be used to determine proximity. Further, ambient/background noise such as traffic, rushing water, music, conversations, and other ambient noise/sounds may be used for the proximity determination.
In some embodiments, speakers, earpieces, receivers, and the like, which normally transform electrical signals to audible output, can be used as audio input devices/microphones. This may be achieved by switching the connections of the transducers so that electrical signals generated when sound waves cause a diaphragm of the speaker to move may be detected. That is, rather than providing an input signal to the device, a signal generated by the device is sensed as an input.
Turning to the drawings and referring initially to
In some embodiments, one or more other input/output (I/O) devices may also be provided. For example, a camera 114, one or more buttons 116, and/or switches 118 may be provided and may provide various functionality to the device 100, such as volume control, power control, and so forth.
The device 100 may include one or more processors 120, as illustrated in
Additionally, the I/O devices, such as the audio transducers 107, 109, the display 104, the camera 114, and the buttons 116, may be coupled to the processor 120 either directly or indirectly. In some embodiments, the processor 120, memory 124 and I/O interfaces may be provided on a single chip as part of a system-on-a-chip (SOC) design, while in other embodiments, the processor, memory and I/O devices may be communicatively coupled together by traces or vias on/in a board. In some embodiments, a dedicated digital signal processor (DSP) 128 may be provided. The DSP may be configured to process received audio signals to determine if the device 100 is in proximity with another object. It should be appreciated that the device 100 can have additional components not shown, or can omit certain components in different embodiments. Thus, the device 100 is shown as a general overview only and is not intended to be exhaustive or indicate all possible connections between components, or all possible component configurations.
Referring to
The differences between the first equalization curve 140 and the second equalization curve 150 may be analyzed using digital signal processing software. For example, fast Fourier transform (FFT) windows may be applied to time-limited portions of the curves to determine the differences and to determine if a particular equalization curve represents proximity of the device with another object. Further, the differences between the two curves in a given sampled interval may be further amplified by electronics and/or software to increase the sensitivity of proximity detection.
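The comparison and software amplification described above might look like the following sketch. The per-bin log-ratio, the `gain` factor, and the decision threshold are illustrative assumptions rather than the application's actual DSP.

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitudes for the positive-frequency bins."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def curve_difference(baseline, current, gain=4.0, floor=1e-9):
    """Per-bin log-ratio between two equalization curves, scaled by `gain`
    to mimic the software amplification of small differences."""
    return [gain * math.log10((c + floor) / (b + floor))
            for b, c in zip(magnitude_spectrum(baseline),
                            magnitude_spectrum(current))]

def indicates_proximity(baseline, current, threshold=1.0):
    """Declare proximity when any bin's amplified difference exceeds a threshold."""
    return max(abs(d) for d in curve_difference(baseline, current)) > threshold
```

Attenuation of the high-frequency content between the two windowed samples, for example, produces a large negative log-ratio in the affected bins, tripping the threshold.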
In some embodiments, sound waves 130 may be sensed using multiple audio transducers. For example, a first transducer may be located on the face of the device 100 and a second transducer may be located on a back side of the device. Generally, the signals generated by the two transducers based on the ambient noise will be similar unless one of the sides of the device is located proximately to another object. For example, as the face 102 of the device 100 approaches or is located proximately to a user while in use, the equalization curve from the first transducer may resemble curve 150, while the curve from the second transducer may resemble curve 140. Hence, it may be determined that the face 102 of the device 100 is proximately located to another object.
In other embodiments, a single transducer may be utilized. In particular, samples of sound may be taken by the single transducer at different times (e.g., periodically, intermittently, or after certain events have occurred, such as after a phone number has been dialed). The samples taken at different times may be compared against each other to determine if there have been any band-limiting effects, amplification of certain frequencies, and/or muffling to indicate proximity of the device to another object. In some embodiments, an on-the-fly analysis may be performed with a first sample being taken and then subsequently compared with other samples. Changes that occur relative to the first sample may indicate that the transducer has moved into proximity of an object or that it has moved away from an object (e.g., if the first sample was taken while in proximity to the object). Alternatively, profiles may be created for the determination of proximity. That is, a proximity profile may be compared with the samples to see if the samples approximate the proximity profile. If they do, it may indicate that the transducer is in proximity with an object.
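The profile-matching alternative can be sketched with a cosine-similarity test of a sample's spectrum against a stored proximity profile. The similarity measure and the 0.9 threshold are hypothetical choices for illustration.

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitudes for the positive-frequency bins."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def cosine_similarity(a, b):
    """Normalized dot product of two spectra (1.0 means identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def matches_profile(samples, proximity_profile, threshold=0.9):
    """True when the sample's equalization curve approximates the stored profile."""
    return cosine_similarity(magnitude_spectrum(samples), proximity_profile) >= threshold
```

A device could carry several such profiles (face, pocket, free air) and report whichever one the current sample approximates most closely.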
In some embodiments, one or more directional microphones may be utilized as the audio transducers to take advantage of the proximity effect. The proximity effect generally involves three different aspects of operation of directional microphones, namely: angular dependence, phase difference and amplitude.
Generally, directional microphones are constructed having a diaphragm whose mechanical movement is translated into electrical signals. Movement of the diaphragm is based on air pressure differences from sound waves reaching both a front and rear of the diaphragm. Specifically, sound waves reflected from surfaces behind the diaphragm are allowed to be incident to the rear of the diaphragm, whereas the front of the diaphragm receives direct sound waves. Since the sound waves reaching the rear of the diaphragm travel further, they are out of phase with those that reach the front. This phase difference causes displacement of the diaphragm. The phase difference is accentuated when the source of sound is axial to the diaphragm, as the waves incident to the rear of the diaphragm travel a relatively farther distance than when the sound source is not axial to the diaphragm.
Additionally, the phase difference across the diaphragm is smallest at low frequencies because the difference in path length is a smaller portion of the wavelength of the sound. Further, the amplitude of the sound waves that are incident to the front of the diaphragm is larger than that of those incident to the rear of the diaphragm. This is because of the distance the sound waves travel and the inverse square law of attenuation. Generally, the inverse square law is an extension of the law of conservation of energy and relates to the dispersion of a sound wave as it travels outwardly from a source. Specifically, the energy in the sound wave decreases in proportion to the inverse of the square of the distance from the source. As the source is brought close to the diaphragm, the amplitude component of the pressure differences increases, particularly at lower frequencies. The resulting phenomenon may be seen as an increase in the low-end (e.g., bass) signals produced by the microphone and is commonly referred to as the “proximity effect.”
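The amplitude component just described can be illustrated with a small calculation. Assuming point-source spreading in which pressure amplitude falls as 1/r (so that intensity follows the inverse square law), the amplitude difference across the diaphragm grows sharply as the source approaches:

```python
def amplitude_difference(r_front_m, extra_path_m, source_amplitude=1.0):
    """Pressure-amplitude difference between the front and rear of the
    diaphragm, assuming 1/r spreading from a point source. The wave reaching
    the rear travels `extra_path_m` farther than the wave reaching the front."""
    front = source_amplitude / r_front_m
    rear = source_amplitude / (r_front_m + extra_path_m)
    return front - rear
```

For a 1 cm extra path, a source 2 cm from the diaphragm produces an amplitude difference three orders of magnitude larger than a source 1 m away, which is why the low end swells as a talker's mouth nears a directional microphone.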
In some embodiments, characteristics of live input to the transducers while in proximity to an object, such as a user's face, may be used as an indicator of proximity. Due to the proximity effect, the received signals will have a large low-end signal and relatively little high-end signal, as shown in
This proximity effect may also be useful in the case of the device being located in a confined space, such as a purse or pocket. For example, occasionally, a call may be unintentionally placed while the device is in such a confined location. If the outgoing ring is sensed by the transducers to be weighted toward the low end of the spectrum, it may be determined that the device 100 is in a confined location, such as a pocket or purse. Upon determining that it is located in a purse or pocket, the device 100 may end the call and/or vibrate to let the user know that an inadvertent call has been terminated. In some embodiments, the passive proximity sensing may begin before the outgoing call is even placed; for example, it may begin the moment dialing commences or when a contact is dialed. Accidental dialing may be treated similarly. In one example, a user may have an odd grasp on the phone (e.g., a two-finger, front-to-back pinch with a thumb covering the earpiece hole in which the muffle-sensing transducer is located). Instead of dialing, the device may provide a warning tone or vibration and/or ask the user if the call should be connected.
Although the examples to this point have included one or two transducers, in some embodiments, more than two audio transducers 107, 108 may be implemented as shown in
In some embodiments, a microphone may be positioned adjacent to a speaker of the device. In particular, in some embodiments, a common aperture, such as aperture 106, may be shared by the transducer (speaker) 107 and the transducer (microphone) 182, as shown in
In some embodiments, two microphones may be placed adjacent to each other. Specifically, an omnidirectional microphone and a directional microphone may be positioned adjacent to each other. The signals from the omnidirectional microphone may be compared with those of the directional microphone to make a determination as to the proximity of the device to an object. In some embodiments, a baseline difference between the signals of the two microphones may be amplified to provide more sensitive proximity detection.
It should be appreciated that the audio transducers discussed herein may be implemented in any suitable form. For example, electret condenser microphones, piezoelectric microphones, micro-electromechanical (MEMS) microphones, dynamic microphones, and so forth may be implemented. Moreover, their polar patterns may include omnidirectional, cardioid, directional and/or other patterns. Further, in some embodiments, a narrow-band microphone may be implemented to focus attention on a particular band of frequencies. In some embodiments, a band of frequencies outside the range of normal human hearing may be utilized for the proximity determination.
Further, some devices may be configured with multiple microphones that may be utilized and/or repurposed for proximity detection. Additionally, digital signal processing software and hardware may already be provided within existing devices that may be utilized for making the proximity determination. Accordingly, the present techniques may be implemented without significant addition of costs or labor to current manufacture.
The foregoing discussion describes some example embodiments for passively determining object proximity. Although the foregoing discussion has presented specific embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the embodiments. Accordingly, the specific embodiments described herein should be understood as examples and not limiting the scope thereof.
Number | Date | Country
---|---|---
20120263019 A1 | Oct 2012 | US |