This invention generally relates to communication. More particularly, this invention relates to sound control for facilitating communications.
Portable communication devices are in widespread use. Cellular phones, personal digital assistants and notebook computers are extremely popular. As the capabilities and functionalities of these devices increase, the various uses for them increase.
One limitation on portable communication devices is that the loudspeaker of such a device typically does not deliver high quality sound. When such a device is used for observing video, the associated audio typically has poor quality. For example, it would be useful to conduct a video teleconference using a video-capable portable communication device. The sound quality from the loudspeaker of the portable communication device, however, may be poor enough to discourage such use of the device. Similarly, although many portable communication devices have video playback capability, the associated audio output leaves much to be desired.
Attempting to utilize an external audio output with a portable communication device can improve the sound quality. There are significant challenges, however, because of the portability of the communication device. The spatial relationship between an external loudspeaker and the portable communication device can vary during a single use of the device. The result is a lack of a cohesive visual and auditory experience. It is very unnatural, for example, for an individual to observe video on the display of a portable communication device while hearing the associated audio emanate from some arbitrary location in the room in which the individual is situated. Even high quality sound systems will not provide any spatial cohesiveness between the audio output and the video observed on the portable communication device. This lack of cohesiveness discourages individuals from utilizing their portable communication devices in such a manner.
An exemplary method of facilitating communication includes determining a position of a portable communication device that generates a video output. A sound output control is provided to an audio device that is distinct from the portable communication device for directing a sound output from the audio device based on the determined position of the portable communication device.
An exemplary portable communication device includes a video output. A position sensor provides an indication of a position of the portable communication device. A sound control module is configured to communicate sound control to an audio device that is distinct from the portable communication device. The sound control is based on the position of the portable communication device for facilitating the audio device directing a sound output based on the position of the portable communication device.
The various features and advantages of the disclosed examples will become apparent to those skilled in the art from the following detailed description. The drawings that accompany the detailed description can be briefly described as follows.
A position determining module 24 provides an indication of a position of the portable communication device 20. The position determining module 24 in this example is capable of providing position information in six dimensions. For example, the position information may include location information in Cartesian coordinates (i.e., x, y, z) and orientation information (i.e., azimuth, elevation and roll). Six degree-of-freedom position sensors are known, and one example includes such a known position sensor.
The portable communication device 20 in this example also includes a sound control module 26 such as an audio spatializer. The example portable communication device 20 is capable of being associated with an audio device 30 that is distinct from the portable communication device 20. Example audio devices 30 include headphones or speakers. A hardwired or wireless link allows for the audio device 30 to provide audio output based on information from the sound control module 26. The audio output comprises audio that would otherwise be provided from a loudspeaker of the communication device 20 but, instead, is provided by the distinct audio device 30. The sound control module 26 provides information to the audio device 30 to allow for the desired sound output to be generated by that device.
The sound control module 26 controls sound production based on position information from the position determining module 24. This allows for directing the sound from the audio device 30 so that it has the appearance of emanating from the position of the portable communication device 20. This feature provides spatial cohesiveness between the video output 22 and the sound output from the audio device 30. This allows for high quality sound to be associated with video with spatial cohesiveness that greatly enhances the experience of an individual using the portable communication device 20 to obtain a video output and a distinct audio device 30 to provide high quality sound.
It is possible to utilize the position of a portable communication device relative to a fixed reference point for a given audio device 30. For example, the audio device may include loudspeakers that remain in a fixed position within a particular area and the position of the portable communication device 20 within that area may be determined relative to a selected reference point. It is also possible to determine the location of an observing individual relative to such a fixed reference point. In some examples, the position of the portable communication device or the position of the individual can be used as a reference point so that the determined position of the other is relative to that reference point. In one example, the position of the portable communication device 20 is used as a reference point for purposes of determining the observer position relative to the determined position of the portable communication device 20.
The audio stream is schematically shown at 62. The video stream is schematically shown at 64.
An audio decoder 84 decodes the audio information. Given that the audio output to an individual is intended to be position-oriented, the position determination module 24 obtains position information from a position sensor 86, for example, that provides an indication of a position of the portable communication device 20. The position determination module 24 in this example also receives position information from an external position sensor 88 that is capable of providing an indication of an observer's position (e.g., the location and orientation of headphones worn by an individual). The position determination module 24 makes a determination regarding the position information that is useful for directing the sound output in a manner that provides cohesiveness between the video provided by the portable communication device 20 and the audio output.
In the illustrated example, the portable communication device 20 includes an audio spatializer 90 that provides position-directed audio output information to the audio device 30. Alternatively, the audio information from the audio decoder 84 and the position information from the position determination module 24 are provided to an external audio spatializer 92 that controls the output from the audio device 30 to achieve the spatially-directed sound output from the audio device 30.
The video stream 64 is processed by a video decoder 74. The display of the video information in this example is provided on at least one of an internal video display screen 76 or through a video projector 78 that projects a video beam 80 so that the video may be observed on a nearby surface.
An observer position determining portion 102 provides an indication of the position of the observer (e.g., the observer's head). The output 104 from the observer position determining portion 102 is in terms of Cartesian coordinates (xL, yL, zL) and Euler orientation angles (azimuth θL, elevation φL, and roll ψL) in this example.
As known, Euler orientation angles are one way of representing the orientation of an object with respect to a reference coordinate system. These angles represent a sequence of rotations of the reference coordinate system, starting with an azimuth rotation, followed by an elevation rotation, followed by a roll rotation. The azimuth angle is defined as a rotation of the X and Y reference axes around the Z axis resulting in rotated axes X′ and Y′. The elevation angle can then be defined as a rotation of the Z and X′ axes around the Y′ axis resulting in rotated axes Z′ and X″. Finally, the roll angle is defined as a rotation of the Y′ and Z′ axes around X″ resulting in rotated axes Y″ and Z″. The new coordinate system X″ Y″ Z″ determines the orientation of the object.
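The following non-limiting sketch (Python with NumPy, the language used for all code sketches in this description) composes the three rotations into a single orientation matrix. The sign conventions are assumptions, since a given position sensor may define the angles differently:

```python
import numpy as np

def euler_to_matrix(azimuth, elevation, roll):
    """Compose the azimuth (about Z), elevation (about Y'), and roll
    (about X'') rotations described above into one rotation matrix.
    Angles are in radians. Sign conventions are assumed; a given
    position sensor may define them differently."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])   # azimuth
    Ry = np.array([[ce, 0, se], [0, 1, 0], [-se, 0, ce]])   # elevation
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    # Intrinsic rotations about the successively rotated axes compose
    # by right-multiplication: first azimuth, then elevation, then roll.
    return Rz @ Ry @ Rx
```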
Depending on the video display type (built-in video versus projected video), only a subset of the parameters output by the device position determining portion 101 may be used for further processing. Such a subset can be the Cartesian coordinates (xD, yD, zD) of the portable device, for example. If video is displayed using the projector 78 of portable communication device 20, the Cartesian coordinates (xD, yD, zD) and at least the two orientation parameters azimuth θD and elevation φD are desirable for further processing, since orientation angles θD and φD are generally required for correct determination of the location of the projected video. On the other hand, if video is displayed on the display screen 22, orientation angles may be neglected for the purpose of creating cues for the direction of the sound source. Orientation angles can still be used if it is intended to provide additional cues for the orientation of the sound source. Such cues for the device orientation can be created for example by changing the ratio of direct sound to reverberated sound. When the display 22 of the portable device 20 directly faces the observer, the sound output provides more direct sound than it would if the portable device 20 were turned away from the observer, for example. Additionally or alternatively, a frequency-dependent polar pattern of a human talker can be modeled, whereby in a simple model, the high frequencies are gradually attenuated the more the device is turned away from the observer.
Depending on the acoustic reproduction of sound, only a subset of the parameters 104 from the observer position determining portion 102 may be used for further processing. One example subset comprises the Cartesian coordinates (xL, yL, zL) of the observer.
For sound reproduction when the audio device 30 comprises headphones, at least the Cartesian coordinates (xL, yL, zL) and the azimuth θL and elevation φL are desirable from the parameter set 104. The azimuth and elevation indications regarding the observer enable the audio spatializer 90 to compensate for the observer's head orientation. Without such compensation, the sound field would turn with the observer's head because the sound is reproduced over headphones. For example, assume a sound source straight in front of the observer. If the observer turns his head by 90 degrees counterclockwise, the synthesized sound source direction is continuously adjusted during this rotation so that it finally appears from the right side of his head. In this way, regardless of the head rotation, the sound source location always remains fixed in space for the observer. Only if the portable device is moved will the sound source move in space.
For sound reproduction over loudspeakers, on the other hand, the orientation angles may be neglected, that is, one may use only the subset (xL, yL, zL) of the parameter set 104. In some cases of sound reproduction over loudspeakers, it is still beneficial to use head orientation when spatializing sound. One example includes binaural sound reproduction with a cross-talk canceller, which is a well-known technique.
Generally, orientation is specified in three dimensions: azimuth, elevation, and roll. In many practical situations, however, azimuth and elevation angles alone determine sound direction to a great extent. For simplicity, roll angles are not further considered in the following discussion. Nevertheless, roll angles of the observer and the device 20 may be applied for further improvement of spatialized audio.
The position determining portions 101 and 102 do not assume any specific reference coordinate system to express the positions of the portable communication device 20 and an observer. In other words, the reference coordinate system can be chosen arbitrarily. In particular, it can be chosen to coincide either with the 6-dimensional position of the moving portable device or with the 6-dimensional position of the moving observer.
If the portable communication device 20 provides a video output through the built-in projector 78, the observable video will be displayed on a wall or projection screen, for example. Directing the sound output in either case involves five variables: the sound source distance D_S, the sound source azimuth and elevation angles θ_S and φ_S, and the device-relative angles θ_V and φ_V used for simulating the polar pattern. To derive equations for these five variables, express the position of the observer as a vector,
p_L = (x_L, y_L, z_L)^T,
where superscript T denotes a known transpose operation. Likewise, we express the position of the video display as a vector
p_V = (x_V, y_V, z_V)^T,
and the position of the portable device as a vector
p_D = (x_D, y_D, z_D)^T.
In examples in which the video is displayed on the portable device itself, p_V = p_D. If, on the other hand, the video is displayed from the projector 78, the video position is not known directly from the sensor in the portable device 20. Instead, a determination is needed regarding the video position based on the Cartesian coordinates of the portable device (p_D), its orientation (θ_D, φ_D), and its distance to the wall or projection screen (D_P) according to the following equation:

p_V = p_D + ρ·D_P,   (1)

where ρ is the unit vector in the direction from the portable device 20 to the projected video, given by

ρ = (cos θ_D cos φ_D, sin θ_D cos φ_D, sin φ_D)^T.

The value of Equation (1) may be controlled based on whether the video is displayed on a built-in video display 22 (e.g., D_P = 0) or the video is displayed by the projector 78 (e.g., D_P ≠ 0).
The distance of the desired sound source can be calculated using the known norm function
D_S = ‖p_V − p_L‖   (2)
The desired sound source azimuth angle θS and elevation angle φS are then given by the following equations:
θ_S = tan⁻¹{(y_V − y_L)/(x_V − x_L)} − θ_L   (3)

φ_S = sin⁻¹{(z_V − z_L)/D_S} − φ_L   (4)
Equations (1)-(4) are used to spatialize sound to match the video location. In particular, the sound source distance DS in Equation (2) may be used to synthesize potential distance cues. Equations (3) and (4) are used to position the sound source at the correct azimuth and elevation angles. These equations make use of the observer's head position (xL, yL, zL) and the observer's head orientation (θL and φL). Accounting for head orientation is most relevant when rendering audio output over headphones because it allows for accommodating any rotation or tilting of the observer's head.
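A minimal sketch of Equations (1)-(4) follows, assuming angles in radians and NumPy arrays for the position vectors; atan2 replaces the tan⁻¹ ratio of Equation (3) so that the azimuth remains well defined in all quadrants:

```python
import numpy as np

def source_direction(p_D, theta_D, phi_D, D_P, p_L, theta_L, phi_L):
    """Equations (1)-(4): locate the (possibly projected) video and derive
    the distance, azimuth and elevation of the desired sound source.
    Angles in radians; p_D and p_L are 3-element NumPy arrays."""
    # Equation (1): unit vector rho from the device toward the projected
    # video; D_P = 0 reduces p_V to p_D for the built-in display.
    rho = np.array([np.cos(theta_D) * np.cos(phi_D),
                    np.sin(theta_D) * np.cos(phi_D),
                    np.sin(phi_D)])
    p_V = p_D + rho * D_P
    # Equation (2): distance of the desired sound source (assumes the
    # observer is not exactly at the video location).
    D_S = np.linalg.norm(p_V - p_L)
    # Equations (3) and (4): azimuth and elevation, compensated by the
    # observer's head orientation.
    theta_S = np.arctan2(p_V[1] - p_L[1], p_V[0] - p_L[0]) - theta_L
    phi_S = np.arcsin((p_V[2] - p_L[2]) / D_S) - phi_L
    return D_S, theta_S, phi_S
```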
Equations (3) and (4) are given for sound rendering over headphones. For rendering over loudspeakers, observer orientation (θL, φL) can in general be neglected (e.g., set θL=0 and φL=0). Although these angles may not be used for the purpose of sound source positioning, they may be used to calculate correct head-related transfer functions (HRTF) in a trans-aural loudspeaker system.
The device orientation can be used to simulate the polar pattern of the sound source. A polar pattern or directivity pattern specifies the sound level a sound source emits over a range of 360 degrees. A more general case involves a 3-D polar pattern that specifies the sound level emitted over a spherical range. As such, a 3-D polar pattern can be determined, for example, for a human talker or a musical instrument. Polar patterns are commonly frequency dependent, that is, they are specified at different frequencies.
To relate the orientation of the mobile device to a desired polar pattern, one example includes determining the relative orientation of the device with respect to the observer position. For this purpose, we express the vector from observer to mobile device,
p_O = p_V − p_L,
in terms of the coordinate system of the portable communication device 20 using a well known coordinate transformation. The vector from the observer to the device 20 in the device's coordinate system becomes
p_O′ = R·p_O,
where R is the rotation matrix, determined by the orientation angles of the portable communication device 20, that transforms coordinates from the reference coordinate system into the coordinate system of the device.
Vector p_O′ relates directly to the polar pattern. Azimuth and elevation angles of the transformed vector

p_O′ = (x_O′, y_O′, z_O′)^T
are determined by
θ_V = tan⁻¹(y_O′/x_O′)   (5)

φ_V = tan⁻¹(z_O′/√((x_O′)² + (y_O′)²))   (6)
Given a desired polar pattern, one example includes evaluating this polar pattern at azimuth angle θV and elevation angle φV. This evaluation is done along the frequency axis to obtain a frequency response for each pair of azimuth and elevation angles.
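The following non-limiting sketch illustrates Equations (5) and (6). Because the rotation matrix R is described only schematically above, the sketch assumes an azimuth-then-elevation construction (roll neglected), chosen so that the device's X axis matches the unit vector ρ of Equation (1):

```python
import numpy as np

def device_relative_angles(p_V, p_L, theta_D, phi_D):
    """Equations (5) and (6): express the observer-to-device vector in
    the device's own coordinate frame and take its azimuth and elevation.
    The construction of R is an assumed convention (azimuth about Z, then
    elevation, roll neglected), consistent with rho in Equation (1)."""
    ca, sa = np.cos(theta_D), np.sin(theta_D)
    ce, se = np.cos(phi_D), np.sin(phi_D)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[ce, 0.0, -se], [0.0, 1.0, 0.0], [se, 0.0, ce]])
    R = (Rz @ Ry).T                      # reference -> device coordinates
    p_O = np.asarray(p_V) - np.asarray(p_L)   # observer to device/video
    x_o, y_o, z_o = R @ p_O              # p_O' in the device frame
    theta_V = np.arctan2(y_o, x_o)                  # Equation (5)
    phi_V = np.arctan2(z_o, np.hypot(x_o, y_o))     # Equation (6)
    return theta_V, phi_V
```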
Once the relative position of video is determined by the relative position determining portion 107 as exemplified by Equations (2)-(6), the resulting parameters are provided to the spatializer 90.
The audio spatializer 90 receives the audio information from the audio decoder 84 and produces spatial audio that matches video location so that the output of the audio device 30 (e.g., headphones or loudspeakers) is based on the positional information from the position determination module 24.
For the case where the audio input is mixed down to a monaural signal, the matrix operation portion 201 reduces to a vector operation. For the trivial case of monaural input, the matrix operation 201 is represented by

y_{1/0} = M_{1/0}·x_{1/0},

where x_{1/0} = x denotes the scalar input sample and y_{1/0} denotes the output sample. In this trivial case, the matrix reduces to a scalar, that is, M_{1/0} = 1.
For 2-channel stereo input, the matrix operation 201 is represented by

y_{1/0} = M_{2/0}·x_{2/0},

where x_{2/0} = [x_L x_R]^T denotes the 2-dimensional input vector for the left and right channel samples, y_{1/0} denotes the monaural output sample, and M_{2/0} = [0.5 0.5].
For 5-channel (3/2 format) surround input, the matrix operation 201 is represented by

y_{1/0} = M_{3/2}·x_{3/2},

where x_{3/2} = [x_L x_R x_C x_LS x_RS]^T denotes the 5-dimensional input vector for the left, right, center, left surround, and right surround channels and y_{1/0} denotes the monaural output sample. For example, M_{3/2} = [0.7071 0.7071 1.000 0.500 0.500] according to the ITU-R BS.775-2 recommendation, or alternatively M̃_{3/2} = [0.333 0.333 0.333 0 0] to reduce the amount of reverberant sound.
When audio input is in Ambisonic format, subsequent processing may use this format directly, since the Ambisonic format allows sound field rotation. For example, an Ambisonic B-format signal set consists of four channels, commonly denoted as the W, X, Y, Z signals. Subsequent Ambisonic processing in the matrix operation portion 201 includes simply passing the signals through:

y_A = M_A·x_A,

with x_A = [x_W x_X x_Y x_Z]^T being the input vector and M_A the identity matrix of size four. Alternatively, one may for example choose the omni-directional channel W of the Ambisonic format to further proceed with a monaural signal.
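The matrix operation portion 201 can be sketched as follows; the down-mix matrices are those given above, and the helper function name is illustrative only:

```python
import numpy as np

# Down-mix matrices from the text (rows = output channels, columns = inputs).
M_1_0 = np.array([[1.0]])                              # monaural pass-through
M_2_0 = np.array([[0.5, 0.5]])                         # 2/0 stereo to mono
M_3_2 = np.array([[0.7071, 0.7071, 1.0, 0.5, 0.5]])    # 3/2 surround to mono
M_A = np.eye(4)                                        # Ambisonic pass-through

def matrix_operation(M, x):
    """y = M . x for one frame of channel samples (portion 201)."""
    return M @ x

# Example: one 5-channel frame [L, R, C, LS, RS] down-mixed to mono.
frame = np.array([0.1, -0.2, 0.3, 0.05, 0.05])
mono = matrix_operation(M_3_2, frame)
```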
A polar pattern simulator 202 simulates the polar pattern or directivity of the created sound source. The polar pattern indicates the sound level at which sound is emitted from the audio device 30 for different horizontal and vertical angles. It is typically specified at various frequencies. Consider the polar pattern of a human talker. As an individual's face turns 180 degrees from front to back, the sound level reduces for the observer. This is particularly true at high frequencies. The polar pattern simulator 202 in one example stores polar patterns as look-up tables or computes models for the polar pattern based on azimuth and elevation angles. Equivalent to a frequency-dependent polar pattern, a frequency response can be specified at different azimuth/elevation angle combinations. That is, a filter response for the polar pattern can be specified as a function of frequency and the angles θ_V, φ_V. To give a simple example, consider only the horizontal orientation angle θ_V, which is defined as the angle between the two vectors that are defined by the orientation of the device and the vector from the device to the observer. One example uses a simple low-pass filter characteristic that is dependent on the orientation of the device and defined in the z-domain, for example of the first-order form

H_PP(z) = (1 − a)/(1 − a·z⁻¹),

where

a = |sin(θ_V/2)|.

If the device is facing the observer (θ_V = 0), a flat frequency response results, since H_PP(z)|_{θ_V=0} = 1. As the device is turned away from the observer, a approaches 1 and the high frequencies are increasingly attenuated.
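A minimal sketch of this orientation-dependent filtering, using the first-order low-pass form given above:

```python
import numpy as np

def polar_pattern_filter(x, theta_V):
    """Apply H_PP(z) = (1 - a) / (1 - a z^-1) with a = |sin(theta_V / 2)|
    to a 1-D float signal x. theta_V = 0 gives a = 0, i.e., a flat
    response; larger angles attenuate the high frequencies."""
    a = abs(np.sin(theta_V / 2.0))
    y = np.empty_like(x, dtype=float)
    prev = 0.0
    for i, sample in enumerate(x):
        prev = (1.0 - a) * sample + a * prev   # y[i] = (1-a) x[i] + a y[i-1]
        y[i] = prev
    return y
```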
The output of the polar pattern simulator 202 is processed by a sound level modification portion 203. The sound level is changed based on the distance D_S. For example, the inverse square law that applies to an acoustic point source can be used, expressed with the following input-output equation:

y = (D_REF/D_S)·x,

where x denotes the input, y the output, and D_REF the reference distance, e.g., D_REF = 1 meter. For this particular sound level modification portion 203, the sound power level drops 6 dB if the observer's distance to the portable communication device 20 doubles.
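A corresponding sketch of the sound level modification portion 203; the guard against a vanishing D_S is an added assumption:

```python
def distance_gain(x, D_S, D_REF=1.0):
    """Sound level modification: scale amplitude by D_REF / D_S so that
    doubling the distance lowers the level by about 6 dB (point source).
    The lower bound on D_S avoids division by zero (an added assumption)."""
    return (D_REF / max(D_S, 1e-3 * D_REF)) * x
```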
The illustrated example includes a binaural synthesizer 204 for headphone output. The binaural synthesizer 204 can be considered the core element for sound spatialization. It generates binaural audio (i.e., left and right signals dedicated for reproduction on headphones). The binaural synthesizer 204 uses head-related transfer functions (HRTF) expressed in the frequency domain or equivalent head-related impulse responses (HRIR) expressed in the time domain. An exemplary realization of the binaural synthesizer 204 uses finite impulse response filters to generate left and right signals y_L(i), y_R(i) from an input signal x(i). Such a filter operation can be expressed as

y_L(i) = Σ_{j=0..M} h_{L,j}(θ_S, φ_S, D_S)·x(i − j)

y_R(i) = Σ_{j=0..M} h_{R,j}(θ_S, φ_S, D_S)·x(i − j),

where i denotes the time index, h_{L,j}(θ_S, φ_S, D_S) and h_{R,j}(θ_S, φ_S, D_S) denote the head-related impulse responses for the left and right ears, respectively, and M denotes the order of the head-related impulse response.
The above equations are used when the input x(i) is a monaural signal. For the case of an Ambisonic signal set, a corresponding set of virtual loudspeaker signals is created and then convolved with the HRIRs to produce the binaural output signals using a known technique in one example.
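A minimal sketch of the finite impulse response filtering for the monaural case, assuming the head-related impulse responses for the desired (θ_S, φ_S, D_S) have already been selected from a database:

```python
import numpy as np

def binaural_synthesize(x, h_L, h_R):
    """FIR filtering of a monaural signal x with left/right head-related
    impulse responses h_L, h_R (each of length M + 1 for filter order M),
    producing the binaural pair (y_L, y_R) per the equations above."""
    y_L = np.convolve(x, h_L)[:len(x)]
    y_R = np.convolve(x, h_R)[:len(x)]
    return y_L, y_R
```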
A reverberation portion 205 adds reverberation to the binaural signals in this example. Reverberation algorithms are known in the art and a known algorithm is used in one example. The degree of reverberation may depend on the distance DS. In one example a larger value for DS corresponds to more reverberation. Likewise, reverberation may depend on the orientation of the portable communication device 20. If, for example, the device 20 is turned away from the observer, more reverberation may be added, since sound will arrive at the observer's location mostly via reflection rather than via direct path.
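Because the exact mapping from distance to the amount of reverberation is left open above, the following sketch uses one assumed heuristic in which the wet gain grows with D_S:

```python
def mix_reverberation(dry, wet, D_S, D_REF=1.0, max_wet=0.5):
    """Blend a reverberated signal into the dry signal with a wet gain
    that grows with source distance D_S and saturates at max_wet.
    The specific mapping is an assumed heuristic, not from the text."""
    wet_gain = max_wet * (1.0 - D_REF / max(D_S, D_REF))
    return (1.0 - wet_gain) * dry + wet_gain * wet
```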
The schematically illustrated modules and portions may comprise hardware (e.g., dedicated circuitry), software, firmware or a combination of two or more of these. Those skilled in the art who have the benefit of this description will realize which combination of these will provide the results required for their particular situation. Additionally, the individual modules and portions are divided for discussion purposes and the functions of each may be accomplished using the hardware, software or firmware that accomplishes a function schematically dedicated to another one of the illustrated portions or modules.
In one example, the audio device 30 comprises headphones having an associated position and orientation sensor 88.
The information from the position and orientation sensor 88 is provided to the position determination module 24 where it is used as described above.
The sound output from the headphones is adjusted to give the appearance that it is directed from the current position of the portable communication device 20. There are known techniques for adjusting the frequency and amplitude of the output from each of the individual speakers of the headphones to achieve a directional effect. One example includes the binaural sound reproduction techniques mentioned above to enhance the directional effect of the sound output.
Another feature of the illustrated example is that a plurality of individuals 230 and 230′, each wearing their own headphones 30 and 30′, may simultaneously receive sound output directed based on the position of the portable communication device 20.
As the portable communication device 20 moves to the position 244, the sound direction is continuously updated in one example. Those skilled in the art will realize that “continuously updated” as used in the context of this description will be dependent on the limitations of the programming or processors in the devices being used. When in the position 244, the sound direction 246 is used to control the sound output of the headphones 30 worn by the individual 230 and the sound output provided by the headphones 30′ worn by the individual 230′ is directed as schematically shown at 246′. This example demonstrates how a plurality of individuals each having their own audio device 30 can receive a sound output that has spatial cohesiveness with the position of the portable communication device 20.
Assume, for example, that the portable communication device 20 is in a first position shown at 236 relative to the speakers 30a-30e. The position determining module 24 determines the location and orientation of the portable communication device 20 and the sound control module 26 communicates that information to the speaker control driver 32. The resulting sound from the speakers has a sound direction shown schematically at 240. The manner in which the speakers are controlled causes a sound output that appears to be directed along the sound direction 240 so that the audio or sound output from the speakers 30a-30e and the video of the video output 22 come from approximately the same location as perceived by an observer.
There are known techniques for controlling speakers to achieve a desired sound direction of the sound output from the speakers. Such known techniques are used in one example implementation.
Assume now that an individual moves the portable communication device 20 from the position at 236 as schematically shown by the arrow 242. The portable communication device 20 eventually reaches another position 244. As can be appreciated from the illustrated example, the position 244 differs from the position 236 in location and orientation, and the sound direction from the speakers 30a-30e is adjusted accordingly.
The control over the sound direction is continuously updated in one example so that the sound direction moves as the portable communication device 20 moves.
In another example, as the portable communication device 20 is carried from the position at 36 to the position at 44, the position information is continuously updated and provided to the speaker control driver 32 so that the sound direction is continuously updated and continuously appears to emanate from the approximate location of the portable communication device 20. In one example, whenever the position determining module 24 detects some change in position (i.e., location or orientation) of the portable communication device 20, the sound control module 26 provides updated information to the speaker control driver 32 so that any necessary adjustment to the direction of the sound output may be made.
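A sketch of this update behavior follows; the position_module, sound_control and driver objects and their methods are hypothetical stand-ins for the schematically illustrated position determining module 24, sound control module 26 and speaker control driver 32:

```python
import time

def run_sound_control(position_module, sound_control, driver, poll_s=0.02):
    """Poll the device position and push updated sound control to the
    speaker control driver whenever the position (location or
    orientation) changes. All three objects are hypothetical."""
    last = None
    while True:
        pos = position_module.read()   # e.g., (x, y, z, azimuth, elevation, roll)
        if pos != last:
            driver.update(sound_control.compute(pos))   # redirect the sound
            last = pos
        time.sleep(poll_s)             # "continuous" within processing limits
```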
The preceding description is exemplary rather than limiting in nature. Variations and modifications to the disclosed examples may become apparent to those skilled in the art that do not necessarily depart from the essence of this invention. The scope of legal protection given to this invention can only be determined by studying the following claims.