The present disclosure is generally related to simulating acoustic output, and more particularly, to simulating acoustic output at a location corresponding to source position data.
Automobile speaker systems can provide announcement audio, such as advanced driver assistance system (ADAS) alerts, navigation alerts, and telephony audio, to occupants from permanent speakers mounted at fixed locations. Thus, for example, ADAS alerts are output from a single speaker (e.g., a driver's side front speaker) or from a set of speakers based on a predefined setting. In other examples, navigation alerts and telephone calls are projected from fixed speaker locations that provide the announcement audio throughout a vehicle.
In selected examples, a method includes receiving an audio signal and source position data associated with the audio signal. The method also includes applying a set of speaker driver signals to a plurality of speakers. The set of speaker driver signals causes the plurality of speakers to generate acoustic output that simulates output of the audio signal by an audio source at a location corresponding to the source position data.
In another aspect, an apparatus includes a plurality of speakers and an audio signal processor configured to receive an audio signal and source position data associated with the audio signal. The audio signal processor is also configured to apply a set of speaker driver signals to the plurality of speakers. The set of speaker driver signals causes the plurality of speakers to generate acoustic output that simulates output of the audio signal by an audio source at a location corresponding to the source position data.
In another aspect, a machine-readable storage medium has instructions stored thereon to simulate acoustic output. The instructions, when executed by a processor, cause the processor to receive an audio signal and source position data associated with the audio signal. The instructions, when executed by the processor, also cause the processor to apply a set of speaker driver signals to a plurality of speakers. The set of speaker driver signals causes the plurality of speakers to generate acoustic output that simulates output of the audio signal by an audio source at a location corresponding to the source position data.
Various other objects, features, and attendant advantages will become more fully appreciated as the disclosure becomes better understood when considered in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the several views.
In selected examples, an audio system dynamically selects locations in an acoustic space and precisely simulates announcement audio at those locations. Using an x-y coordinate grid outlining the acoustic space, the audio system applies speaker driver signals that simulate acoustic output at precise locations in response to prompts from, for example, an ADAS, a navigation system, or a mobile device. In one aspect, the audio system relocates the simulated locations over the acoustic space in real-time, whether inside or outside a vehicle that is in motion or at rest. Advantageously, the audio system supports ADAS, navigation, and telephony technologies in delivering greater customization and improvements to the vehicle transport experience.
The vehicle compartment shown in the accompanying figure includes an audio system 100. As shown in the figure, the vehicle compartment includes headrest speakers 122, 123 positioned in or near a headrest of a seat of a listener 150.
The vehicle compartment further includes two fixed speakers 132, 133 located on or in the driver side and front passenger side doors. In other examples, a greater number of speakers are located in different locations around the vehicle compartment. In some implementations, the fixed speakers 132, 133 are driven by a single amplified signal from the audio system 100, and a passive crossover network is embedded in the fixed speakers 132, 133 and used to distribute signals in different frequency ranges to the fixed speakers 132, 133. In other implementations, the amplifier module of the audio system 100 supplies a band-limited signal directly to each fixed speaker 132, 133. The fixed speakers 132, 133 can be full range speakers.
In some examples, each of the individual speakers 122, 123, 132, 133 corresponds to an array of speakers that enables more sophisticated shaping of sound, or a more economical use of space and materials to deliver a given sound pressure level. The headrest speakers 122, 123 and the fixed speakers 132, 133 are collectively referred to herein as real speakers, real loudspeakers, fixed speakers, or fixed loudspeakers interchangeably.
The grid 140 illustrates an acoustic space within which any location can be dynamically selected by the audio system 100 to generate acoustic output. In the illustrated example, the grid 140 extends both inside and outside the vehicle compartment.
In operation, the audio system 100 receives announcement audio and associated source position data from a source such as the ADAS 201, the navigation system 202, or the mobile device 203.
The audio system 100 determines a set of speaker driver signals 220 to apply to speakers 221 (e.g., the speakers 122, 123, 132, 133 described above).
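One way to map source position data onto a set of speaker driver signals is a per-speaker gain. The sketch below is illustrative only (the function and variable names are not from the disclosure, and the disclosed processing uses up-mixing and binaural filtering rather than a simple panning law); it assigns each speaker an inverse-distance, power-normalized gain for a requested (x, y) grid location:

```python
import math

def speaker_gains(source_xy, speaker_xys):
    """Compute one normalized gain per speaker so that speakers nearer
    the requested (x, y) source location are driven harder.

    A simple inverse-distance panning law for illustration; the real
    system may use a more sophisticated model."""
    weights = []
    for sx, sy in speaker_xys:
        d = math.hypot(source_xy[0] - sx, source_xy[1] - sy)
        weights.append(1.0 / (d + 1e-3))  # small offset avoids division by zero
    # Normalize so the total acoustic power is constant.
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]
```

For a source placed at one speaker's location, that speaker receives nearly all of the signal power while the others receive small residual gains.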
Advantageously, in particular examples, the audio system 100 of the present disclosure dynamically selects, in real-time (or near-real-time), the source positions from which audio output is perceived to be projected, such as when prompted by another device or system. The real and virtual speakers generate audio output that appears to project from these specific, discrete locations.
For example, virtual speakers (e.g., speakers 301-303) can be simulated at selected locations of the grid 140, such as the locations S1, S2, and S3 described below.
In accordance with the techniques of the present disclosure, the virtual speakers also have the ability to precisely simulate acoustic output at a specific location in response to, and when prompted by, multiple types of systems, including but not limited to the ADAS 201, the navigation system 202, and the mobile device 203.
As shown in the figure, the acoustic output is perceived by the listener 150 as arriving from the selected locations.
It should be noted that, in particular aspects, various signals assigned to each real and virtual speaker are superimposed to create an output signal, and some of the energy from each speaker can travel omnidirectionally (e.g., depending on frequency and speaker design). Accordingly, the arrows illustrated in the figure represent perceived directions of arrival rather than strictly directional acoustic propagation.
In some examples, the headrest speakers 122, 123 are used, with appropriate signal processing, to expand the spaciousness of the sound perceived by the listener 150, and more specifically, to control a sound stage. Perception of a sound stage, envelopment, and sound location is based on level and arrival-time (phase) differences between sounds arriving at both of the listener's ears. The sound stage is controlled, in particular examples, by manipulating audio signals produced by the speakers to control such inter-aural level and time differences. As described in commonly assigned U.S. Pat. No. 8,325,936, which is incorporated herein by reference, headrest speakers as well as fixed non-headrest speakers can be used to control spatial perception.
The listener 150 hears the real and virtual speakers near his or her head. Acoustic energy from the various real and virtual speakers will differ due to the relative distances between the speakers and the listener's ears, as well as due to differences in angles between the speakers and the listener's ears. Moreover, for some listeners, the anatomy of outer ear structures is not the same for the left and right ears. Human perception of the direction and distance of sound sources is based on a combination of arrival time differences between the ears, signal level differences between the ears, and the particular effect that the listener's anatomy has on sound waves entering the ears from different directions, all of which is also frequency-dependent. The combination of these factors at both ears, for an audio source at a particular x-y location of the grid 140 of
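The arrival-time and level differences described above can be illustrated with a simple two-point ear model. This is only a geometric sketch (it ignores head shadowing and the anatomy-dependent, frequency-dependent effects the disclosure mentions; the names and head dimensions are assumptions):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, approximate ear offset from head center

def interaural_cues(source_xy, head_xy=(0.0, 0.0)):
    """Estimate the interaural time difference (ITD, seconds) and
    level difference (ILD, dB) for a source at an (x, y) grid
    location, modeling each ear as a point offset along x."""
    left_ear = (head_xy[0] - HEAD_RADIUS, head_xy[1])
    right_ear = (head_xy[0] + HEAD_RADIUS, head_xy[1])
    dl = math.hypot(source_xy[0] - left_ear[0], source_xy[1] - left_ear[1])
    dr = math.hypot(source_xy[0] - right_ear[0], source_xy[1] - right_ear[1])
    itd = (dl - dr) / SPEED_OF_SOUND   # positive: right ear leads
    ild = 20.0 * math.log10(dl / dr)   # positive: right ear louder
    return itd, ild
```

A source to the listener's right yields a positive ITD and ILD, while a source directly ahead yields no interaural difference, consistent with the perception model described above.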
In a first illustrative non-limiting example, acoustic output 230 corresponding to the announcement audio that is perceived to originate from the location S1 (to the front-right of the listener 150) relates to the navigation system 202 informing the listener 150 that he or she is to make a right turn. Advantageously, because the simulated announcement audio is projected from a location in front of and to the right of the listener 150, the listener 150 quickly and easily comprehends the right-turn travel direction instruction with reduced thought or effort.
As a second illustrative non-limiting example, the acoustic output 230 projected from the example location S2 (behind and slightly to the left of the listener 150) relates to audio announcement output from the ADAS 201 warning the listener 150 that there is a vehicle in the listener's blind spot. Advantageously, the listener 150 would now quickly and easily know not to switch lanes to the left at that particular moment in time.
As a third illustrative non-limiting example, the location S2 relates to the audio announcement output from the mobile device 203, such as a mobile phone. Advantageously, as the acoustic output 230 is projected near the listener's ear, the listener 150 can take the call with greater privacy, and without disturbing other passengers in the vehicle. In this example, listener position data indicating a location of the listener 150 within the vehicle compartment is provided along with the source position data 212 (e.g., so that the acoustic output for the telephone call is projected near the correct driver/passenger's ears).
As a fourth illustrative non-limiting example, the listener 150 receives the acoustic output 230 simulated from the location S3 (outside the vehicle). In this example, the acoustic output 230 corresponds to announcement audio from the ADAS 201 informing the listener 150 that a pedestrian (or other object) has been detected to be walking (or moving) towards the vehicle from the location S3. Advantageously, the listener 150 can quickly and easily know to take precautions and avoid a collision with the pedestrian (or other object).
In one aspect, the audio system 100 is used in conjunction with the ADAS 201 to dynamically (e.g., in real-time or near-real-time) simulate acoustic output 230 from any location within the grid 140 for features including, but not limited to, rear cross traffic, blind spot recognition, lane departure warnings, intelligent headlamp control, traffic sign recognition, forward collision warnings, intelligent speed control, pedestrian detection, and low fuel. In another aspect, the audio system 100 is used in combination with the navigation system 202 to dynamically project audio output from any source position such that navigation commands or driving direction information can be simulated at precise locations within the grid 140. In a third aspect, the audio system 100 is used in conjunction with the mobile device 203 to dynamically simulate audio output from any source position such that a telephone call is presented in close proximity to any particular passenger sitting in any of the car seats within the vehicle compartment.
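The pairing of announcement types with source positions described above can be sketched as a lookup with a dynamic override for detections that carry their own location (e.g., a pedestrian). All names and coordinates below are hypothetical illustrations, not values from the disclosure:

```python
# Hypothetical mapping from announcement type to an (x, y) location
# on a grid like grid 140; the coordinates are illustrative only.
ALERT_POSITIONS = {
    "nav_turn_right":  (1.0, 1.0),    # front-right of the driver
    "blind_spot_left": (-1.0, -0.5),  # behind and to the left
    "pedestrian":      (2.0, 0.0),    # outside the vehicle
}

def source_position_for(alert_type, dynamic_xy=None):
    """Return source position data for an announcement. A dynamic
    position (e.g., a detected pedestrian's actual location) takes
    precedence over the static default for that alert type."""
    if dynamic_xy is not None:
        return dynamic_xy
    return ALERT_POSITIONS[alert_type]
```

This mirrors the distinction in the examples above between alerts tied to a fixed region of the cabin and alerts that track a moving object outside the vehicle.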
In the illustrated example, announcement audio is received by the audio system 100 along with audio source position data and is processed by an up-mixer module 503.
The up-mixer module 503 utilizes coordinates provided in the audio source position data to generate a vector of n gains, which assign varying levels of the input (announcement audio) signal to each of the up-mixed intermediate components C1-Cn. Next, the intermediate components C1-Cn are re-mixed into intermediate speaker signal components D1-Dm.
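The up-mixer's vector of n gains can be illustrated with a constant-power pan between intermediate components at nominal directions. This is a sketch under assumed conventions (sorted component angles, a planar angle derived from the coordinates); the disclosed up-mixer may compute its gain vector differently:

```python
import math

def upmix_gains(source_xy, component_angles):
    """Convert (x, y) source coordinates into an angle, then
    constant-power pan between the two intermediate components whose
    nominal directions (radians, sorted ascending) bracket it."""
    theta = math.atan2(source_xy[1], source_xy[0])
    gains = [0.0] * len(component_angles)
    for i in range(len(component_angles) - 1):
        a, b = component_angles[i], component_angles[i + 1]
        if a <= theta <= b:
            frac = (theta - a) / (b - a)
            gains[i] = math.cos(frac * math.pi / 2)      # fades out
            gains[i + 1] = math.sin(frac * math.pi / 2)  # fades in
            return gains
    # Outside the covered arc: assign fully to the nearest edge component.
    gains[0 if theta < component_angles[0] else -1] = 1.0
    return gains
```

Because the pair of active gains lies on a cosine/sine curve, the summed signal power stays constant as the source position moves between component directions.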
Binaural filters 505(1) through 505(p) then convert weighted sums of the intermediate speaker signal components D1-Dm into binaural image signals I1-Ip, where p is the total number of virtual speakers. The binaural image signals I1-Ip correspond to sound coming from the virtual speakers (e.g., speakers 301-303).
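The structure of this binaural filter stage, one filter pair per ear per virtual speaker, can be caricatured by delaying and attenuating the ear farther from the virtual speaker. Real binaural filters are measured head-related transfer function (HRTF) FIR pairs; the function below and its delay/attenuation model are assumptions for illustration only:

```python
def binaural_pair(mono, itd_samples, far_ear_atten):
    """Render a mono intermediate signal into a near-ear / far-ear
    pair: the far ear's copy is delayed by `itd_samples` samples and
    scaled by `far_ear_atten`. A structural sketch of a binaural
    filter pair, not a measured HRTF."""
    pad = [0.0] * itd_samples
    near = list(mono) + pad                        # leads in time
    far = pad + [s * far_ear_atten for s in mono]  # delayed and quieter
    return near, far
```

Feeding each weighted sum of D1-Dm through such a pair, and summing the per-ear results across virtual speakers, yields the left/right binaural image signals.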
The fixed speakers 122, 123, 132, and 133 transduce the speaker driver signals HL, HR, DL, and DR and thereby reproduce the announcement audio such that it is perceived by the listener as coming from the precise location indicated in the audio source position data.
One example of such a re-mixing procedure is described in commonly assigned U.S. Pat. No. 7,630,500, which is incorporated herein by reference.
It should also be noted that while a particular number of speakers and binaural filters is illustrated, other examples can include different numbers of speakers and filters.
The method 600 includes receiving an audio signal and source position data associated with the audio signal, at 602. For example, as described above, the audio system 100 receives announcement audio and associated source position data.
The method 600 also includes applying a set of speaker driver signals to a plurality of speakers, at 604. The set of speaker driver signals causes the plurality of speakers to generate acoustic output that simulates output of the audio signal by an audio source at a location corresponding to the source position data. For example, as described above, the audio system 100 applies the speaker driver signals 220 to the speakers 221 to generate the acoustic output 230 at the location indicated by the source position data 212.
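The two steps of method 600 can be sketched as a thin pipeline in which a pluggable gain function stands in for the up-mix and binaural processing described earlier. The structure and all names below are illustrative, not the disclosed implementation:

```python
def method_600(audio_signal, source_position, speakers, gain_fn):
    """Step 602: receive the audio signal and source position data.
    Step 604: derive one driver signal per speaker and apply it.
    `gain_fn` maps (source_position, speaker positions) to a gain per
    speaker and stands in for the actual signal processing chain."""
    gains = gain_fn(source_position, [spk["xy"] for spk in speakers])
    for spk, g in zip(speakers, gains):
        # "Applying" a driver signal is modeled as attaching a scaled
        # copy of the audio to each speaker record.
        spk["driver_signal"] = [s * g for s in audio_signal]
    return speakers
```

In this framing, replacing `gain_fn` with progressively richer processing (inverse-distance gains, up-mixing, binaural filtering) changes how the acoustic image is formed without changing the shape of the method.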
While examples have been discussed in which headrest mounted speakers are utilized, in combination with binaural filtering, to provide virtualized speakers, in some cases the speakers may be located elsewhere in proximity to an intended position of a listener's head, such as in the vehicle's headliner, visors, or B-pillars. Such speakers are referred to generally as "near-field speakers." In some examples, such near-field speakers are used in place of, or in addition to, the headrest speakers 122, 123.
In some examples, implementations of the techniques described herein include computer components and computer-implemented steps that will be apparent to those skilled in the art. In some examples, one or more signals or signal components described herein include a digital signal. In some examples, one or more of the system components described herein are digitally controlled, and the steps described with reference to various examples are performed by a processor executing instructions from a memory or other machine-readable or computer-readable storage medium.
It should be understood by one of skill in the art that the computer-implemented steps can be stored as computer-executable instructions on a computer-readable medium such as, for example, floppy disks, hard disks, optical disks, flash memory, nonvolatile memory, and random access memory (RAM). In some examples, the computer-readable medium is a computer memory device that is not a signal. Furthermore, it should be understood by one of skill in the art that the computer-executable instructions can be executed on a variety of processors such as, for example, microprocessors, digital signal processors, gate arrays, etc. For ease of description, not every step or element of the systems and methods described above is described herein as part of a computer system, but those skilled in the art will recognize that each step or element can have a corresponding computer system or software component. Such computer system and/or software components are therefore enabled by describing their corresponding steps or elements (that is, their functionality) and are within the scope of the disclosure.
Those skilled in the art can make numerous uses and modifications of and departures from the apparatus and techniques disclosed herein without departing from the inventive concepts. For example, components or features illustrated or described in the present disclosure are not limited to the illustrated or described locations. As another example, examples of apparatuses in accordance with the present disclosure can include all, fewer, or different components than those described with reference to one or more of the preceding figures. The disclosed examples should be construed as embracing each and every novel feature and novel combination of features present in or possessed by the apparatus and techniques disclosed herein and limited only by the scope of the appended claims, and equivalents thereof.
Number | Name | Date | Kind |
---|---|---|---|
6577738 | Norris et al. | Jun 2003 | B2 |
6778073 | Lutter | Aug 2004 | B2 |
7630500 | Beckman et al. | Dec 2009 | B1 |
7792674 | Dalton, Jr. et al. | Sep 2010 | B2 |
8218783 | Katzer et al. | Jul 2012 | B2 |
8325936 | Eichfeld et al. | Dec 2012 | B2 |
8483413 | Hartung et al. | Jul 2013 | B2 |
8724827 | Hartung et al. | May 2014 | B2 |
9049534 | Eichfeld et al. | Jun 2015 | B2 |
9100748 | Hartung et al. | Aug 2015 | B2 |
9100749 | Hartung et al. | Aug 2015 | B2 |
9167344 | Choueiri | Oct 2015 | B2 |
9338554 | Christoph | May 2016 | B2 |
9357304 | Christoph | May 2016 | B2 |
20030142835 | Enya et al. | Jul 2003 | A1 |
20040196982 | Aylward et al. | Oct 2004 | A1 |
20050213528 | Aarts et al. | Sep 2005 | A1 |
20060045294 | Smyth | Mar 2006 | A1 |
20070006081 | Maehata et al. | Jan 2007 | A1 |
20070053532 | Elliott et al. | Mar 2007 | A1 |
20080273722 | Aylward et al. | Nov 2008 | A1 |
20100158263 | Katzer et al. | Jun 2010 | A1 |
20130136281 | Steffens | May 2013 | A1 |
20130177187 | Mentz | Jul 2013 | A1 |
20130178967 | Mentz | Jul 2013 | A1 |
20140119581 | Tsingos et al. | May 2014 | A1 |
20140133658 | Mentz et al. | May 2014 | A1 |
20140133672 | Lakkundi | May 2014 | A1 |
20140334637 | Oswald et al. | Nov 2014 | A1 |
20140334638 | Barksdale et al. | Nov 2014 | A1 |
20140348354 | Christoph et al. | Nov 2014 | A1 |
20150242953 | Suiter | Aug 2015 | A1 |
20160142852 | Christoph | May 2016 | A1 |
Number | Date | Country |
---|---|---|
1858296 | Nov 2007 | EP |
2445759 | May 2012 | EP |
2816824 | Dec 2014 | EP |
2009012496 | Jan 2009 | WO |
2012141057 | Oct 2012 | WO |
2014035728 | Mar 2014 | WO |
2014043501 | Mar 2014 | WO |
2014159272 | Oct 2014 | WO |
Entry |
---|
International Search Report and Written Opinion dated Mar. 16, 2017 for PCT/US2016/046660. |
Invitation to Pay Additional Fees dated Nov. 7, 2016 for PCT/US2016/046660. |
International Search Report and Written Opinion dated Oct. 7, 2016 for PCT/US2016/040270. |
International Search Report and Written Opinion dated Oct. 6, 2016 for PCT/US2016/040285. |
Number | Date | Country | |
---|---|---|---|
20170013385 A1 | Jan 2017 | US |