Embodiments of the present disclosure relate generally to audio reproduction and, more specifically, to acoustic crosstalk cancellation based upon user position and orientation within an environment.
Audio processing systems use one or more speakers to produce sound in a given space. The one or more speakers generate a sound field, where a user in the environment receives the sound included in the sound field. The one or more speakers reproduce sound based on an input signal that typically includes at least two channels, such as a left channel and a right channel. The left channel is intended to be received by the user's left ear, and the right channel is intended to be received by the user's right ear. Binaural rendering algorithms for producing sound using one or more speakers rely on crosstalk cancellation algorithms. These crosstalk cancellation algorithms rely on measurements taken at a specific location or they rely on a mathematical model that attempts to characterize transmission paths of audio from speakers to the entrance of the ear canals of users.
At least one drawback with conventional audio playback systems is crosstalk between left and right channels. In other words, sound produced in the environment by the left channel of the one or more speakers is received by the right ear of the user. Similarly, sound produced in the environment by the right channel of the one or more speakers is received by the left ear of the user. Some audio processing and playback systems utilize conventional crosstalk cancellation techniques. Some techniques are narrowly tuned to work at a specific point in three-dimensional space and break down if the user moves or rotates his or her head. Other techniques rely on parametric models to characterize the geometry of a given three-dimensional space in which a user exists. However, these techniques are either over-tuned or offer poor crosstalk cancellation performance. As a result, conventional techniques for reducing crosstalk when playing back audio in a three-dimensional space do not adequately handle the movement of the user.
As the foregoing illustrates, what is needed in the art are more effective techniques for reducing crosstalk when producing sound received by a user in a three-dimensional environment.
Various embodiments disclose a computer-implemented method comprising determining a first position and a first orientation of a user in an environment, identifying a first point based on the first position and the first orientation of the user in a dimensional map, the dimensional map associating a plurality of transfer functions with a corresponding plurality of points corresponding to positions and orientations in a multi-dimensional space, determining at least one crosstalk cancellation filter based on the plurality of transfer functions, generating a plurality of audio signals for a plurality of loudspeakers based on the at least one crosstalk cancellation filter, and transmitting the plurality of audio signals to the plurality of loudspeakers for output.
Further embodiments provide, among other things, one or more non-transitory computer-readable media and systems configured to implement the method set forth above.
At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, an audio processing system can select transfer functions, applied to each audio channel, that modify the audio output by one or more speakers to improve the performance of crosstalk cancellation. The transfer functions modify the audio input that is then played back by one or more speakers of a playback system. By improving the performance of crosstalk cancellation, spectral distortions caused by user movements are reduced. Additionally, the audio received by the user's left ear and right ear, respectively, more accurately represents the audio input provided to the audio processing and playback system. These technical advantages provide one or more technological advancements over prior art approaches.
So that the manner in which the above recited features of the various embodiments can be understood in detail, a more particular description of the inventive concepts, briefly summarized above, may be had by reference to various embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of the inventive concepts and are therefore not to be considered limiting of scope in any way, and that there are other equally effective embodiments.
In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.
In operation, the audio processing system 100 processes sensor data from the one or more sensors 150 to track the location of one or more listeners within the listening environment. The one or more sensors 150 track the position of a listener's head in three-dimensional space as well as the pitch, yaw, and roll of the listener's head, which are used to determine the relative locations of the user's left ear and right ear, respectively. Based upon the position and/or orientation of a listener's head within a three-dimensional environment, the crosstalk cancellation application 120 selects one or more transfer functions 132 utilized for one or more filters 138 that are used to process the audio source 140 for playback by one or more speakers 160 associated with the audio processing system 100. Additionally, should the position of the listener's head in a three-dimensional space change during playback of the audio source 140, crosstalk cancellation application 120 selects different transfer functions 132 and potentially a different filter 138 that is used to process the audio source 140 for playback via one or more speakers 160.
The computing device 110 is a device that drives speakers 160 to generate, in part, a sound field for a listener by playing back an audio source 140. In various embodiments, the computing device 110 is an audio processing unit in a home theater system, a soundbar, a vehicle system, and so forth. In some embodiments, the computing device 110 is included in one or more devices, such as consumer products (e.g., portable speakers, gaming, etc. products), vehicles (e.g., the head unit of a car, truck, van, etc.), smart home devices (e.g., smart lighting systems, security systems, digital assistants, etc.), communications systems (e.g., conference call systems, video conferencing systems, speaker amplification systems, etc.), and so forth. In various embodiments, the computing device 110 is located in various environments including, without limitation, indoor environments (e.g., living room, conference room, conference hall, home office, etc.), and/or outdoor environments, (e.g., patio, rooftop, garden, etc.).
The processing unit 112 can be any suitable processor, such as a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), and/or any other type of processing unit, or a combination of processing units, such as a CPU configured to operate in conjunction with a GPU. In general, the processing unit 112 can be any technically feasible hardware unit capable of processing data and/or executing software applications.
Memory 114 can include a random-access memory (RAM) module, a flash memory unit, or any other type of memory unit or combination thereof. The processing unit 112 is configured to read data from and write data to the memory 114. In various embodiments, the memory 114 includes non-volatile memory, such as optical drives, magnetic drives, flash drives, or other storage. In some embodiments, separate data stores, such as external data stores included in a network (“cloud storage”), can supplement the memory 114. The crosstalk cancellation application 120 within the memory 114 can be executed by the processing unit 112 to implement the overall functionality of the computing device 110 and, thus, to coordinate the operation of the audio processing system 100 as a whole. In various embodiments, an interconnect bus (not shown) connects the processing unit 112, the memory 114, the speakers 160, the sensors 150, and any other components of the computing device 110.
The crosstalk cancellation application 120 determines the location of a listener within a listening environment and selects parameters for one or more filters 138, such as one or more transfer functions 132, to generate a sound field for the location of the listener. The transfer functions 132 are selected to minimize or eliminate crosstalk. The transfer functions 132 cause the filters 138 to produce audio in the sound field so that the left channel is perceived by the left ear of the listener with minimal crosstalk from the right channel. Similarly, the transfer functions 132 cause the filters 138 to produce audio in the sound field so that the right channel is perceived by the right ear of the listener with minimal crosstalk from the left channel. In various embodiments, the crosstalk cancellation application 120 utilizes sensor data from sensors 150 to identify the position of the listener, and specifically the head of the listener. Based upon the position and orientation of the listener, crosstalk cancellation application 120 selects appropriate filters 138 and transfer functions 132 that are utilized to process the audio source 140 for playback. In some embodiments, the crosstalk cancellation application 120 sets the parameters for multiple filters 138 corresponding to multiple speakers 160. For example, a first transfer function 132 can be utilized for a first filter 138 that is utilized for audio played back by a first speaker 160, and a second transfer function 132 can be utilized by a second filter 138 that is utilized for audio played back by a second speaker 160. In other embodiments, a filter network is utilized such that a signal used to drive each speaker 160 is passed through a network of multiple filters. Additionally or alternatively, the crosstalk cancellation application 120 tracks the positions and orientations of multiple listeners.
The filters 138 include one or more filters that modify an input audio source 140. In various embodiments, a given filter 138 modifies the input audio signal by modifying the energy within a specific frequency range, adding directivity information, and so forth. For example, the filter 138 can include filter parameters, such as a set of values that modify the operating characteristics (e.g., center frequency, gain, Q factor, cutoff frequencies, etc.) of the filter 138. In some embodiments, the filter parameters include one or more digital signal processing (DSP) coefficients that steer the generated soundwave in a specific direction. In such instances, the generated filtered audio signal is used to generate a soundwave in the direction specified in the filtered audio signal. For example, the one or more speakers 160 reproduce audio using one or more filtered audio signals to generate a sound field. In some embodiments, the crosstalk cancellation application 120 sets separate filter parameters, such as selecting a different transfer function 132 for separate filters 138 for different speakers 160. In such instances, one or more speakers 160 generate the sound field using the separate filters 138. For example, each filter 138 can generate a filtered audio signal for a single speaker 160 within the listening environment.
Transfer functions 132 include one or more transfer functions that are utilized to configure one or more filters 138 selected by crosstalk cancellation application 120 to process an input signal, such as a channel of the audio source 140, to produce an output signal used to drive a speaker 160. Different transfer functions 132 are utilized depending upon the position and orientation of a listener in a three-dimensional space.
In some embodiments, the dimensional map 134 maps a given position within a three-dimensional space, such as a vehicle interior, to filter parameters for one or more filters 138, such as one or more finite impulse response (FIR) filters. In various embodiments, the crosstalk cancellation application 120 determines a position and orientation of the listener based on data from sensors 150 and identifies transfer functions 132 or other filter parameters for filters 138 corresponding to each speaker 160. The crosstalk cancellation application 120 then updates the filter parameters for a specific speaker (e.g., a first filter 138(1) for a first speaker 160(1)) when the head of the listener moves. For example, the crosstalk cancellation application 120 can initially generate filter parameters for a set of filters 138. Upon determining that the head of the listener has moved to a new position or orientation, the crosstalk cancellation application 120 then determines whether any of the speakers 160 require updates to the corresponding filters 138. The crosstalk cancellation application 120 updates the filter parameters for any filter 138 that requires updating. In some embodiments, crosstalk cancellation application 120 generates each of the filters 138 independently. For example, upon determining that a listener has moved, the crosstalk cancellation application 120 can update the filter parameters for a filter 138 (e.g., 138(1)) for a specific speaker 160 (e.g., 160(1)). Alternatively, the crosstalk cancellation application 120 updates multiple filters 138.
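As a concrete illustration of how selected filter parameters such as FIR coefficients might be applied to an audio block, consider the following minimal sketch. This is a hypothetical Python example, not part of the disclosed system; the function name and the use of plain coefficient lists are assumptions made for illustration.

```python
def apply_fir(coefficients, samples):
    """Convolve an audio block with a list of FIR filter coefficients.

    A hypothetical stand-in for a filter 138 configured with parameters
    selected from the dimensional map 134.
    """
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coefficients):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out
```

Under this sketch, updating the filter for a specific speaker when the listener moves amounts to swapping in a new coefficient list before filtering the next audio block.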
The dimensional map 134 includes a plurality of points that represent a position and orientation in a three-dimensional space (e.g., points within a six-dimensional space identified by x, y, and z position coordinates and roll, pitch, and yaw orientation angles). The dimensional map 134 maps position relative to a reference position in a given environment. The dimensional map 134 further maps orientation relative to a reference orientation in the environment. The dimensional map 134 can be generated by conducting acoustic measurements in the three-dimensional space for filter parameters, such as transfer functions 132, that minimize or eliminate crosstalk. The dimensional map 134 is then saved on the audio processing system 100 and used to configure filters 138 utilized by computing device 110 to minimize or eliminate crosstalk during playback of an audio source 140. In some embodiments, the dimensional map 134 includes specific coordinates relative to a reference point. For example, the dimensional map 134 can store the potential positions and orientations of the head of a listener as a distance and angle from a specific reference point. In some embodiments, the dimensional map 134 can include additional orientation information, such as pitch, yaw, and roll, that characterizes the orientation of the head of the listener. Dimensional map 134 could also represent orientation as a set of angles (e.g., θ, φ, ψ) relative to a normal orientation of the head of the listener. In such instances, a respective position and orientation defined by a point in dimensional map 134 is associated with one or more transfer functions 132 utilized for a filter 138. In one example, the dimensional map 134 is structured as a set of points, each of which is associated with a particular position and orientation in an environment. Each of the points is associated with one or more filters 138 and/or transfer functions 132 that can be utilized for each of the speakers 160 to reduce or eliminate crosstalk.
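The structure described above can be sketched as a list of points, each pairing a six-dimensional pose with per-speaker transfer-function parameters. The data below is purely illustrative (a real dimensional map 134 would be populated from acoustic measurements), and the key names and values are assumptions.

```python
# Illustrative structure only; keys, speaker identifiers, and coefficient
# values are hypothetical placeholders.
dimensional_map = [
    {
        "position": (0.0, 0.0, 1.2),      # (x, y, z) from the reference position
        "orientation": (0.0, 0.0, 0.0),   # (roll, pitch, yaw) from the reference
        "transfer_functions": {"left": [1.0], "right": [1.0]},
    },
    {
        "position": (0.1, 0.0, 1.2),
        "orientation": (0.0, 0.0, 15.0),
        "transfer_functions": {"left": [0.9], "right": [1.1]},
    },
]
```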
Crosstalk cancellation application 120 selects transfer functions 132 to configure filters 138, where the transfer functions 132 are identified by the dimensional map 134. The transfer functions 132 are used to configure filters 138 that process an audio source 140. Transfer functions 132 are identified based on a mathematical distance, such as a barycentric distance, of a set of coordinates characterizing the position and orientation of the listener's head to one or more of the points from the set of points in the dimensional map 134. In one example, a given position and orientation of a user is characterized by coordinates in six-dimensional space. In some embodiments, a nearest set of points to the coordinates is then identified within the dimensional map 134 using a search over a Delaunay triangulation of the points. A barycentric distance to each of the nearest set of points is determined, and the transfer functions 132 associated with the closest point in the dimensional map 134 are used to configure filters 138 that filter the audio signal 140 that is played back.
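The nearest-point lookup can be sketched with a plain Euclidean distance in six dimensions. This simplification stands in for the fuller Delaunay-plus-barycentric approach described above; the function name, the dictionary layout of map points, and the equal weighting of position and orientation coordinates are all assumptions made for illustration.

```python
import math

def nearest_point(points, position, orientation):
    """Return the map point closest to the listener's measured pose.

    Position coordinates and orientation angles are compared with equal
    weight here, which is a simplifying assumption; a real system would
    likely scale or weight the coordinates before comparison.
    """
    query = (*position, *orientation)

    def distance(point):
        coords = (*point["position"], *point["orientation"])
        return math.dist(query, coords)

    return min(points, key=distance)
```

The transfer functions associated with the returned point would then be used to configure the filters for playback.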
As another example, a simplified approach to identifying transfer functions 132 includes reducing the number of dimensions of a user's position and orientation that are considered when identifying a set of transfer functions specified by the dimensional map 134. As noted above, the dimensional map 134 includes a set of points in six-dimensional space to account for three parameters representing position and three parameters representing orientation. To reduce mathematical complexity, a reduced set of parameters representing the position and orientation of the user can be considered. For example, one or more of the parameters representing orientation can be removed, and a nearest set of points is identified based on the mathematical distance from coordinates characterizing the position and orientation of the listener's head to one or more of the points from the set of points in the dimensional map 134. Examples of coordinates that can be removed include yaw, pitch, and/or roll angles. In one scenario, only the position of the user's head and a yaw angle are considered, which reduces complexity to a consideration of four dimensions. As another example, only the position of the user's head along with yaw and pitch angles are considered, which reduces complexity to five dimensions.
As another example, an alternative simplified approach to identifying transfer functions 132 includes reducing dimensionality of the dimensional map 134. As noted above, the dimensional map 134 includes a set of points in six-dimensional space to account for three parameters representing position and three parameters representing orientation. To reduce mathematical complexity, a dimensional map 134 that includes a set of points mapped in three-, four-, or five-dimensional space can be generated and utilized. For example, the dimensional map 134 can map only the position of the user's head in three-dimensional space and a yaw angle representing orientation, resulting in a four-dimensional map. As another example, the dimensional map 134 maps only the position of the user's head and two parameters characterizing orientation, which reduces complexity of the dimensional map 134 to five dimensions.
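Both reduction approaches above amount to projecting a full six-dimensional pose onto a smaller set of coordinates before any distance comparison. A minimal sketch, assuming a (roll, pitch, yaw) ordering of orientation angles and a hypothetical function name:

```python
def reduced_query(position, orientation, keep=("yaw",)):
    """Project a six-dimensional pose onto position plus a chosen subset
    of orientation angles.

    Assumes orientation is given in (roll, pitch, yaw) order; dropping
    angles reduces the dimensionality of the nearest-point search.
    """
    names = ("roll", "pitch", "yaw")
    kept = tuple(angle for name, angle in zip(names, orientation) if name in keep)
    return (*position, *kept)
```

Keeping only the yaw angle yields a four-dimensional query; keeping pitch and yaw yields a five-dimensional one, matching the scenarios described above.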
Another example of a simplified approach to reducing dimensionality of the dimensional map 134 is to use multiple dimensional maps 134 that each include three dimensions representing position in three-dimensional space. Each of the three-dimensional maps is associated with a particular orientation parameter or a range of the orientation parameter. For example, each of the three-dimensional maps is associated with a yaw angle or a range of yaw angles. In one scenario, a first three-dimensional map is associated with a yaw angle of zero to ten degrees, a second three-dimensional map is associated with a yaw angle of greater than ten to twenty degrees, and so on. In this approach, based on a detected yaw angle of the user's head, a three-dimensional map is selected. Then, based on coordinates representing the user's detected position, a point corresponding to transfer functions 132 within the three-dimensional map is identified, and the transfer functions 132 are used to configure a filter 138.
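The map-selection step in this approach can be sketched as a range lookup keyed on the detected yaw angle. The half-open ranges and the function name below are simplifying assumptions for illustration only.

```python
def select_map(maps_by_yaw_range, yaw):
    """Pick the three-dimensional map whose yaw-angle range contains the
    detected yaw angle.

    Ranges are treated as half-open intervals [low, high), a simplifying
    assumption; a real system might handle boundary angles differently.
    """
    for (low, high), three_d_map in maps_by_yaw_range.items():
        if low <= yaw < high:
            return three_d_map
    raise ValueError(f"no map covers yaw angle {yaw!r}")
```

The position-only nearest-point search would then be run within the selected map.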
The sensors 150 include various types of sensors that acquire data about the listening environment. For example, the computing device 110 can include auditory sensors to receive several types of sound (e.g., subsonic pulses, ultrasonic sounds, speech commands, etc.). In some embodiments, the sensors 150 include other types of sensors, such as optical sensors (e.g., RGB cameras, time-of-flight cameras, infrared cameras, depth cameras, or a quick response (QR) code tracking system), motion sensors (e.g., an accelerometer or an inertial measurement unit (IMU), such as a three-axis accelerometer, gyroscopic sensor, and/or magnetometer), pressure sensors, and so forth. In addition, in some embodiments, sensor(s) 150 can include wireless sensors, including sonar, radio frequency (RF) sensors (e.g., radar), and/or sensors that use wireless communications protocols, including Bluetooth, Bluetooth low energy (BLE), cellular protocols, and/or near-field communications (NFC). In various embodiments, the crosstalk cancellation application 120 uses the sensor data acquired by the sensors 150 to identify transfer functions 132 utilized for filters 138. For example, the computing device 110 can include one or more emitters that emit positioning signals and one or more detectors that generate auditory data including the positioning signals. In some embodiments, the crosstalk cancellation application 120 combines multiple types of sensor data. For example, the crosstalk cancellation application 120 can combine auditory data and optical data (e.g., camera images or infrared data) in order to determine the position and orientation of the listener at a given time.
Crosstalk cancellation application 120 determines the position and orientation of the head of the listener 202 based on sensor data from sensors 150, such as one or more cameras or other devices that detect a position or orientation of the listener 202. Crosstalk cancellation application 120 further determines, based on a dimensional map 134, the distance of the parameters characterizing the position and orientation of the head of the listener 202 to one or more points within the dimensional map 134. In one example, crosstalk cancellation application 120 calculates a mathematical distance, such as a barycentric distance or a Euclidean distance, of the position and orientation of the head of the listener 202 from points within the dimensional map 134. The crosstalk cancellation application 120 then identifies transfer functions 132 associated with the nearest point according to the calculated barycentric or Euclidean distance.
In the example of
V1 and V2 represent respective filtered portions of the audio source 140a that are filtered by filters H1,1 and H1,2, and output to one or more speakers 160, respectively. V3 and V4 represent respective filtered portions of the audio source 140b that are filtered by filters H2,1 and H2,2, and output to one or more speakers 160, respectively. Therefore, when the environment alters the signals output by the filters and played back by one or more speakers 160 according to C1,1, C1,2, C2,1, and C2,2, the signals reaching the ears of the listener 202 have reduced or eliminated crosstalk. As shown in
As noted above, crosstalk cancellation application 120 selects transfer functions 132 that are used to configure a set of filters H1,1, H1,2, H2,1, and H2,2 that filter audio source 140a and audio source 140b based on the position and orientation of the listener 202. The position and orientation of the listener 202 are determined based upon sensor data from one or more sensors 150. As the position and/or orientation of the listener 202 changes, crosstalk cancellation application 120 updates the transfer functions 132 used to configure the filters H1,1, H1,2, H2,1, and H2,2 by determining whether the movement of the listener 202 to an updated position or orientation corresponds to a different set of transfer functions 132 defined by the dimensional map 134. In this way, the crosstalk cancellation application 120 performs crosstalk cancellation based on the current position and orientation of the listener 202 as well as when the listener 202 adjusts position and/or orientation within a given three-dimensional space characterized by the dimensional map 134.
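The 2x2 filter network described above can be sketched with scalar gains standing in for the filters H1,1 through H2,2. This is a deliberate simplification (real crosstalk cancellation filters apply frequency-dependent transfer functions, not single gains), and the function name is hypothetical.

```python
def speaker_feeds(h, source_a, source_b):
    """Mix two source channels through a 2x2 network of scalar gains.

    h[i][j] stands in for filter H(i+1),(j+1): the contribution of source
    i to speaker feed j. Real filters would be frequency dependent.
    """
    feeds = []
    for j in range(2):  # one feed per speaker
        feeds.append([h[0][j] * a + h[1][j] * b
                      for a, b in zip(source_a, source_b)])
    return feeds
```

The off-diagonal entries are where cancellation happens: they inject a counter-signal into the opposite speaker so that, after the acoustic paths C1,1 through C2,2, leakage at the wrong ear is reduced.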
Method 400 begins at step 402, where crosstalk cancellation application 120 determines a position and an orientation of the listener 202 within an environment. The environment includes a space in which audio is played back by one or more speakers 160, such as the interior of a vehicle or any other interior or exterior environment. Crosstalk cancellation application 120 determines the position and orientation of the listener 202 based upon sensor data obtained from sensors 150 associated with an audio processing system 100. As noted above, the sensors 150 include optical sensors, pressure sensors, proximity sensors, and other sensors that obtain information about the environment and the position and orientation of the listener 202 within the environment. The position of the listener 202 is determined relative to a reference position within the environment based upon sensor data from the sensors 150. The orientation of the listener 202 is also determined relative to a reference orientation within the environment. In some embodiments, crosstalk cancellation application 120 determines the position and orientation of the head and/or ears of the listener 202 based upon the sensor data.
At step 404, crosstalk cancellation application 120 identifies a point within a dimensional map 134 based on the position and orientation of the listener 202 within the environment. In one example, a given position and orientation of a user is characterized by coordinates in six-dimensional space. A nearest point to the coordinates is then identified within the dimensional map 134.
In some embodiments, crosstalk cancellation application 120 selects transfer functions 132 associated with the closest point in the dimensional map 134, which are used to configure filters 138 that filter the audio signal 140 that is played back. In other embodiments, a simplified approach to identifying a point based on the position and orientation of the listener 202 includes reducing the number of dimensions of a user's position and orientation that are considered when identifying a point associated with the listener 202 in the dimensional map 134. To reduce mathematical complexity, a reduced set of parameters representing the position and orientation of the user can be considered. For example, one or more of the parameters representing orientation can be removed, and a nearest set of points is identified based on the mathematical distance from coordinates characterizing the position and orientation of the listener's head to one or more of the points from the set of points in the dimensional map 134. Examples of coordinates that can be removed include yaw, pitch, and/or roll angles. As another example, an alternative simplified approach to identifying transfer functions 132 includes reducing dimensionality of the dimensional map 134. As noted above, the dimensional map 134 includes a set of points in six-dimensional space to account for three parameters representing position and three parameters representing orientation. To reduce mathematical complexity, a dimensional map 134 that includes a set of points mapped in three-, four-, or five-dimensional space can be generated and utilized. For example, the dimensional map 134 can map only the position of the user's head in three-dimensional space and a yaw angle representing orientation, resulting in a four-dimensional map. As another example, the dimensional map 134 maps only the position of the user's head and two parameters characterizing orientation, which reduces complexity of the dimensional map 134 to five dimensions.
In any of the above scenarios, the crosstalk cancellation application 120 identifies a point within the dimensional map 134 that is closest to the point characterizing at least some parameters corresponding to the position and orientation of the listener 202.
At step 406, crosstalk cancellation application 120 identifies transfer functions 132 specified by the point in the dimensional map 134 based on the position and orientation of the listener 202. The transfer functions 132 are used to configure one or more filters 138 that reduce or eliminate crosstalk from audio that is played back by one or more speakers 160. In other words, the transfer functions 132 are used to model the output of a filter 138 given a particular audio signal that is provided as an input to the filter 138.
At step 408, crosstalk cancellation application 120 configures the one or more filters 138 using the transfer functions 132 identified at step 406. Crosstalk cancellation application 120 applies the transfer functions 132 to the filters 138 that are used to filter audio signals that are in turn provided to one or more speakers 160 for playback within the environment.
At step 410, crosstalk cancellation application 120 generates audio signals for playback based on the filters 138 configured with the identified transfer functions 132. The audio signals are generated based upon an audio source 140 that is being played back by audio processing system 100 within the environment, such as a song or other audio input provided to the audio processing system 100. The audio source 140 includes a left channel and a right channel. Crosstalk cancellation application 120 filters the audio source 140 using the filters 138 that are configured with the transfer functions 132 that were selected based upon the position and orientation of the listener 202. When played back in the environment, the filtered audio signals arrive at the left and right ear of the listener 202, respectively, with crosstalk being reduced or eliminated.
At step 412, crosstalk cancellation application 120 outputs the filtered audio signals to one or more speakers 160 associated with audio processing system 100. The one or more speakers 160 play back the filtered audio signals in the environment. The one or more speakers 160 include one or more speakers corresponding to a left channel of the audio processing system 100 and one or more speakers corresponding to a right channel of the audio processing system 100.
At step 414, crosstalk cancellation application 120 determines whether there is a change in the position or orientation of the listener 202. If there is a change in the position or orientation of the listener 202, method 400 returns to step 402, where crosstalk cancellation application 120 determines an updated position and orientation of the listener 202 and identifies new transfer functions 132 with which to update the filters 138. If the position and orientation of the listener 202 are unchanged, the method 400 returns to step 412, where crosstalk cancellation application 120 continues to output audio signals filtered using the transfer functions 132 identified at step 406.
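The loop through steps 402 to 414 can be summarized in a compact sketch. The polling structure and the callback names below are assumptions made for illustration; the key behavior shown is that filters are re-selected only when the listener's pose changes.

```python
def playback_loop(get_pose, lookup_filters, render, frames):
    """Track-and-update loop: re-select filters only when the listener's
    pose (position and orientation) changes; otherwise keep the current
    filters and continue rendering output frames."""
    last_pose = None
    filters = None
    outputs = []
    for _ in range(frames):
        pose = get_pose()                    # sense and locate the listener
        if pose != last_pose:
            filters = lookup_filters(pose)   # reconfigure from the dimensional map
            last_pose = pose
        outputs.append(render(filters))      # filter the audio and output it
    return outputs
```

In this sketch, `lookup_filters` is invoked once per distinct pose rather than once per frame, mirroring the branch at step 414 back to either step 402 or step 412.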
Method 500 begins at step 402, where crosstalk cancellation application 120 determines a position and orientation of a listener 202 within an environment. The environment includes a space in which audio is played back by one or more speakers 160, such as the interior of a vehicle, a room within a building, or an exterior environment. Crosstalk cancellation application 120 determines the position and orientation of a listener 202 based upon sensor data obtained from sensors 150 associated with an audio processing system 100. As noted above, the sensors 150 include optical sensors, pressure sensors, proximity sensors, and other sensors that obtain information about the environment and the position and orientation of the listener 202 within the environment. The position of the listener 202 is determined relative to a reference position within the environment based upon sensor data from the sensors 150. The orientation of the listener 202 is also determined relative to a reference orientation within the environment. In some embodiments, crosstalk cancellation application 120 determines the position and orientation of the head and/or ears of the listener 202 based upon the sensor data.
At step 504, crosstalk cancellation application 120 selects a dimensional map 134 from multiple dimensional maps 134. As noted above, the crosstalk cancellation application 120 can utilize multiple dimensional maps 134 that include three dimensions representing position in three-dimensional space. Each of the three-dimensional maps is associated with a particular orientation parameter or a range of the orientation parameter. For example, each of the three-dimensional maps can be associated with a yaw angle or a range of yaw angles. Accordingly, the crosstalk cancellation application 120 selects the dimensional map 134 corresponding to the yaw angle of the listener 202 detected based on sensor data from the sensors 150, or based on another orientation parameter that is utilized for the multiple dimensional maps 134. As additional examples, the multiple dimensional maps 134 can include four- or five-dimensional maps representing three position parameters and one or two orientation parameters, respectively.
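For illustration only, the map selection at step 504 can be sketched as follows. The map identifiers and yaw-angle ranges below are hypothetical and are not part of the disclosure; the sketch assumes each dimensional map is keyed by a contiguous range of yaw angles.

```python
# Illustrative sketch only: selecting a three-dimensional map based on the
# detected yaw angle of the listener. Map names and angle ranges are
# hypothetical examples, not values specified by the disclosure.

def select_dimensional_map(yaw_deg, maps):
    """Return the map whose yaw-angle range contains the detected yaw angle.

    `maps` is a list of (yaw_min, yaw_max, map_data) tuples that together
    cover the supported range of listener orientations.
    """
    for yaw_min, yaw_max, map_data in maps:
        if yaw_min <= yaw_deg < yaw_max:
            return map_data
    raise ValueError("no dimensional map covers yaw angle %r" % (yaw_deg,))

# Hypothetical maps, each covering a 30-degree band of yaw angles.
maps = [
    (-45.0, -15.0, "map_left"),
    (-15.0,  15.0, "map_center"),
    ( 15.0,  45.0, "map_right"),
]
print(select_dimensional_map(10.0, maps))  # map_center
```

An analogous lookup could key the maps on any other orientation parameter, such as pitch, when that parameter is the one used to partition the multiple dimensional maps.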
At step 506, crosstalk cancellation application 120 identifies a point within the selected dimensional map 134 that corresponds to the position of the listener 202 and, in some implementations, to some of the orientation parameters characterizing the orientation of the listener 202 within the environment. For example, assuming that a dimensional map 134 was selected based on yaw angle, crosstalk cancellation application 120 forms coordinates from the position and the remaining orientation parameters, such as the roll and pitch angles. The crosstalk cancellation application 120 then identifies the point within the dimensional map 134 that is nearest to the coordinates representing the position and the remaining orientation parameters of the listener 202.
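The nearest-point identification at step 506 can be sketched, for illustration only, as a minimum-distance search. The coordinate layout (x, y, z, roll, pitch) and all values below are hypothetical assumptions; a practical implementation might also weight or normalize position and angle dimensions differently before comparing them.

```python
import math

# Illustrative sketch only: finding the map point nearest to the listener's
# coordinates. Each map point pairs hypothetical coordinates
# (x, y, z, roll, pitch) with an identifier for its transfer functions.

def nearest_point(listener_coords, map_points):
    """Return the map point with minimum Euclidean distance to the listener."""
    def distance(point):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(listener_coords, point["coords"])))
    return min(map_points, key=distance)

# Hypothetical map points within one selected dimensional map.
map_points = [
    {"coords": (0.0, 0.0, 1.0, 0.0, 0.0), "transfer_fns": "tf_a"},
    {"coords": (0.2, 0.1, 1.0, 5.0, 0.0), "transfer_fns": "tf_b"},
]
listener = (0.19, 0.1, 1.0, 4.0, 0.0)
print(nearest_point(listener, map_points)["transfer_fns"])  # tf_b
```

For large maps, a spatial index such as a k-d tree could replace the linear scan without changing the result.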
At step 508, crosstalk cancellation application 120 identifies transfer functions 132 specified by the point in the dimensional map 134 based on the position and orientation of the listener 202. The transfer functions 132 are used to configure one or more filters 138 that reduce or eliminate crosstalk from audio that is played back by one or more speakers 160. In other words, the transfer functions 132 are used to model the output of a filter 138 given a particular audio signal that is provided as an input to the filter 138.
At step 510, crosstalk cancellation application 120 configures the one or more filters 138 using the transfer functions 132 identified at step 508. Crosstalk cancellation application 120 applies the transfer functions 132 to the filters 138 that are used to filter audio signals that are in turn provided to one or more speakers 160 for playback within the environment.
At step 512, crosstalk cancellation application 120 generates audio signals for playback based on the filters 138 configured with the identified transfer functions 132. The audio signals are generated based upon an audio source 140 that is being played back by audio processing system 100 within the environment, such as a song or other audio input provided to the audio processing system 100. The audio source 140 includes a left channel and a right channel. Crosstalk cancellation application 120 filters the audio source 140 using the filters 138 that are configured with the transfer functions 132 that were selected based upon the position and orientation of the listener 202. When played back in the environment, the filtered audio signals will arrive at the left and right ear of the listener 202, respectively, with crosstalk being reduced or eliminated.
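For illustration only, the filtering at step 512 can be sketched as a 2x2 matrix of FIR filters applied across the left and right channels, so that each output channel mixes both input channels through its pair of filters. The filter taps below are hypothetical placeholders, not transfer functions taught by the disclosure.

```python
# Illustrative sketch only: a 2x2 matrix of FIR filters applied to the left
# and right channels of an audio source. All filter taps are hypothetical.

def fir(signal, taps):
    """Direct-form FIR filter; the output has the same length as the input."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

def apply_crosstalk_filters(left, right, h_ll, h_lr, h_rl, h_rr):
    """Each output channel combines both input channels through its filter
    pair, so that the crosstalk path is attenuated at the listener's ears."""
    out_left = [a + b for a, b in zip(fir(left, h_ll), fir(right, h_lr))]
    out_right = [a + b for a, b in zip(fir(left, h_rl), fir(right, h_rr))]
    return out_left, out_right

# Hypothetical single-tap filters: pass-through on the direct paths and an
# inverted, attenuated copy on the cross paths.
left, right = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
out_l, out_r = apply_crosstalk_filters(left, right,
                                       h_ll=[1.0], h_lr=[-0.5],
                                       h_rl=[-0.5], h_rr=[1.0])
print(out_l)  # [1.0, -0.5, 0.0]
```

In practice each of the four filters would hold many taps derived from the measured or modeled transfer functions, and the cross-path filters would also include the delay of the corresponding acoustic path.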
At step 514, crosstalk cancellation application 120 outputs the filtered audio signals to one or more speakers 160 associated with audio processing system 100. The one or more speakers 160 play back the filtered audio signals in the environment. The one or more speakers 160 include one or more speakers corresponding to a left channel of the audio processing system 100 and one or more speakers corresponding to a right channel of the audio processing system 100.
At step 516, crosstalk cancellation application 120 determines whether there is a change in the position or orientation of the listener 202. If there is a change in the position or orientation of the listener 202, method 500 returns to step 502, where crosstalk cancellation application 120 determines an updated position and orientation of the listener 202 and identifies new transfer functions 132 with which to update the filters 138. If the position and orientation of the listener 202 are unchanged, the method 500 returns to step 514, where crosstalk cancellation application 120 continues to output audio signals filtered using the transfer functions 132 identified at step 508.
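The change determination at step 516 can be sketched, for illustration only, as a thresholded comparison between the previous and current pose estimates. The pose layout and threshold value below are hypothetical assumptions.

```python
# Illustrative sketch only: deciding whether the listener's tracked position
# or orientation has changed enough to warrant reselecting transfer
# functions. Pose layout and threshold are hypothetical.

def pose_changed(prev_pose, curr_pose, threshold=0.01):
    """Return True when any position or orientation coordinate has moved
    by more than the threshold since the previous update."""
    return any(abs(a - b) > threshold for a, b in zip(prev_pose, curr_pose))

prev = (0.0, 0.0, 1.0, 0.0, 0.0, 0.0)   # x, y, z, yaw, pitch, roll
curr = (0.0, 0.0, 1.0, 3.0, 0.0, 0.0)   # listener has turned the head
print(pose_changed(prev, curr))  # True
```

A small threshold avoids reconfiguring the filters in response to sensor noise while still tracking genuine head movement.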
In sum, a crosstalk cancellation application configures a set of filters that are utilized to perform crosstalk cancellation between the left and right channels of an audio source that is played back by one or more speakers. The crosstalk cancellation application configures the set of filters by selecting transfer functions utilized for each of the filters in the set of filters. The transfer functions are selected by identifying the position and orientation of the user's head within a three-dimensional space using sensor data from one or more sensors. A dimensional map specifies a set of points that are respectively associated with transfer functions that are used to configure the filters. A point in the dimensional map is identified that is closest to the position and orientation of the user's head. The transfer functions associated with the identified point are then utilized to configure each of the filters. The filters, utilizing the identified transfer functions, filter one or more signals corresponding to an audio source that are used to drive one or more speakers to create a sound field. The one or more speakers play back the respective filtered signals. After being altered by the environment, the filtered signals reach the ears of the listener with crosstalk reduced or eliminated.
At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, an audio processing system can select transfer functions that are applied to each audio channel in order to modify the audio output by one or more speakers and improve the performance of crosstalk cancellation. The transfer functions modify the audio input that is then played back by one or more speakers of a playback system. By improving the performance of crosstalk cancellation, spectral distortions caused by user movements are reduced. Additionally, the audio intended to be received by the user's left ear and right ear, respectively, more accurately represents the audio input that the audio processing and playback system outputs. These technical advantages provide one or more technological advancements over prior art approaches.
Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present invention and protection.
The descriptions of the various embodiments have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.
Aspects of the present embodiments may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module,” a “system,” or a “computer.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine. The instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While the preceding is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
This application claims benefit of the United States Provisional Patent Application titled “DYNAMIC CROSSTALK CANCELATION USING RESPONSES INSIDE A VOLUME,” filed Jan. 3, 2024, and having Ser. No. 63/617,141. The subject matter of this related application is hereby incorporated herein by reference.