The described technology is generally directed to transmitting audio signals, and more specifically to systems and methods of delivering audio to a user's ear from one or more transducers spaced apart from the user's ear.
The human auditory system is able to determine a location of sound sources by analyzing acoustic cues in the sound signals reaching the entrance of both ears. Acoustic cues (e.g., an interaural time difference (ITD) and/or an interaural level difference (ILD)) can result from the filtering of the sound signals by the listener's head, torso, and pinnae. This filtering behavior can be described in terms of a user's head-related transfer function (HRTF). Applying an HRTF to a 3D audio signal provides the user with the spatial cues necessary for reproducing spatial audio over headphones worn in, on and/or near the user's ear.
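As an illustration of one such acoustic cue, the ITD of a distant source can be approximated with the classic Woodworth spherical-head model; the model choice, the 8.75 cm default head radius, and the function name below are illustrative assumptions, not part of this disclosure:

```python
import math

def woodworth_itd(azimuth_rad: float, head_radius_m: float = 0.0875,
                  speed_of_sound_m_s: float = 343.0) -> float:
    """Approximate interaural time difference (seconds) for a distant
    source at the given azimuth, via the Woodworth spherical-head
    model: ITD = (r / c) * (sin(theta) + theta)."""
    return (head_radius_m / speed_of_sound_m_s) * (
        math.sin(azimuth_rad) + azimuth_rad)

# A source directly to one side (90 degrees) produces the largest ITD,
# roughly 0.66 ms for the nominal 8.75 cm head radius.
itd_side = woodworth_itd(math.pi / 2)
```

A full HRTF additionally captures level (ILD) and spectral cues contributed by the pinnae and torso, which this simple geometric model omits.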
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. In some embodiments, for example, an audio device (e.g., a headset) configured to be worn on a user's head includes a transducer carried by the audio device that is configured to be disposed at a location proximate the user's head and spaced apart from an ear of the user when the audio device is worn on the user's head. The audio device can further include electronics communicatively coupled to the transducer and configured to apply both a head related transfer function (HRTF) and a transducer position compensation filter to an audio signal to provide sounds having an enhanced frequency response at an entrance to the user's ear when the sounds are transmitted from the transducer toward the user's ear.
The present disclosure describes various devices, systems, and methods of transmitting and/or delivering audio information to a user's ear. An audio signal having a user's head related transfer function (hereinafter HRTF) applied thereto can provide a realistic spatial listening experience when played back over headphones and/or earphones positioned on and/or immediately adjacent the entrance of a user's auditory canal. Playback of audio signals via transducers that are not immediately adjacent the entrance of the user's ear canal (e.g., transducers positioned between about 4 cm and 10 cm from the entrance of the user's ear canal) can result in a significant decrease in audio quality and realism. Reflections caused by physical structures of the user's ear can create distortions in the audio signal. The inventors have recognized that applying a transducer position compensation filter to an audio signal having a user's HRTF applied thereto can mitigate spectral coloring introduced by the off-center position of a transducer relative to the entrance of the user's ear canal.
In some embodiments, a method of delivering audio information to a user's ear includes receiving an audio signal (e.g., a spatial audio signal, a single-channel audio signal, a multichannel audio signal). The method further includes generating a filtered audio signal by applying a filter to the audio signal and transmitting the filtered audio signal toward the user's ear from a transducer carried by a headset configured to be worn on the user's head. The transducer, when the headset is worn on the user's head, is configured to be positioned at a location that is longitudinally spaced apart a distance (e.g., between about 2 cm and about 12 cm, between about 4 cm and about 10 cm, between about 6 cm and about 8 cm, and/or approximately one-half the distance between the user's ear and the user's eye on the same side of the user's head) from an entrance of an auditory canal of the user's ear. Applying the filter comprises altering a portion of the audio signal at a range of frequencies (e.g., between about 1 kilohertz (kHz) and about 10 kHz). The filtered audio signal is configured to provide sounds having a frequency spectrum that is substantially similar to a frequency spectrum of sounds emitted from a transducer positioned at the entrance of the ear canal. In some aspects, the method includes detecting the orientation and/or the distance (e.g., between about 4 cm and about 10 cm) between the transducer and the entrance of the user's auditory canal. In some aspects, the transducer is carried by a headband of the headset and is configured to move along a groove on the underside of the headband such that the transducer is moveable between at least a first position and a second position relative to the entrance of the user's auditory canal. In these aspects, the method further includes modifying the filter when the transducer is moved along the groove from the first position toward the second position.
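The band-limited alteration described above can be sketched as an FFT-domain gain applied only between 1 kHz and 10 kHz. The function name, the uniform gain, and the FFT-based approach are illustrative assumptions; an actual compensation filter would shape the band nonuniformly:

```python
import numpy as np

def alter_band(signal, sample_rate, f_lo=1_000.0, f_hi=10_000.0,
               gain_db=-6.0):
    """Scale only the portion of the spectrum between f_lo and f_hi,
    leaving all frequencies outside that band untouched."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] *= 10.0 ** (gain_db / 20.0)  # dB gain -> linear scale
    return np.fft.irfft(spectrum, n=len(signal))
```

A tone inside the band is attenuated by the specified gain, while a tone outside the band passes through unchanged.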
In some aspects, the method includes generating a modified audio signal by applying a user's HRTF to the audio signal. In some aspects, the method also includes detecting one or more anthropometrics of the user (e.g., head width or head depth), matching one or more anthropometric features of the user with one or more HRTFs in an HRTF database, and adjusting the filter based on the one or more HRTFs matched to the one or more anthropometrics of the user. In some aspects, the method further includes using anthropometric data to construct and/or adjust the filter applied to the modified audio signal.
In some embodiments, a device (e.g., a spatial audio playback device, a headset, an augmented reality or virtual reality device) includes a headset configured to be worn on a user's head and a transducer carried by the headset. The transducer is configured to be spaced apart a distance from an ear of the user when the headset is worn on the user's head. A memory is configured to store executable instructions, and a processor is configured to execute the instructions stored on the memory. The instructions include instructions for providing an audio signal having a frequency spectrum that is substantially similar to a frequency spectrum of sounds emitted from a transducer positioned at an entrance to the user's ear. In some aspects, the distance is equal to about half a distance between the ear and an eye of the user on the same side of the user's head. In some aspects, the distance is between about one-half and one-fourth of a wavelength of sound at 1 kHz. In some aspects, the distance is between about 4 cm and about 10 cm. In some aspects, the transducer is configured to move along a circumference of the headset from a first position toward a second position relative to the user's ear. In some aspects, the device includes a sensor configured to provide signals indicative of movement of the transducer along the headset to the processor. In some aspects, the headset comprises a first headband portion opposite a second headband portion. In some aspects, the first headband portion and the second headband portion are adjustable between a first configuration and at least a second configuration. In these aspects, the instructions for providing the audio signal include instructions for applying a head related transfer function (HRTF) to the audio signal, and the instructions further include instructions for modifying the HRTF when the first headband portion and the second headband portion are adjusted from the first configuration toward the second configuration.
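As a worked check of the wavelength-based bound recited above, assuming a nominal speed of sound in air of 343 m/s, one-fourth to one-half of a wavelength at 1 kHz spans roughly 8.6 cm to 17.2 cm, which overlaps the 4 cm to 10 cm range also recited:

```python
# Nominal speed of sound in air (an assumed value, m/s).
speed_of_sound_m = 343.0
wavelength_1khz = speed_of_sound_m / 1_000.0   # ~0.343 m at 1 kHz
quarter_wave = wavelength_1khz / 4.0           # ~0.086 m (8.6 cm)
half_wave = wavelength_1khz / 2.0              # ~0.172 m (17.2 cm)
```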
In some embodiments, a system (e.g., an augmented reality system) includes an augmented reality device (e.g., a headset) configured to be worn on a user's head and a transducer carried by the augmented reality device. The transducer is configured to be disposed at a location proximate the user's head and spaced apart from an ear of the user when the augmented reality device is worn on the user's head. The system further includes electronics (e.g., system electronics comprising a memory and a processor) communicatively coupled to the transducer and configured to apply both a head related transfer function (HRTF) and a transducer position compensation filter to an audio signal to provide sounds transmitted from the transducer toward the user's ear having a frequency response at an entrance of the user's ear substantially similar to a frequency response of sounds transmitted from a transducer positioned at the entrance of the user's ear. In some aspects, the transducer is positioned on the augmented reality device such that a distance between the transducer and the entrance of the user's ear is between about 4 cm and about 10 cm. In some aspects, the system further includes a first sensor configured to produce a first electrical signal indicative of an anthropometric feature of the user and a second sensor configured to produce a second electrical signal indicative of a distance between the transducer and the entrance of the user's ear. In these aspects, the electronics are further configured to adjust the HRTF based on the first electrical signal and to adjust the transducer position compensation filter based on the second electrical signal.
These and other aspects of the disclosed technology are described in greater detail below. Certain details are set forth in the following description and in
In the Figures, identical reference numbers identify identical, or at least generally similar, elements. To facilitate the discussion of any particular element, the most significant digit or digits of any reference number refers to the Figure in which that element is first introduced. For example, element 110 is first introduced and discussed with reference to
Suitable Device
As discussed in further detail below, the device 110 and the transducer 120a can be configured to receive an audio signal, apply an HRTF to the signal, and further apply a transducer position compensation filter to the signal to deliver spatial audio to the entrance of the ear 105 having enhanced perceptual qualities (e.g., a relatively unmodified frequency response) compared to unfiltered spatial sounds (e.g., spatial sounds not having a transducer position compensation filter applied thereto), thereby providing a more realistic spatial audio experience.
Suitable System
Computer-implemented instructions, data structures, screen displays, and other data under aspects of the technology may be stored or distributed on computer-readable storage media, including magnetically or optically readable computer disks, as microcode on semiconductor memory, nanotechnology memory, organic or optical memory, or other portable and/or non-transitory data storage media. In some embodiments, aspects of the technology may be distributed over the Internet or over other networks (e.g., a Bluetooth network) on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave) over a period of time, or may be provided on any analog or digital network (packet switched, circuit switched, or other scheme).
The system electronics 203 includes several components including memory 203a (e.g., one or more computer readable storage modules, components, devices), one or more processors 203b, communication components 203c (e.g., a wired communication link and/or a wireless communication link (e.g., Bluetooth, Wi-Fi, infrared and/or another wireless radio transmission network)) and a database 203d configured to store data (e.g., equations, filters, an HRTF database) used in the generation of spatial audio. In some embodiments, the system electronics 203 may include additional components not shown in
The device 210 is coupled to the system electronics 203 and includes a visual output (e.g., the display 116 of
The process 300 begins at block 310. At block 320, the process 300 receives one or more audio signals (e.g., spatial audio signals) from an external audio source (e.g., a media player, a mobile device, a computer, one or more remote servers) via a wired or wireless communication link (e.g., the communication component 203c and/or 223 of
At block 330, the process 300 applies a first filter to the received audio signal to generate a modified audio signal that incorporates filtering effects of physical structures of the user's body. The first filter can include, for example, an HRTF, a corresponding HRIR (head-related impulse response), and/or another suitable anatomical transfer function. In some embodiments, the first filter comprises a user's HRTF, which may be stored for example, on the memory 203a and/or in the database 203d (
At block 340, the process 300 applies a second filter such as a transducer position compensation filter to the modified audio signal generated at block 330. As described in more detail below with reference to
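The two filtering stages of blocks 330 and 340 amount to cascaded linear convolutions, which can be sketched as follows. The function name and the use of simple time-domain convolution are illustrative assumptions; a real implementation would typically use block-based FFT convolution:

```python
import numpy as np

def render_spatial(audio, hrir, compensation_ir):
    """Apply the first filter (an HRIR carrying the user's spatial
    cues) and then the second filter (the transducer position
    compensation) as a cascade of linear convolutions."""
    modified = np.convolve(audio, hrir)            # block 330
    return np.convolve(modified, compensation_ir)  # block 340
```

Because convolution is commutative, the two stages may be applied in either order, or pre-combined into a single impulse response.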
At block 350, the filtered audio signal is output to one or more transducers (e.g., the transducer 120a and/or 120b of
The process 400 begins at block 410. At block 420, the process 400 optionally determines a distance, orientation and/or direction (e.g., the distance D of
At block 430, the process 400 can optionally receive anthropometric data (e.g., measurements of one or more user anthropometrics such as head shape, head size, ear position, ear shape and/or ear size) and/or other measurement data from sensors on the headset (e.g., the sensors 222 of
At block 440, the process 400 generates a transducer position compensation filter to be applied to an audio signal such that the audio signal produces sounds having an enhanced frequency response at the user's ear compared to the audio signal of block 330 when the filtered audio signal is transmitted from a transducer positioned near the user's ear (e.g., between about 4 cm and about 10 cm from the user's ear) toward the user's ear. In some embodiments, as discussed below with reference to
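One plausible way to construct such a compensation filter can be sketched as a regularized spectral ratio, under the assumption that two impulse responses are available: a reference measured at the ear-canal entrance and one measured from the off-ear transducer position. Both inputs, the FFT size, and the regularization constant are hypothetical, not specified by this disclosure:

```python
import numpy as np

def compensation_filter(at_ear_ir, off_ear_ir, n_fft=512, eps=1e-8):
    """Return an impulse response that, applied to the off-ear path,
    approximates the reference at-ear response. The eps term
    regularizes bins where the off-ear response is near zero."""
    h_ref = np.fft.rfft(at_ear_ir, n_fft)
    h_off = np.fft.rfft(off_ear_ir, n_fft)
    h_comp = h_ref * np.conj(h_off) / (np.abs(h_off) ** 2 + eps)
    return np.fft.irfft(h_comp, n_fft)
```

For example, if the off-ear path were a pure attenuation and delay, the resulting filter would invert both, restoring the reference gain.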
HRTF Determination
As discussed above with reference to
At block 702, the process 700 instructs the user to assume a certain position or posture. For example, the process 700 instructs the user to look to the left. At block 704, the process 700 collects data with the user in that position. At block 706, the process 700 determines whether the data is valid. For example, if the process 700 was expecting data for a right ear, then the process 700 determines whether the data matches what is expected for a right ear. If not, the step(s) at block 702 may be repeated such that the user is again instructed to assume the correct posture. If the data is valid (block 706 is yes), then the process 700 determines whether there are more positions/postures for the user to assume. Over subsequent iterations, the user might be asked to look straight ahead, look right, and so on. Data could be collected for a wide variety of positions.
When suitable data is collected, the process 700 proceeds to block 710 to determine a HRTF for the user. In some embodiments, there is a library of HRTFs from which to select. These HRTFs may be associated to various physical characteristics of users. Examples include, but are not limited to, head size and width, pinna characteristics, body size. For example, a specific HRTF may be associated with specific measurements related to head size and pinna. The measurements might be a range or a single value. For example, one measurement might be head width, which could be expressed in terms of a single value or a range. The process 700 may then select an HRTF for the user by matching the user's physical characteristics to the physical characteristics associated with the HRTFs in the library. Any technique may be used to determine a best match. In some embodiments, the process 700 interpolates to determine the HRTF for the user. For example, the user's measurements may be between the measurements for two HRTFs, in which case the HRTF for the user may be determined by interpolating the parameters for the two HRTFs.
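The matching and interpolation steps above can be sketched as follows. The feature vectors, parameter vectors, and inverse-distance weighting are all hypothetical stand-ins for an actual HRTF library:

```python
import numpy as np

# Toy library: anthropometric measurements (e.g., head width, head
# depth, pinna height, in cm -- made-up values) paired with HRTF
# parameter vectors.
HRTF_LIBRARY = [
    (np.array([14.0, 18.0, 6.0]), np.array([0.9, 1.2, 0.40])),
    (np.array([16.0, 20.0, 7.0]), np.array([1.1, 1.4, 0.50])),
]

def select_hrtf(user_features):
    """Match the user's measurements to the nearest library entries;
    when the user falls between two entries, linearly interpolate
    their HRTF parameters with inverse-distance weights."""
    feats = np.array([f for f, _ in HRTF_LIBRARY])
    params = np.array([p for _, p in HRTF_LIBRARY])
    dists = np.linalg.norm(feats - user_features, axis=1)
    i, j = np.argsort(dists)[:2]
    if dists[i] == 0.0:              # exact anthropometric match
        return params[i]
    w = dists[j] / (dists[i] + dists[j])
    return w * params[i] + (1.0 - w) * params[j]
```

A user whose measurements fall exactly between two entries receives the average of their parameters; an exact match returns that entry's parameters directly.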
Next, the process 700 may perform additional steps to verify that this HRTF determination is suitable and, if necessary, to select a better HRTF for the user. At block 712, the system plays an audio signal for the user. This may be played through a headset worn by the user (e.g., the device 110 of
At block 718, the process 700 determines the effectiveness of the HRTF. For example, the process 700 determines how accurately the user was able to locate the virtual sounds. The system then determines whether a different HRTF should be determined for this user. If so, the new HRTF is determined by returning to block 710. The process 700 may repeat blocks 712-718 until a satisfactory HRTF is determined.
At block 722, the process 700 stores the user's HRTF. Note that this is not necessarily the last HRTF that was tested in process 700. That is, the process 700 may determine that one of the HRTFs that was tested earlier in the process 700 might be superior. Also note that more than one HRTF could be stored for a given user. For example, process 700 could be repeated for the user wearing glasses and not wearing glasses, with one HRTF stored for each case.
As noted, the process of determining detailed characteristics of the user such that an HRTF may be stored for the user might be done infrequently—perhaps only once.
At block 804, the process 800 selects a suitable HRTF for the user identified at block 802. In one embodiment, an HRTF that was stored for the user by the process 700 is selected. In another embodiment, the process 800 may select the HRTF based on user characteristics collected by the process 700. If desired, these stored detailed user characteristics may be augmented by information that is presently collected. For example, the process 800 may select a different HRTF based on whether the user is wearing, for example, a hat and/or glasses.
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
This is a continuation application of U.S. application Ser. No. 14/720,688, filed on May 22, 2015, entitled “SYSTEMS AND METHODS FOR AUDIO CREATION AND DELIVERY,” which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4731848 | Kendall et al. | Mar 1988 | A |
4864619 | Spates | Sep 1989 | A |
5438698 | Burton | Aug 1995 | A |
5587936 | Levitt et al. | Dec 1996 | A |
5680465 | Boyden | Oct 1997 | A |
5715323 | Walker | Feb 1998 | A |
5881390 | Young | Mar 1999 | A |
6118875 | Møller et al. | Sep 2000 | A |
6144747 | Scofield et al. | Nov 2000 | A |
6243476 | Gardner | Jun 2001 | B1 |
6427018 | Keliiliki | Jul 2002 | B1 |
6631196 | Taenzer et al. | Oct 2003 | B1 |
6631197 | Taenzer | Oct 2003 | B1 |
RE38351 | Iseberg et al. | Dec 2003 | E |
6990205 | Chen | Jan 2006 | B1 |
6996244 | Slaney et al. | Feb 2006 | B1 |
8270616 | Slamka et al. | Sep 2012 | B2 |
8428269 | Brungart et al. | Apr 2013 | B1 |
8545013 | Hwang | Oct 2013 | B2 |
8693703 | Rung | Apr 2014 | B2 |
8750541 | Dong et al. | Jun 2014 | B1 |
8767968 | Flaks et al. | Jul 2014 | B2 |
8768496 | Katz et al. | Jul 2014 | B2 |
8787584 | Nystrom et al. | Jul 2014 | B2 |
20030007648 | Currell | Jan 2003 | A1 |
20030044002 | Yeager | Mar 2003 | A1 |
20030059078 | Downs, Jr. | Mar 2003 | A1 |
20030138107 | Jin et al. | Jul 2003 | A1 |
20040091119 | Duraiswami | May 2004 | A1 |
20040136538 | Cohen | Jul 2004 | A1 |
20060056638 | Schobben | Mar 2006 | A1 |
20060056639 | Ballas | Mar 2006 | A1 |
20060240946 | Wakabayashi | Oct 2006 | A1 |
20070149905 | Hanna | Jun 2007 | A1 |
20070195963 | Ko et al. | Aug 2007 | A1 |
20070253587 | Ostrowski | Nov 2007 | A1 |
20080019554 | Krywko | Jan 2008 | A1 |
20080044052 | Whipple | Feb 2008 | A1 |
20080107287 | Beard | May 2008 | A1 |
20080199035 | Flechel et al. | Aug 2008 | A1 |
20090046864 | Mahabub et al. | Feb 2009 | A1 |
20090238371 | Rumsey et al. | Sep 2009 | A1 |
20100008528 | Isvan | Jan 2010 | A1 |
20100061580 | Tiscareno et al. | Mar 2010 | A1 |
20110200215 | Apfel | Aug 2011 | A1 |
20110222700 | Bhandari | Sep 2011 | A1 |
20110251489 | Zhang | Oct 2011 | A1 |
20120083717 | Alleman | Apr 2012 | A1 |
20120155689 | Milodzikowski et al. | Jun 2012 | A1 |
20120237041 | Pohle | Sep 2012 | A1 |
20120328107 | Nystrom et al. | Dec 2012 | A1 |
20130022214 | Dickins et al. | Jan 2013 | A1 |
20130089225 | Tsai | Apr 2013 | A1 |
20130169767 | Kim | Jul 2013 | A1 |
20130169878 | Kim | Jul 2013 | A1 |
20130177166 | Agevik et al. | Jul 2013 | A1 |
20130178967 | Mentz | Jul 2013 | A1 |
20130194107 | Nagata | Aug 2013 | A1 |
20130208900 | Vincent et al. | Aug 2013 | A1 |
20130259243 | Herre et al. | Oct 2013 | A1 |
20130272546 | Besgen, Sr. | Oct 2013 | A1 |
20140079212 | Sako | Mar 2014 | A1 |
20140123008 | Goldstein | May 2014 | A1 |
20140133658 | Mentz et al. | May 2014 | A1 |
20140159995 | Adams et al. | Jun 2014 | A1 |
20140221779 | Schoonover | Aug 2014 | A1 |
20140241540 | Hodges et al. | Aug 2014 | A1 |
20140321661 | Alao | Oct 2014 | A1 |
20140334626 | Lee et al. | Nov 2014 | A1 |
20140355792 | Nabata et al. | Dec 2014 | A1 |
20140369519 | Leschka et al. | Dec 2014 | A1 |
20150036864 | Ozasa et al. | Feb 2015 | A1 |
20150150753 | Racette | Jun 2015 | A1 |
20150156579 | Lowry | Jun 2015 | A1 |
20150293655 | Tan | Oct 2015 | A1 |
20150304761 | Montazemi et al. | Oct 2015 | A1 |
20150312694 | Bilinski | Oct 2015 | A1 |
20150326987 | Marrin | Nov 2015 | A1 |
20160269849 | Riggs | Sep 2016 | A1 |
20160338636 | IDrees | Nov 2016 | A1 |
20170208413 | Bilinski | Jul 2017 | A1 |
Number | Date | Country |
---|---|---|
2611216 | Jul 2013 | EP |
2013111038 | Aug 2013 | WO |
Entry |
---|
Thomas, Mark R. P., "Application of Measured Directivity Patterns to Acoustic Array Processing", Retrieved From «http://www.aes-media.org/sections/uk/meetings/AESUK_lecture_1405.pdf», Jan. 2011, 48 Pages. |
“Non-Negative Matrix Factorization”, Retrieved from «https://en.wikipedia.org/wiki/Non-negative_matrix_factorization», Mar. 26, 2014, 11 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 14/720,688”, dated Jul. 28, 2016, 13 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 14/720,688”, dated Nov. 15, 2016, 9 Pages. |
Ahrens, et al., “HRTF Magnitude Modeling Using a Non-Regularized Least-Squares Fit of Spherical Harmonics Coefficients on Incomplete Data”, In Signal & Information Processing Association Annual Summit and Conference, Dec. 3, 2012, 5 Pages. |
Algazi, et al., “The CIPIC HRTF Database”, In Proceedings of IEEE Workshop on the Applications of Signal Processing to Audio and Acoustics, Oct. 21, 2001, pp. 99-102. |
Andreopoulou, Areti, “Head-Related Transfer Function Database Matching Based on Sparse Impulse Response Measurements”, In Doctoral Dissertation, New York University, Jan. 2013, 239 Pages. |
Bilinski, Piotr, “HRTF Personalization using Anthropometric Features”, Retrieved from «https://web.archive.org/web/20150921053235/http://research.microsoft.com/apps/video/dl.aspx?id=201707», Sep. 27, 2013, 1 Page. |
Bosun, et al., “Head-Related Transfer Function Database and its Analyses”, In Proceedings of Science in China Series G: Physics, Mechanics & Astronomy, vol. 50, Issue 3, Jun. 2007, 14 Pages. |
Cheng, et al., “Introduction to Head-Related Transfer Functions (HRTFs): Representations of HRTFs in Time, Frequency, and Space”, In Journal of the Audio Engineering Society, vol. 49, Issue 4, Apr. 2001, pp. 231-249. |
Donoho, David L., "For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-Norm Solution is also the Sparsest Solution", In Technical Report No. 2004-9, Jul. 2004, 30 Pages. |
Duda, et al., "Range Dependence of the Response of a Spherical Head Model", In Journal of Acoustical Society of America, vol. 104, Issue 5, Nov. 1998, pp. 3048-3058. |
Fink et al., “Tuning Principal Component Weights to Individualize HRTFS”, In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 2012, 4 Pages. |
Gamper, Hannes, “Head-Related Transfer Function Interpolation in Azimuth, Elevation, and Distance”, In Journal of Acoustical Society of America, vol. 134, Issue 6, Dec. 2013, pp. EL547-EL553. |
Grindlay, et al., "A Multilinear Approach to HRTF Personalization", In Proceedings of 32nd International Conference on Acoustics, Speech, and Signal Processing, Apr. 2007, 4 Pages. |
Haraszy, et al., “Improved Head Related Transfer Function Generation and Testing for Acoustic Virtual Reality Development”, In Proceedings of the 14th WSEAS International Conference on Systems: Part of the 14th WSEAS CSCC Multi Conference, vol. 2, Jul. 2010, 6 Pages. |
Hastie, et al., “The Elements of Statistical Learning Data Mining, Inference, and Prediction”, In Springer—Statistics Series, Second Edition, Sep. 15, 2009, 764 Pages. |
Hertsens, Tyll, “Headphone Measurement Procedures—Frequency Response”, Retrieved From «https://web.archive.org/web/20110430001913/http://www.innerfidelity.com/content/headphone-measurement-proceedures-frequency-response», Apr. 15, 2011, 11 Pages. |
Hoerl, et al., “Ridge Regression Biased Estimation for Nonorthogonal Problems”, In Journal of Technometrics, vol. 42, Issue 01, Feb. 2000, 7 Pages. |
Hu, et al., “HRTF Personalization Based on Artificial Neural Network in Individual Virtual Auditory Space”, In the Proceedings of the Journal of Applied Acoustics, vol. 69, Issue 2, Feb. 2009, pp. 163-172. |
Huang, et al., “Sparse Representation for Signal Classification”, In Proceedings of Twenty-First Annual Conference on Neural Information Processing Systems, Dec. 2007, 8 Pages. |
Jenison, “Synthesis of Virtual Motion in 3D Auditory Space”, In Proceedings of IEEE 20th Annual International Conference of the Engineering in Medicine and Biology Society, vol. 3, Oct. 29, 1998, pp. 1-5. |
Jot, Jean-Marc, “Efficient Models for Reverberation and Distance Rendering in Computer Music and Virtual Audio Reality”, In Proceedings of International Computer Music Conference, Sep. 1997, 8 Pages. |
Kohavi, Ron, "A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection", In Proceedings of the 14th International Joint Conference on Artificial Intelligence, vol. 2, Aug. 1995, 7 Pages. |
Kukreja, et al., "A Least Absolute Shrinkage and Selection Operator (Lasso) for Nonlinear System Identification", In IFAC Proceedings, vol. 39, Jan. 1, 2006, 6 Pages. |
Lemaire, et al., “Individualized HRTFs From Few Measurements: a Statistical Learning Approach”, In Proceedings of IEEE International Joint Conference on Neural Networks (IJCNN), Jul. 31, 2005, pp. 2041-2046. |
Li, et al. “HRTF Personalization Modeling Based on RBF Neural Network”, In Proceedings of International Conference on Acoustics, Speech and Signal Proceeding, May 2013, 4 Pages. |
Luo, et al., “Gaussian Process Data Fusion for the Heterogeneous HRTF Datasets”, In Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 20, 2013, 4 Pages. |
Masiero, et al., “Perceptually Robust Headphone Equalization for Binaural Reproduction”, Presented at the 130th Convention of Audio Engineering Society, May 13, 2011, 7 Pages. |
Mohan, et al., “Using Computer Vision to Generate Customized Spatial Audio”, In Proceedings of the International Conference on Multimedia and Expo, vol. 3, Jul. 6, 2003, 4 Pages. |
Otani, et al., "Numerical Study on Source-Distance Dependency of Head-Related Transfer Functions", In Journal of Acoustical Society of America, vol. 125, Issue 5, May 2009, pp. 3253-3261. |
“International Search Report and Written Opinion Issued in PCT Application No. PCT/US2016/029275”, dated Aug. 8, 2016, 16 Pages. |
Rothbucher, et al., “Measuring Anthropometric Data for HRTF Personalization”, In Sixth International Conference on Signal-Image Technology and Internet Based Systems, Dec. 15, 2010, 5 Pages. |
Schonstein, et al., “HRTF Selection for Binaural Synthesis from a Database Using Morphological Parameters”, In Proceedings of 20th International Congress on Acoustics, Aug. 23, 2010, 6 Pages. |
Simonite, Tom, “Microsoft's “3-D Audio” Gives Virtual Objects a Voice”, In MIT Technology Review, Jun. 4, 2014, 2 Pages. |
Spagnol, et al., “On the Relation Between Pinna Reflection Patterns and Head-Related Transfer Function Features”, In Proceedings of IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, Issue 3, Mar. 2013, pp. 508-519. |
Spors, et al., "Interpolation and Range Extrapolation of Head-Related Transfer Functions Using Virtual Local Wave Field Synthesis", In 130th Convention of Audio Engineering Society, May 13, 2011, 16 Pages. |
Wagner, et al., “Towards a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation”, In Proceedings of IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, Issue 2, Feb. 2012, 15 Pages. |
Wahab, et al., “Improved Method for Individualization of Head-Related Transfer Functions on Horizontal Plane Using Reduced Number of Anthropometric Measurements”, In Journal of Telecommunications, vol. 2, Issue 2, May 27, 2010, 11 Pages. |
Weissgerber, et al., "Headphone Reproduction Via Loudspeakers Using Inverse HRTF-Filters", In Proceedings of NAG/DAGA, Jan. 2009, 4 Pages. |
Wright, et al., “Robust Face Recognition via Sparse Representation”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, Issue 2, Feb. 2009, pp. 210-227. |
Zotkin, et al., “HRTF Personalization Using Anthropometric Measurements”, In the Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 19, 2003, 4 Pages. |
Number | Date | Country | |
---|---|---|---|
20170156017 A1 | Jun 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14720688 | May 2015 | US |
Child | 15428965 | US |