The disclosure relates to an audio system, and, more particularly, an audio system in a motor vehicle.
How music or other audio sounds to a listener in a passenger compartment or cabin of a motor vehicle depends on the exact location of the listener's head/ears within the cabin. However, there are no known audio systems that take ear location into consideration when determining audio settings.
The present invention may use a driver monitor camera to identify the location of a vehicle occupant's ears. The vehicle's sound system may then use that ear location information to customize sound settings, including equalization, phase, speaker directionality, and sound isolation.
In one embodiment, the invention comprises an audio arrangement for a motor vehicle, including a camera capturing images of a driver of the motor vehicle. A processor is communicatively coupled to the camera and determines, based on the captured images, locations of ears of the driver. An audio system is communicatively coupled to the processor and adjusts a parameter of an audio signal dependent upon the determined locations of the ears of the driver. A loudspeaker is communicatively coupled to the audio system and emits sounds based on the audio signal.
In another embodiment, the invention comprises an automotive audio method, including capturing images of an occupant of a passenger compartment of a motor vehicle while the occupant is in the passenger compartment. Locations of ears of the occupant are determined based on the captured images. A parameter of an audio signal is adjusted dependent upon the determined locations of the ears of the occupant. Sounds are emitted based on the audio signal.
In yet another embodiment, the invention comprises an audio arrangement for a motor vehicle, including at least one camera capturing images of a plurality of occupants of the motor vehicle. A processor is communicatively coupled to the at least one camera and determines, based on the captured images, locations of ears of each of the occupants. An audio system is communicatively coupled to the processor and adjusts a parameter of an audio signal dependent upon the determined locations of the ears of the occupants. A loudspeaker is communicatively coupled to the audio system and emits sounds based on the audio signal.
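By way of a non-limiting, hedged sketch only, the data flow among these components (camera, processor, audio system, loudspeaker) may be illustrated in Python as follows; the class and method names (e.g., DriverCamera, EarLocator, SmartAudioSystem) are hypothetical placeholders and not elements of any particular implementation.

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical types used only to illustrate the data flow described above.
Point3D = Tuple[float, float, float]  # (x, y, z) in cabin coordinates, meters


@dataclass
class EarLocations:
    left: Point3D
    right: Point3D


class DriverCamera:
    def capture(self):
        """Return the latest image frame of the driver (placeholder)."""
        raise NotImplementedError


class EarLocator:
    def locate(self, frame) -> EarLocations:
        """Estimate ear positions from a captured image frame (placeholder)."""
        raise NotImplementedError


class SmartAudioSystem:
    def adjust(self, ears: EarLocations) -> None:
        """Adjust equalization, phase, delay, etc., for the given ear positions."""


def arrangement_step(camera: DriverCamera, locator: EarLocator,
                     audio: SmartAudioSystem) -> None:
    # Camera -> processor -> audio system, as in the arrangements described above.
    frame = camera.capture()
    ears = locator.locate(frame)
    audio.adjust(ears)
```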
An advantage of the present invention is that it may normalize audio quality across the range of positions that an occupant's ears may occupy within the passenger compartment.
A better understanding of the present invention will be had upon reference to the following description in conjunction with the accompanying drawings.
Based upon the ear location data received from electronic system 14, smart audio system 16 may adjust sound settings, equalization, phase, etc., of the sound emitted by loudspeaker 18. Smart audio system 16 may make these adjustments in order to improve the sound quality, and/or to make the sound more aesthetically pleasing to the driver. As the driver moves his head and ears, the adjustments to audio signal parameters may be performed at least once per second in order to keep up with the changing locations of the driver's ears.
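Purely as an illustrative sketch, and not a required implementation, an update loop that refreshes the audio parameters at least once per second might be structured as follows; the get_ear_locations and apply_audio_settings callables are assumed placeholders for the processor and smart audio system interfaces.

```python
import time

UPDATE_PERIOD_S = 1.0  # re-adjust at least once per second, as described above


def tracking_loop(get_ear_locations, apply_audio_settings, stop_event):
    """Periodically re-adjust audio parameters as the driver's ears move."""
    while not stop_event.is_set():
        start = time.monotonic()
        ears = get_ear_locations()       # latest ear positions from the processor
        if ears is not None:
            apply_audio_settings(ears)   # equalization, phase, etc.
        # Sleep out the remainder of the period so updates occur about once per second.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, UPDATE_PERIOD_S - elapsed))
```

Here, stop_event may be, for example, a threading.Event that ends the loop when the vehicle is shut down.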
If there are multiple loudspeakers within the passenger compartment of the vehicle, the loudspeakers may be located at spaced-apart locations and with different orientations. Thus, the sound emitted by each of the loudspeakers may be adjusted differently based upon each speaker's location and orientation relative to the driver's ears. It is also possible for the same or a similar adjustment to be made to the sound emitted by each speaker.
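As one hedged example of deriving such per-speaker adjustments from geometry, a time delay and level trim could be computed from each loudspeaker's distance to the midpoint between the ears; the alignment strategy and the speed-of-sound constant below are assumptions for illustration only.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C


def per_speaker_adjustments(ear_midpoint, speaker_positions):
    """Return a (delay_seconds, gain) pair per loudspeaker.

    ear_midpoint: (x, y, z) midpoint between the ears, in meters.
    speaker_positions: list of (x, y, z) loudspeaker locations, in meters.
    """
    distances = [math.dist(ear_midpoint, pos) for pos in speaker_positions]
    farthest = max(distances)
    adjustments = []
    for d in distances:
        # Delay nearer speakers so all arrivals coincide with the farthest one.
        delay = (farthest - d) / SPEED_OF_SOUND_M_S
        # Trim nearer speakers to roughly equalize level at the ears.
        gain = d / farthest
        adjustments.append((delay, gain))
    return adjustments
```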
In another embodiment, one or more cameras capture images of all of the passengers in the vehicle, and the processor determines therefrom the ear positions of all of the passengers. The smart audio system may then adjust sound settings, equalization, phase, etc., of the sound emitted by one or more loudspeakers in order to optimize the sound quality for the passengers as a group, and/or to make the sound more aesthetically pleasing to the passengers as a group.
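As a hedged illustration of optimizing for the passengers as a group, the adjustments could, for example, target the centroid of all detected ear positions rather than a single occupant's ears; the data layout below is an assumption.

```python
def group_listening_point(ear_positions):
    """Average all occupants' ear positions into a single target point.

    ear_positions: list of (x, y, z) tuples, typically two per detected occupant.
    Returns the centroid, or None if no ears were detected.
    """
    if not ear_positions:
        return None
    n = len(ear_positions)
    return (sum(p[0] for p in ear_positions) / n,
            sum(p[1] for p in ear_positions) / n,
            sum(p[2] for p in ear_positions) / n)
```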
Next, in step 204, locations of ears of the occupant are determined, based on the captured images. For example, electronic system 14 may analyze the captured images and determine therefrom the locations of the ears of the occupant. If the occupant's ears are not visible in the images, then the locations of the ears may be estimated based on the locations of other facial features, such as the eyes, nose and mouth.
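By way of a hedged sketch only, the fallback from visible ears to other facial features could be implemented as follows; the landmark dictionary and the eye-to-ear offset below are assumptions, not values taken from the disclosure.

```python
def estimate_ear_positions(landmarks):
    """Estimate (left_ear, right_ear) positions from facial landmarks.

    landmarks: dict mapping names such as 'left_ear', 'right_ear',
    'left_eye', 'right_eye' to (x, y, z) points in cabin coordinates;
    a missing key means the feature was not visible in the captured image.
    """
    # Crude, assumed average offset from an eye back and down to the ear, in meters.
    EYE_TO_EAR_OFFSET = (0.00, -0.03, -0.08)

    def offset_from(point):
        return tuple(p + o for p, o in zip(point, EYE_TO_EAR_OFFSET))

    left = landmarks.get('left_ear')
    right = landmarks.get('right_ear')
    # Fall back to eye-based estimates when an ear is occluded.
    if left is None and 'left_eye' in landmarks:
        left = offset_from(landmarks['left_eye'])
    if right is None and 'right_eye' in landmarks:
        right = offset_from(landmarks['right_eye'])
    return left, right
```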
In a next step 206, a parameter of an audio signal is adjusted dependent upon the determined locations of the ears of the occupant. For example, based upon the ear location data received from electronic system 14, smart audio system 16 may adjust sound settings, equalization, phase, etc., of the audio signal produced by audio system 16.
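As a minimal, hedged sketch of one such parameter adjustment (the disclosure does not prescribe a particular filter), a per-channel delay and gain, such as those computed in the earlier geometry example, could be applied to a block of samples as follows.

```python
def adjust_channel(samples, sample_rate_hz, delay_s, gain):
    """Apply a delay and a linear gain to one loudspeaker channel.

    samples: list of float audio samples for the channel.
    delay_s: delay in seconds (e.g., to align arrival times at the ears).
    gain: linear gain factor, e.g., in the range [0, 1].
    """
    delay_samples = int(round(delay_s * sample_rate_hz))
    # Prepend silence to delay the channel, then scale its level.
    delayed = [0.0] * delay_samples + [s * gain for s in samples]
    return delayed[:len(samples)]  # keep the block length unchanged
```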
In a final step 208, sounds are emitted based on the audio signal. For example, loudspeaker 18 may emit sounds based on the adjusted audio signal produced by audio system 16.

The invention has been described as detecting the positions of a driver's ears. However, the invention may detect the positions of the ears of any occupant of the vehicle other than the driver. For example, in the case of an autonomously driven vehicle that has no human driver, the positions of the ears of any or all human passengers of the vehicle may be detected.
The invention has been described as determining the locations of the driver's ears in the captured images. However, it is within the scope of the invention to determine the locations of the driver's ears indirectly based on the location of some other feature of the driver's head, such as his eyes.
The foregoing description may refer to “motor vehicle”, “automobile”, “automotive”, or similar expressions. It is to be understood that these terms are not intended to limit the invention to any particular type of transportation vehicle. Rather, the invention may be applied to any type of transportation vehicle whether traveling by air, water, or ground, such as airplanes, boats, etc.
The foregoing detailed description is given primarily for clearness of understanding, and no unnecessary limitations are to be understood therefrom, as modifications may be made by those skilled in the art upon reading this disclosure without departing from the spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 62/428,702, filed on Dec. 1, 2016, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.
Number | Date | Country
---|---|---
62428702 | Dec. 1, 2016 | US