This invention relates generally to aircraft and more particularly to providing audio feedback regarding the operation of an aircraft.
Aircraft have seen enormous advances in technology over the last century. For example, in just the recent past, aircraft engines, pumps, and other actuators have become quieter, autopilots have become smoother, and automation has taken a greater role in aircraft control. But, these technological advances have also resulted in pilots becoming increasingly removed from the direct control of the aircraft. Further, these advances have resulted in pilots having less direct feedback about the operation of the aircraft systems and flight control actions.
An example of this reduced feedback is the throttle lever on the Airbus A320 aircraft, which remains in a fixed position while the autothrottle system issues throttle commands to the engines. Thus, the only indication the pilots have of the autothrottle system's actions is the movement of the N1 engine indicator, which shows the turbine engine rotation speed.
Further, noise from air flow over the cockpit prevents the crew from hearing the engines, and the autopilot and autothrottle systems are smooth enough that it is often difficult for the pilot to detect aircraft maneuvers.
Without a system that gives better feedback to the pilots, all of the above factors can combine to cause pilots to lose track of the operation of the aircraft's automated systems with potentially disastrous results.
The present invention provides solutions to the above-described shortcomings in conventional approaches, as well as other advantages apparent from the description below.
The present invention is a method, system, and apparatus for providing audio feedback regarding the operation of an aircraft. In one aspect, microphones are placed next to sound sources, which could be components of the aircraft. Audio inputs are received from the microphones and analyzed based on a psycho-acoustic model to provide settings, such as level, pan, and equalization, to an automatic mixer. The automatic mixer mixes the sounds based on the settings and provides audio output to the pilot of the aircraft. The pilot can then use the audio output to more effectively monitor the operations of the aircraft components, which might otherwise be difficult or impossible to hear.
In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings (where like numbers represent like elements) that form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, but other embodiments may be utilized and logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
The present invention is a method, system, and apparatus for providing audio feedback regarding the operation of an aircraft. In one aspect, microphones are placed next to sound sources, which could be components of the aircraft. Audio inputs are received from the microphones and analyzed based on a psycho-acoustic model to provide settings, such as level, pan, and equalization, to an automatic mixer. The automatic mixer mixes the sounds based on the settings and provides audio output to the pilot of the aircraft via a speaker or headphones. The purpose of the mixing functions, either automatic or manual, is to balance all of the auditory inputs, so that the pilot is able to acoustically monitor the operation of all of the sound sources simultaneously, which might otherwise be difficult or impossible to hear.
Airframe 105 is that portion of aircraft 100 to which other aircraft components are affixed, either directly or indirectly. For example, wings 110 of aircraft 100 are affixed directly to airframe 105, but flaps 115 are affixed directly to wings 110 and indirectly to airframe 105 through wings 110.
The configuration depicted in
Aircraft 100 contains airframe 105 to which aircraft components are affixed, either directly or indirectly, and audio feedback system 242. Aircraft components include engines 120 (one or many), flaps 115, brakes 215, gear 220, pumps 225, and cockpit 240. Air rushing past airframe 105 produces airframe noise 235.
Audio feedback system 242 includes microphones, such as microphones 245 and 250, adjacent to the various aircraft components. Audio feedback system 242 also includes cancellation function 255, frequency and amplitude analysis system 260, psycho-acoustic model 261, automatic mixer 265, speakers 270, headsets 275, level, pan, and equalization controls 280, manual mixer 285, and display 290.
The microphones, such as left-channel microphone 245 and right-channel microphone 250, are placed near the various aircraft components in order to feed audio input signals to frequency and amplitude analysis system 260. In this example, left- and right-channel microphones are illustrated for each aircraft component except for airframe noise 235 coming from airframe 105 and cockpit 240, each of which has only one microphone. But, any number of microphones per aircraft component could be used.
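For illustration only, the following minimal sketch shows one way the component-to-microphone arrangement described above could be represented in software. The SoundSource name and the channel assignments are assumptions for this sketch, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SoundSource:
    name: str
    channels: list  # microphone channel indices feeding analysis system 260

# Hypothetical channel layout mirroring the arrangement described above.
sources = [
    SoundSource("engines", channels=[0, 1]),        # left/right pair
    SoundSource("flaps", channels=[2, 3]),
    SoundSource("brakes", channels=[4, 5]),
    SoundSource("gear", channels=[6, 7]),
    SoundSource("pumps", channels=[8, 9]),
    SoundSource("airframe_noise", channels=[10]),   # single microphone
    SoundSource("cockpit", channels=[11]),          # single microphone
]
```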
Analysis system 260 determines how the various audio inputs from the microphones can be best balanced so the pilot can clearly distinguish each one independently. Analysis system 260 uses psycho-acoustic model of human auditory perception 261 to predict which signals will be inaudible due to masking.
This prediction shares some similarities with the MP3 (MPEG Audio Layer-3) music compression algorithm, which analyzes the spectral content of musical signals and, based on the combinations of closely located frequencies and relative levels, determines which sounds are most likely to be masked by others. MPEG is an acronym for Moving Picture Experts Group, a working group of ISO (International Organization for Standardization). MPEG also refers to the family of digital compression standards and file formats developed by the group.
The MP3 algorithm does its analysis using a psycho-acoustic model of how sensitive the human ear is to sounds across the frequency spectrum, how close in frequency content two competing sounds are, and whether the level differences would cause the louder sound to mask the quieter one.
But, while the MP3 algorithm uses its psycho-acoustic model to discard content that it predicts to be imperceptible, analysis system 260 instead uses psycho-acoustic model 261 to identify audio signals that the pilot would not hear in the present aural environment and to adjust the relative levels, the spatial localization (left/right pan), and the equalization of the competing signals so that all of the signals surpass the masking threshold. Analysis system 260 applies an iterative process to reduce the level of louder signals, enhance the level of quieter signals, apply equalization to remove redundant signal content in frequency ranges that compete with other signals, and pan signals to unique positions in the aural field so the ears can localize them. The result of this process is a set of recommended level, pan, and equalization settings that balances the signals so that each one is clearly audible in the presence of the others.
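As an illustration of the kind of masking test and iterative balancing described above, the following sketch pairs a crude masking predicate with a level-only balancing loop. The 100 Hz bandwidth, 10 dB margin, 1 dB step, and all function names are illustrative assumptions, not values from the patent or from the MP3 psycho-acoustic model; pan and equalization adjustments are omitted for brevity.

```python
def is_masked(freq_a_hz, level_a_db, freq_b_hz, level_b_db,
              bandwidth_hz=100.0, margin_db=10.0):
    """Crude masking test: signal A is predicted masked by signal B when the
    two are close in frequency and B is sufficiently louder."""
    close_in_frequency = abs(freq_a_hz - freq_b_hz) < bandwidth_hz
    much_quieter = level_b_db - level_a_db > margin_db
    return close_in_frequency and much_quieter

def balance_levels(signals, step_db=1.0, max_iterations=50):
    """Iteratively nudge per-source gains (in dB) until no source is predicted
    masked. `signals` maps source name -> (frequency_hz, level_db)."""
    gains = {name: 0.0 for name in signals}
    for _ in range(max_iterations):
        changed = False
        for a, (fa, la) in signals.items():
            for b, (fb, lb) in signals.items():
                if a == b:
                    continue
                if is_masked(fa, la + gains[a], fb, lb + gains[b]):
                    gains[a] += step_db      # enhance the quieter signal
                    gains[b] -= step_db      # reduce the louder signal
                    changed = True
        if not changed:
            break
    return gains

# Example with illustrative frequencies and levels (not aircraft data):
recommended = balance_levels({"engines": (200.0, 90.0), "pumps": (230.0, 60.0)})
```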
The level setting adjusts the volume level of the sound signal.
The pan setting adjusts apparent spatial localization of the left and right channels by adjusting level, phase, and reverberation. If a sound is emanating from the left, the left ear hears more of the direct sound than the right ear, and hears the direct sound slightly earlier than the right ear. The brain uses this difference in phase, based on the time the signal reaches each ear, to determine spatial localization. The brain also uses the higher level of direct sound perceived by the left ear and the higher proportion of reflected sound perceived by the right ear to determine spatial localization. The pan function adjusts signal levels, phase, and reverberation to emulate the acoustic properties of natural sounds, in order to localize the sound for the pilot.
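The sketch below illustrates this pan concept using a constant-power gain law for the interaural level difference and a small delay on the far ear for the time-of-arrival (phase) difference. The 0.7 ms maximum delay is a typical textbook figure, and the function name is an assumption; reverberation is omitted for brevity.

```python
import numpy as np

def pan_stereo(mono, pan, fs=48000, max_itd_s=0.0007):
    """Place a mono signal in the stereo field. `pan` runs from -1.0 (full
    left) to +1.0 (full right). Constant-power gains emulate the level
    difference; a short delay on the far ear emulates the time difference."""
    angle = (pan + 1.0) * np.pi / 4.0          # 0 .. pi/2
    left_gain, right_gain = np.cos(angle), np.sin(angle)
    delay = int(abs(pan) * max_itd_s * fs)     # far-ear delay in samples
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    if pan >= 0:                               # sound on the right: delay the left ear
        return left_gain * delayed, right_gain * mono
    return left_gain * mono, right_gain * delayed
```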
The equalization setting further separates out the sound inputs in the frequency domain by selectively boosting and dampening certain frequencies. For example, the engine sounds are likely to have a low fundamental frequency and a broad spectrum, which would mask out many other sounds. But, the pilot still needs to hear the engines in order to perceive the increasing or decreasing engine thrust and to hear potentially hazardous engine vibration. Equalization dampens out the portion of engine sounds that would mask other sounds while still keeping the engine sounds that impart information about thrust and vibration. For example, engine sounds near 200 Hz are dampened because they would likely mask out sounds from other components, such as the pumps.
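As an illustration of this kind of selective damping, the sketch below applies a notch filter centered at 200 Hz, the example frequency mentioned above, so a strong engine component no longer masks quieter sources such as the pumps. It assumes NumPy and SciPy are available; the Q value and function name are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

def dampen_band(signal, fs=48000, center_hz=200.0, q=2.0):
    """Attenuate a narrow band of the input around center_hz."""
    b, a = iirnotch(center_hz, q, fs=fs)
    return lfilter(b, a, signal)

# Example: a 200 Hz engine tone mixed with a quieter 900 Hz pump tone.
fs = 48000
t = np.arange(fs) / fs
engine_plus_pump = np.sin(2 * np.pi * 200 * t) + 0.1 * np.sin(2 * np.pi * 900 * t)
equalized = dampen_band(engine_plus_pump, fs=fs)
```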
Analysis system 260 then provides these recommended settings to automatic mixer 265, manual mixer 285, and display 290.
Psycho-acoustic model 261 specifies a way to separate sounds from each other and contains a list of which sound components are likely to be masked by others. Psycho-acoustic model 261 accounts for the factors that determine how we perceive the sounds we hear (a brief illustrative sketch combining these factors follows the list below):
1) The audio stimulus;
2) The ear's physical capability to perceive the audio stimulus, that is, the ear's ability to distinguish frequency and amplitude and localize a sound in space in relationship to the two ears; and
3) The psychological aspects of sound perception. For example, certain sounds are easier to hear than others; certain sounds, especially monotonous ones, are fatiguing; and humans more readily perceive a changing sound than a constant one.
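A minimal sketch of how these three factors might be combined into a single per-source audibility estimate appears below. The structure, field names, and the 6 dB bonus for changing sounds are illustrative assumptions, not the actual content of psycho-acoustic model 261.

```python
from dataclasses import dataclass

@dataclass
class PerceptualEstimate:
    """Hypothetical per-source record combining the three factors above."""
    stimulus_level_db: float        # 1) the audio stimulus
    above_hearing_threshold: bool   # 2) the ear's ability to perceive it
    is_changing: bool               # 3) changing sounds are more readily perceived

    def salience(self):
        if not self.above_hearing_threshold:
            return 0.0
        bonus = 6.0 if self.is_changing else 0.0   # illustrative 6 dB bonus
        return self.stimulus_level_db + bonus
```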
Automatic mixer 265 adjusts the individual level, pan, and equalization of each audio input based on the recommended settings from analysis system 260.
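For illustration, the sketch below applies per-source level and pan settings and sums the result into a stereo output, in the spirit of automatic mixer 265. Equalization is omitted for brevity, and all names are hypothetical.

```python
import numpy as np

def mix(sources, settings):
    """Apply each source's recommended level (dB) and pan (-1 left .. +1
    right), then sum into a stereo (left, right) output."""
    n = min(len(s) for s in sources.values())
    left = np.zeros(n)
    right = np.zeros(n)
    for name, samples in sources.items():
        level_db, pan = settings[name]
        gain = 10.0 ** (level_db / 20.0)
        angle = (pan + 1.0) * np.pi / 4.0        # constant-power pan law
        left += gain * np.cos(angle) * samples[:n]
        right += gain * np.sin(angle) * samples[:n]
    return left, right
```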
Display 290 has a set of indicators that display the operations of analysis system 260, automatic mixer 265, and manual mixer 285. Display 290 shows visual indications of the source inputs plus the levels, panning, and equalization as they are being applied by the automatic and manual mixers.
Besides displaying the recommended settings, display 290 also provides a switching control that allows pilots to decide which of automatic mixer 265 and manual mixer 285 will drive the acoustic output (headsets 275 or speakers 270). This is because pilots may want to simply modify the settings suggested by frequency and amplitude analysis system 260 or completely bypass automatic mixer 265 and apply only manual settings via controls 280. By obtaining information directly from analysis system 260 instead of from automatic mixer 265, pilots can return to the recommendations from analysis system 260 at any time (this allows pilots to recover after over-adjusting the input parameters and finding that they cannot balance the sounds properly), or simply turn off manual mixer 285 and revert to automatic mixer 265.
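The switching and revert behavior described above might be organized as in the following sketch; the class and method names are hypothetical.

```python
class MixSelector:
    """Select between automatic and manual settings, with the ability to
    reset manual settings to the analysis system's recommendations."""

    def __init__(self, recommended_settings):
        self.recommended = dict(recommended_settings)
        self.manual = dict(recommended_settings)
        self.use_manual = False

    def select_manual(self):
        self.use_manual = True

    def select_automatic(self):
        self.use_manual = False

    def revert_to_recommended(self):
        """Recover from over-adjusted manual settings."""
        self.manual = dict(self.recommended)

    def active_settings(self):
        return self.manual if self.use_manual else self.recommended
```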
Manual mixer 285 allows the pilot to override the functions of automatic mixer 265 by using level, pan, and equalization controls 280. A manual mixer typically has sliders that the user can move in order to control the levels of each of the channels, but any appropriate manual mixer could be used. Although controls 280 are drawn as separate from display 290, they could be packaged together, with controls 280 implemented as virtual controls on display 290, for example as virtual buttons or sliders on a touchscreen.
Speakers 270 and headsets 275 are alternative ways for the pilot to receive sound. Speakers 270 are ambient speakers while headsets or headphones 275 contain speakers next to one or both ears.
Cancellation functions 255 use active noise cancellation technology: microphones are placed in or near headsets 275 to monitor the incoming sound, and a waveform of opposite phase is constructed, which reduces the incoming sound by several dB.
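A minimal sketch of the phase-inversion idea follows; the 0.9 cancellation strength is an illustrative assumption (a real active noise cancellation system operates adaptively and in real time).

```python
import numpy as np

def cancel(ambient, strength=0.9):
    """Add an opposite-phase copy of the monitored sound, leaving a quieter
    residual: residual = (1 - strength) * ambient."""
    anti_phase = -strength * ambient
    return ambient + anti_phase

# Example: strength 0.9 leaves 10% of the amplitude, about a 20 dB reduction.
fs = 48000
t = np.arange(fs) / fs
residual = cancel(np.sin(2 * np.pi * 120 * t))
```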
Cancellation functions 255, frequency and amplitude analysis system 260, psycho-acoustic model 261, automatic mixer 265, and manual mixer 285 can be implemented in control circuitry through the use of logic gates, programmed logic devices, memory, or other hardware components. They could also be implemented as instructions executing on a computer processor.
Control then continues to block 315 where analysis system 260 detects the aircraft operations that do not have audible sound associated with them. There are a number of components and systems on an aircraft: engines, hydraulics, bleed air used for pressurization and gauges, control functions, electrical functions, and fuel transfer functions. Some of these components, such as the engines, produce sounds that a microphone can detect. But, others do not produce audible sound, such as switches and valves opening and closing, fuel moving from one side to another, and so forth. Yet, it still would be helpful to provide the pilot with audio feedback regarding the performance of these silent systems.
Control then continues to block 320 where analysis system 260 synthesizes sounds that correspond to the silent aircraft operations that were detected in block 315. Synthesized sounds are used to augment naturally occurring sounds with automatic indications of processes that would otherwise be silent.
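As an illustration of synthesized cues for silent operations, the sketch below generates short tone bursts and maps hypothetical events to them; the frequencies, event names, and function name are assumptions, not taken from the patent.

```python
import numpy as np

def synthesize_event_tone(freq_hz, duration_s=0.25, fs=48000):
    """Produce a short tone burst with a smooth fade-in/out envelope."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = np.sin(np.pi * t / duration_s) ** 2     # smooth attack and decay
    return envelope * np.sin(2 * np.pi * freq_hz * t)

# Hypothetical mapping of otherwise silent operations to audio cues.
event_tones = {
    "fuel_transfer_started": synthesize_event_tone(600.0),
    "valve_opened": synthesize_event_tone(880.0),
}
```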
Control then continues to block 325 where analysis system 260 determines masked signals based on the frequency and amplitude of the audio inputs and the psycho-acoustic model, as previously described above.
Referring again to
By examining the frequency contents of all the sound sources, analysis system 260 determines which sound sources are good candidates for selective frequency damping, which are good candidates for selective frequency boosting, which are candidates for overall level adjustments only, and which ones, because they have similar fundamental frequencies but different harmonic content, are good candidates for being well separated by selective panning. Analysis system 260 then adjusts the relative levels, equalization, and pan settings to optimally bring all of the sound sources to the acoustic surface.
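One way to sketch this candidate selection is shown below: sources whose dominant frequencies nearly coincide are flagged for panning apart, while the rest receive level adjustment only. The FFT-based analysis, the 50 Hz proximity threshold, and all names are illustrative assumptions rather than the patent's method.

```python
import numpy as np

def dominant_band(samples, fs=48000):
    """Return the frequency (Hz) of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(samples))
    return np.fft.rfftfreq(len(samples), 1.0 / fs)[np.argmax(spectrum)]

def plan_separation(sources, fs=48000, proximity_hz=50.0):
    """Flag sources with nearly coincident dominant frequencies for panning
    to distinct positions; the rest get level adjustment only."""
    dominant = {name: dominant_band(s, fs) for name, s in sources.items()}
    plan = {}
    for name, f in dominant.items():
        competitors = [other for other, g in dominant.items()
                       if other != name and abs(f - g) < proximity_hz]
        plan[name] = "pan_apart" if competitors else "level_only"
    return plan
```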
Control then continues to block 335 where analysis system 260 provides recommended settings of level, pan, and equalization to automatic mixer 265, manual mixer 285, and display 290 based on the unmasking strategy, as previously described above.
Referring again to
The present invention provides audio feedback regarding the operation of an aircraft to a pilot. Microphones are placed next to sound sources, which are components of the aircraft. Audio inputs are received from the microphones and analyzed based on a psycho-acoustic model to provide settings, such as level, pan, and equalization, to an automatic mixer. The automatic mixer mixes the sounds based on the settings and provides audio output to the pilot of the aircraft via a speaker or headphones. The purpose of the mixing functions, either automatic or manual, is to balance all of the auditory inputs, so that the pilot is able to acoustically monitor the operation of all of the sound sources simultaneously, which might otherwise be difficult or impossible to hear.