The disclosure relates to processing millimeter wave radar data to generate a graphical user display.
Aircraft, including helicopters and other rotorcraft, may need to operate in a degraded visual environment (DVE), which may be caused by darkness, dust, storms, sand, clouds, rain, blowing snow, mist, fog, or other factors. Some example situations where operating in a DVE may be necessary include rescue operations, medical evacuations (MEDEVAC), and military operations. In such situations, pilots may benefit from being able to identify obstacles such as cables, steep terrain, buildings, or other aircraft during flight or while landing. Systems installed in some aircraft may use a variety of sensors to detect and display hazards with varying degrees of success. For example, infra-red (IR) sensors have not been very successful when landing in a dusty environment subject to brown-out conditions (a DVE caused by blown dust).
The disclosure relates to processing millimeter wave radar data to generate a graphical user display for an aircraft display system.
In one example, an aircraft display system includes a plurality of sensors and one or more processors configured to receive a plurality of sensor inputs from the plurality of sensors; translate the plurality of sensor inputs into a signal; and output the signal for display at a display device operatively coupled to the one or more processors, wherein the signal output to the display device causes the display device to display a three-dimensional depiction of a region around an aircraft, wherein the three-dimensional depiction of the region around the aircraft comprises a volumetric representation, and wherein the one or more processors are further configured to identify hazards in the region around the aircraft and fix the volumetric representation to an aircraft coordinate location and an aircraft attitude.
In another example, a radar signal processing device includes one or more processors configured to receive a plurality of radar signal inputs from a plurality of radar receivers; translate the plurality of radar signal inputs into a display signal; and output the display signal to a display processing system operatively coupled to the one or more processors and a display device, wherein the display signal causes the display device to display a three-dimensional depiction of a region around an aircraft, wherein the three-dimensional depiction of the region around the aircraft comprises a display cylinder to identify and prioritize hazards in the region around the aircraft.
In another example of the techniques of this disclosure, a method includes receiving, from a plurality of sensors, a plurality of sensor inputs; translating the plurality of sensor inputs into a display signal; and transmitting, to a display device, the display signal, wherein the display signal causes the display device to display a three-dimensional depiction of a region around an aircraft, wherein the three-dimensional depiction of the region around the aircraft comprises a volumetric representation, and wherein the method further includes identifying hazards in the region around the aircraft and fixing the volumetric representation to an aircraft coordinate location and an aircraft attitude.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Techniques of this disclosure may enable an aircraft display system to present a real-time, three-dimensional depiction of a region around an aircraft, where this three-dimensional depiction is fixed to the aircraft's coordinate location and attitude. As the aircraft moves in attitude (e.g., roll, pitch, or yaw), in altitude (e.g., climbing and descending), and/or laterally, the three-dimensional depiction may tilt and move with the aircraft. The region around the aircraft may be depicted as a three-dimensional volumetric representation, such as a display cylinder, that identifies and prioritizes hazards in the region around the aircraft. The aircraft display system may combine data from a plurality of sensors into a composite, real-time, three-dimensional synthetic vision display and determine a priority for each hazard. The display may, for example, depict each hazard in a color code according to the priority for each hazard. This prioritized display, which moves with movement of the aircraft, may help to increase an aircraft operator's situational awareness.
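As one illustrative, non-limiting sketch, the data flow just described may be expressed in Python as follows. All names, data structures, and the priority scheme here are hypothetical and are used only to make the concept concrete: sensor-derived hazards are ordered by priority and packaged, together with the aircraft attitude, into a signal a display could render as a depiction that moves with the aircraft.

```python
# Minimal sketch (hypothetical names) of the display-pipeline concept:
# fuse sensor-derived hazards, prioritize them, and emit a display signal
# that carries the aircraft attitude so the depiction tilts with the aircraft.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Hazard:
    position: Tuple[float, float, float]  # meters, relative to aircraft origin
    priority: int                         # 1 = highest (assumed encoding)

def prioritize(hazards: List[Hazard]) -> List[Hazard]:
    """Order hazards so the most urgent are drawn first."""
    return sorted(hazards, key=lambda h: h.priority)

def build_display_signal(hazards: List[Hazard],
                         attitude_deg: Tuple[float, float, float]) -> dict:
    """Translate prioritized hazards into a (hypothetical) display signal.

    Hazard positions are expressed relative to the aircraft origin, so the
    volumetric representation stays fixed to the aircraft; the attitude is
    carried along so the display can tilt with the aircraft.
    """
    return {
        "attitude_deg": attitude_deg,  # roll, pitch, yaw
        "hazards": prioritize(hazards),
    }

signal = build_display_signal(
    [Hazard((120.0, 35.0, -10.0), 2), Hazard((40.0, -5.0, 0.0), 1)],
    attitude_deg=(2.0, 8.5, 270.0),
)
print(signal["hazards"][0].priority)  # 1: the most urgent hazard is drawn first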
Commercial and military aircraft, and even some ground vehicles, include various types of synthetic vision systems (SVS). Some SVSs may combine different technologies such as real-time sensors, previously gathered digital terrain data, and automated helicopter flight controls. Some systems may enable the pilot to “see” in a degraded visual environment (DVE) and guide the helicopter to a preset point or let the helicopter land itself while the pilot watches over the landing zone. Some examples of issues that pilots operating in a DVE face include reduced peripheral vision from night vision goggles, brown-out or white-out conditions when landing in a dusty or snowy environment, and hazards, such as vehicles, that move into a landing zone that may have previously seemed free of hazards. As one example, drifting in a dust cloud when close to touchdown may make helicopters prone to lateral rollover or ground collisions. A helicopter must normally touch down with no left-right movement: with any left-right movement, the skids or wheels may catch the ground, causing the helicopter to tilt over and the main rotors to strike the ground or other obstacles. Such events may result in serious damage to the aircraft and/or injury to the aircraft operator.
Some systems that address these issues include laser-based landing systems that may provide an aircraft operator with information related to ground speed and drift, airspeed, altitude, and wind speed and direction on a cockpit hover display. Helicopter crews may learn to use cockpit hover symbology to make safe brownout landings and to make rolling landings to keep ahead of the dust cloud. Other systems include an “altitude hold hover stabilization system” capable of near-automatic landings. Such a system may fly the aircraft to a two- to three-foot hover over a programmed point, so the pilot can land safely. With precision hover symbology and enhanced flight controls, helicopter pilots may learn to interpret the symbology to operate in a DVE, but such a system does not offer “see-through” capability for brownout (dust) or whiteout (snow) conditions.
Another technique for operating in a DVE may be to take a photograph of an area where the aircraft will be operating and overlay the photograph on a synthetic vision display (SVD) that may combine inputs from infra-red, electro-optical, radar, and other sensors. The photograph may be taken weeks, days, or seconds prior to the aircraft's expected arrival time at the location. Such a system may be called a “see-and-remember” photographic landing augmentation system. The crew “sees” the picture of the landing zone approach on a synthetic vision display. On the display, the photograph may be registered on the hidden landing zone along with altitude, height above ground, distance from landing zone, speed, and heading symbology.
In one example, a “see-and-remember” system may use a high-resolution camera with other sensors, such as an infrared strobe and laser rangefinder, to image a landing zone seconds before the helicopter enters the brownout cloud. On the display, the image may be registered and overlaid on the hidden landing zone to create an approach display with altitude, height above ground, distance from landing zone, speed, and heading symbology. The synthetic vision system may combine the photograph, along with input from other sensors, into a display of the aircraft approaching the landing zone. See-and-remember systems may provide a near real-time display using cues pilots can quickly interpret, but may not be able to detect small hazards or new hazards that appear, such as moving ground vehicles.
The techniques described in this disclosure may address some of the shortcomings of the systems described above. The techniques of this disclosure, used either individually or in conjunction with the systems described above, may improve situational awareness for the aircraft operator. For example, during a landing approach, an aircraft with a see-and-remember system may show the approach display in near real-time. This near real-time approach display, combined with the real-time display cylinder fixed to the aircraft coordinate location in accordance with techniques of this disclosure, may give the pilot significant situational awareness both of where the aircraft will be (from the approach display) and of where the aircraft is now (from the display cylinder). A three-dimensional synthetic vision display cylinder, fixed to the aircraft, can depict each hazard, for example with a color code according to that hazard's priority, with 360-degree coverage for survivability. In this way, the prioritized hazard display stays locked to movement of the aircraft, which may allow a pilot to quickly interpret nearby hazards and operate safely in a DVE.
In this example, three-axis coordinate systems 12A and 12B have their origins at coordinate locations 16A and 16B, shown in this example as at or near the centers of aircraft 14A and 14B, respectively. In other examples, coordinate locations 16A and 16B may be set at any point near aircraft 14A and 14B. Some examples include placing coordinate location 16A at the forward-most point of aircraft 14A, at the top of the main rotor, or centered on the skids where the skids would touch the ground.
A synthetic vision display system (not shown) inside the cockpit of helicopter 14A may present display cylinder 18B to the helicopter operator as an example three-dimensional depiction of the physical volumetric region of airspace 18A around example helicopter 14A. Helicopter 14B may be a depiction, within the synthetic vision display, of physical helicopter 14A.
In this example, SVD 20 may depict mountains 22A and other terrain on either side of a valley. The terrain information may be contained in a terrain database and be collected from a plurality of sources that may be updated monthly, annually, or at other intervals. For example, light detection and ranging (LIDAR) technology may map terrain by illuminating terrain with a laser and analyzing the reflected light. LIDAR mapping flights may be conducted over terrain as needed to update a terrain database.
SVD 26 depicts a synthetic vision approach display that may combine terrain information 22B with real-time or near real-time information from other sensors to show hazards 29 that may impact aircraft operation. For example, aircraft 14A may have a forward looking infra-red (FLIR) or a see-and-remember system, as described above. Unlike an infrared (IR) or electro-optical (EO) system, a system using millimeter wave radar (MMWR) sensors may have the capability to penetrate sand, dust, snow, rain, and other environmental obscurants. The system may combine the terrain database information and sensor information and output a signal to the synthetic vision system, as shown in SVD 26. In another example, an aircraft may be operating in a city with buildings, transmission towers, and other hazards. Where SVDs 20 and 26 show mountains 22A and 22B, the same system operating in a city may show tall buildings, bridges, and other hazards to, for example, a MEDEVAC helicopter taking a patient to a hospital. A terrain database that is updated every few months may include hazards such as buildings, factory chimneys, and electrical power transmission lines. However, the database may not include new construction or renovation where, for example, a tower crane may have been recently installed. An approach display such as SVD 26, which incorporates real-time or near real-time sensor information, may therefore reveal hazards that do not yet appear in the terrain database.
In this example, the three-dimensional region 318 around the aircraft may be considered a volumetric representation of airspace. The volumetric representation may be a cylinder, a cube, or any other three-dimensional shape. Each point in the volumetric representation may be mapped in relation to a central point, such as coordinate location 316, which is similar to coordinate location 16 described above.
In other examples, each sub-cylinder position may be designated by spherical coordinates. A system using spherical coordinates may designate each sub-cylinder location by its distance and angles from aircraft coordinate location 316: radial, azimuth, and polar. The radial may be the distance from aircraft coordinate location 316, and the azimuth and polar may be the horizontal and vertical angles, respectively. So, for example, sub-cylinder AAAA may be designated as (Ra, θa, φa) and sub-cylinder AABB as (Rb, θb, φb).
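A minimal sketch in Python of such a designation is shown below, assuming conventional spherical-coordinate formulas and an x-forward, y-right, z-up axis convention; the disclosure does not prescribe particular axes, so these conventions are assumptions.

```python
# Sketch of designating a sub-cylinder location in spherical coordinates
# (R, theta, phi) relative to aircraft coordinate location 316. The axis
# conventions (x forward, y right, z up) are assumptions for illustration.
import math
from typing import Tuple

def to_spherical(x: float, y: float, z: float) -> Tuple[float, float, float]:
    """Return (radial, azimuth, polar) for an offset from the aircraft origin.

    radial  -- straight-line distance from coordinate location 316 (meters)
    azimuth -- horizontal angle in the x-y plane (radians)
    polar   -- vertical angle measured from the z axis (radians)
    """
    radial = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    polar = math.acos(z / radial) if radial > 0.0 else 0.0
    return radial, azimuth, polar

# A sub-cylinder 100 m ahead, 100 m right, and 50 m below the aircraft:
r_a, theta_a, phi_a = to_spherical(100.0, 100.0, -50.0)
print(f"R={r_a:.1f} m, azimuth={math.degrees(theta_a):.1f} deg, "
      f"polar={math.degrees(phi_a):.1f} deg")  # R=150.0 m, 45.0 deg, 109.5 deg
```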
Each sub-cylinder may be further mapped to a location within a three-dimensional memory matrix, for example as sketched below.
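One way such a mapping might be realized is sketched below; the matrix dimensions, range limit, and priority encoding are illustrative assumptions, not values taken from the disclosure.

```python
# Sketch of mapping each sub-cylinder to a three-dimensional memory matrix.
import numpy as np

N_RADIAL, N_AZIMUTH, N_POLAR = 32, 64, 16  # matrix dimensions (assumed)
MAX_RANGE_M = 2000.0                       # radial extent of the volume (assumed)

# 0 = empty; a higher value = a higher-priority hazard in that sub-cylinder.
matrix = np.zeros((N_RADIAL, N_AZIMUTH, N_POLAR), dtype=np.uint8)

def sub_cylinder_index(radial: float, azimuth: float, polar: float):
    """Quantize spherical coordinates (R, theta, phi) into matrix indices."""
    i = min(int(radial / MAX_RANGE_M * N_RADIAL), N_RADIAL - 1)
    j = int((azimuth % (2 * np.pi)) / (2 * np.pi) * N_AZIMUTH)
    k = min(int(polar / np.pi * N_POLAR), N_POLAR - 1)
    return i, j, k

# Record a priority-3 hazard detected 150 m out at 45 deg azimuth, level:
matrix[sub_cylinder_index(150.0, np.pi / 4, np.pi / 2)] = 3
```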
Techniques to implement a three-dimensional depiction of a region around an aircraft may include the display cylinder described above as well as other techniques to depict a volumetric representation of airspace on a computing system. Some examples include a rectangular cube shape made up of a plurality of sub-cubes or any other three-dimensional shape. Also, a computing system may implement display of the region around an aircraft by using the three-axis or spherical coordinate systems described above, as well as any other viable technique to designate locations in three-dimensional space. Any similar volumetric representation of the airspace may be mapped to a three-dimensional memory matrix accessible to a processor. The processor may perform calculations and functions using the three-dimensional matrix, for example as sketched below.
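As an illustrative example of such a calculation, the sketch below, under the same assumed matrix layout as above, scans the matrix for the closest known hazard.

```python
# Illustrative calculation using the three-dimensional matrix: find the
# closest occupied sub-cylinder (the nearest known hazard). The layout and
# dimensions follow the assumed sketch above.
from typing import Optional

import numpy as np

N_RADIAL, N_AZIMUTH, N_POLAR = 32, 64, 16
MAX_RANGE_M = 2000.0

def nearest_hazard_range(matrix: np.ndarray) -> Optional[float]:
    """Return the approximate range in meters of the closest hazard, if any.

    The first matrix axis is the radial bin, so the minimum occupied index
    along axis 0 corresponds to the nearest sub-cylinder.
    """
    occupied = np.argwhere(matrix > 0)
    if occupied.size == 0:
        return None
    nearest_bin = int(occupied[:, 0].min())
    return (nearest_bin + 0.5) * MAX_RANGE_M / N_RADIAL

matrix = np.zeros((N_RADIAL, N_AZIMUTH, N_POLAR), dtype=np.uint8)
matrix[2, 8, 8] = 3    # hazard roughly 150 m out
matrix[10, 0, 8] = 1   # hazard roughly 650 m out
print(nearest_hazard_range(matrix))  # 156.25: center of radial bin 2, in meters
```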
Synthetic vision processing system 110 is configured to receive inputs from aircraft platform navigation systems 112, terrain database 118, 3D radar signal processing device 120, and other sensors 114. Synthetic vision processing system 110 is further configured to transmit display signal 128 to display 116, which is operatively coupled to synthetic vision processing system 110. Synthetic vision processing system 110 may also receive display control signal(s) 129 in response to input from an operator, such as a flight crew member, to control the size, contrast, or other features of display 116.
Radar signal processing device 120 may transmit radar image signal 126 to synthetic vision processing system 110 and receive radar control signal 127. Platform navigation system 112 may, for example, include a global positioning system (GPS) and gyroscope- and accelerometer-based instruments that provide aircraft attitude, direction, and position information, along with other navigation systems such as VHF omnidirectional range (VOR) systems. The aircraft platform may further include systems such as electronic warfare (EW) systems, weapon systems, and command, control, communications, computers, intelligence, surveillance, and reconnaissance systems. Other sensors 114 may, for example, include forward looking infra-red (FLIR) sensors, laser range finders, traffic collision avoidance systems (TCAS), and similar devices.
Radar signal processing device 120 may receive inputs from a suite of radar receivers 122A-122N and 125. In this example, radar receivers 122A-122N may be shorter range MMWR receivers arranged to provide coverage around the aircraft.
Radar signal processing device 120 may correlate details of the 3D data according to a 3D correlation algorithm to provide 3D radar image signal 126. Radar signal processor 124 may use radar data fusion and 3D processing of returns from the shorter range MMWR receivers 122A-122N to provide 360° volumetric coverage for the helicopter or other rotary wing aircraft.
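The disclosure does not detail the 3D correlation algorithm itself; the sketch below illustrates one simple stand-in fusion rule, in which each receiver's detection grid is combined into a shared volume by keeping the strongest report wherever coverage overlaps.

```python
# Illustrative stand-in for the fusion step: combine per-receiver detection
# grids by keeping the strongest return in each sub-cylinder. This max-fusion
# rule is an assumption, not the disclosure's correlation algorithm.
import numpy as np

N_RADIAL, N_AZIMUTH, N_POLAR = 32, 64, 16  # assumed matrix dimensions

def fuse(receiver_grids):
    """Fuse per-receiver detection grids into one 360-degree volume.

    Each grid holds a detection confidence per sub-cylinder (0 = no return).
    The element-wise maximum keeps the strongest report wherever the
    receivers' fields of view overlap.
    """
    fused = np.zeros((N_RADIAL, N_AZIMUTH, N_POLAR), dtype=np.uint8)
    for grid in receiver_grids:
        np.maximum(fused, grid, out=fused)
    return fused

# Two receivers reporting the same sub-cylinder with different confidence:
rx_a = np.zeros((N_RADIAL, N_AZIMUTH, N_POLAR), dtype=np.uint8)
rx_b = np.zeros_like(rx_a)
rx_a[5, 10, 8] = 2
rx_b[5, 10, 8] = 4  # stronger return in the overlapping cell
print(fuse([rx_a, rx_b])[5, 10, 8])  # 4
```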
Synthetic vision processing system 110 may receive 3D radar image signal 126 and combine it with the signals from terrain database 118, platform navigation 112, and other sensors 114. Synthetic vision processing system 110 may translate the various signals into a display signal 128 for output to display 116. Display signal 128 may cause display 116 to display a three-dimensional depiction of a region around the aircraft, as described above.
As discussed above, display cylinder 18B stays fixed to the aircraft attitude and coordinate location 16B. Synthetic vision processing system 110 may rotate and tilt display cylinder 18B to match the attitude of the aircraft based on signals from instruments using gyros and accelerometers that detect aircraft roll, pitch, and yaw. These roll, pitch, and yaw signals may come from platform navigation 112 and from other sensors 114. For example, during an approach to a landing zone, a helicopter may need to make a steep approach. Unlike a fixed wing aircraft that may approach a landing zone with the aircraft's nose pitched below the horizontal on a steep approach, a helicopter, or other rotorcraft, may approach the landing zone with the aircraft nose pitched substantially above the horizontal. The roll, pitch, and yaw instruments within platform navigation 112 and other sensors 114 may detect the helicopter's nose-up attitude and send signals to synthetic vision processing system 110, which may rotate display cylinder 18B to match the helicopter attitude. Synthetic vision processing system 110 may adjust the locations within the three-dimensional matrix described above to correspond to the rotated display cylinder.
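A minimal sketch of such a rotation is shown below; the axis conventions, rotation order, and sign conventions are assumptions for illustration only, since the disclosure does not prescribe them.

```python
# Sketch of tilting the display volume with the aircraft attitude. Axis and
# sign conventions (x forward, y right, z up; positive pitch = nose up) are
# assumptions for illustration.
import numpy as np

def world_to_body(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Build a rotation matrix taking world-frame offsets into the body frame.

    Angles are in radians. Yaw is applied first, then pitch, then roll,
    a common aerospace ordering (assumed here).
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    r_roll = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    r_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_yaw = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    return r_roll @ r_pitch @ r_yaw

# Nose pitched 15 degrees above the horizontal on a steep approach:
R = world_to_body(roll=0.0, pitch=np.radians(15.0), yaw=0.0)
hazard_world = np.array([100.0, 0.0, 0.0])  # hazard 100 m dead ahead, level
print(R @ hazard_world)  # ~[96.6, 0.0, -25.9]: the hazard sits below the
                         # nose axis, so the display cylinder tilts with it
```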
Display 116 may have a display control (not shown), which may be composed of soft keys, a touch screen, a keypad, or another similar input device. Display 116 may be a multi-function display (MFD) or a primary flight display (PFD). This disclosure may refer to display 116 as a “display device.” The terms “display” and “display device” may be used as interchangeable nouns, unless context uses “display” as a verb. Synthetic vision processing system 110 may cause either the MFD or the PFD to display symbology, images, or other information. For example, synthetic vision processing system 110 may cause the PFD to show display cylinder 18B, as shown in the examples above.
Note that although the examples above depict a display cylinder of a given size, an operator may adjust the size of the display cylinder to suit the operating environment.
For example, an aircraft in DVE conditions may need to operate by “whisker flying.” Whisker flying means operating by reference only to what the aircraft's sensors can detect. The term comes from animals such as cats, mice, or cockroaches that have whiskers or antennae. A cat, for example, may navigate a maze and know whether it can squeeze through a small opening before becoming stuck. The cat's whiskers detect how far the cat is from a wall and how wide an opening is, even in total darkness. Similarly, a mouse may determine whether it can fit under a door or through a small opening by sensing the size of the opening with its whiskers. In the same way, an aircraft pilot may determine how to safely navigate through a valley, or a series of buildings, by consulting the synthetic vision system. Depending on the circumstances, the aircraft pilot may adjust the size of the display, as described in this disclosure, to provide the best information to safely navigate and complete the mission.
Synthetic vision processing system 110 may receive a plurality of sensor inputs from the plurality of sensors, including 3D radar image signal 126, and may identify hazards in the region around the aircraft.
Synthetic vision processing system 110 may subdivide a volumetric representation of the airspace in the region around the aircraft into a plurality of sub-cylinders (306). Though this example describes sub-cylinders, the volumetric representation may instead be subdivided into sub-cubes or other three-dimensional shapes, as described above.
Synthetic vision processing system 110 may assign a color code to each sub-cylinder corresponding to each hazard location (310). For example, higher priority hazards may be assigned more conspicuous colors than lower priority hazards, as sketched below.
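For illustration only, one possible priority-to-color mapping is sketched below; the specific palette is an assumption, as the disclosure calls for color coding by priority but does not prescribe particular colors.

```python
# Sketch of assigning a color code to each sub-cylinder according to hazard
# priority. The palette (red for the most urgent) is an assumption.
from typing import Tuple

PRIORITY_COLORS = {
    3: (255, 0, 0),    # priority 3: immediate hazard -> red
    2: (255, 191, 0),  # priority 2: caution -> amber
    1: (0, 255, 0),    # priority 1: advisory -> green
}
EMPTY = (0, 0, 0)      # unoccupied sub-cylinder: nothing drawn

def color_for(priority: int) -> Tuple[int, int, int]:
    """Map a sub-cylinder's hazard priority to an RGB color code."""
    if priority <= 0:
        return EMPTY
    # Unknown higher priorities fall back to the most conspicuous color.
    return PRIORITY_COLORS.get(priority, PRIORITY_COLORS[3])

print(color_for(3))  # (255, 0, 0)
print(color_for(0))  # (0, 0, 0): empty sub-cylinders are not highlighted
```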
Synthetic vision processing system 110 may combine data from the plurality of sensors and translate the combined data into a composite, real-time, three-dimensional synthetic vision display signal (312). Synthetic vision processing system 110 may further transmit the display signal to display 116 (314). The display signal may cause display 116 to display a three-dimensional depiction of a region around an aircraft (320), where the display may include a display cylinder, such as display cylinder 18B depicted in the examples above.
Display 116 may have a display control (not shown), which may receive inputs from an operator, such as a helicopter pilot or other crew member (322). The display control may send display control signal 129 to synthetic vision processing system 110 (324), which may further translate display control signal 129 into one or more output signals. These one or more output signals may control a signal processing device to increase or decrease the size of display cylinder 18B.
Synthetic vision processing system 110 may translate the plurality of sensor inputs into a display signal (402) and transmit the display signal to display 116 (404). As noted above, display signal 128 transmitted to display device 116 may cause the display device to display a three-dimensional depiction of a region around an aircraft (406). The three-dimensional depiction of the region around the aircraft may be a volumetric representation, such as the display cylinder shown in the examples above.
Further in conjunction with these techniques, display signal 128 from synthetic vision processing system 110 may fix display cylinder 18B to the aircraft coordinate location 16B and the aircraft attitude, as depicted in the examples above.
Synthetic vision processing system 110 may also adjust the size of display cylinder 18B in response to inputs by an operator. Synthetic vision processing system 110 may receive display control signals 129 in response to input from an operator to change the size of display cylinder 18B. The size may depend on the circumstances. For example, if landing on a hospital helipad where there may be buildings nearby, the operator may choose a smaller size than if conducting “whisker flying” operations in a valley.
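A minimal sketch of such a resize control is shown below; the limits, step size, and function names are illustrative assumptions rather than values from the disclosure.

```python
# Sketch of resizing the display cylinder in response to a display control
# input (signal 129). Clamping limits and step size are assumptions.

MIN_RADIUS_M = 50.0    # tight radius, e.g., a hospital helipad among buildings
MAX_RADIUS_M = 2000.0  # wide radius, e.g., "whisker flying" through a valley
STEP_M = 50.0

def resize_cylinder(radius_m: float, control: str) -> float:
    """Grow or shrink the displayed cylinder radius, within safe limits."""
    if control == "increase":
        radius_m += STEP_M
    elif control == "decrease":
        radius_m -= STEP_M
    return max(MIN_RADIUS_M, min(MAX_RADIUS_M, radius_m))

radius = 500.0
radius = resize_cylinder(radius, "decrease")  # operator shrinks for a helipad
print(radius)  # 450.0
```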
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, FPGA, solid state memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, general purpose graphics processors, high speed backplane interconnects such as RapidIO or PCIe, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware, such as real-time operating system software.
Various embodiments of the disclosure have been described. These and other embodiments are within the scope of the following claims.