This invention relates to stereoscopic displays and more particularly relates to a display apparatus and method for stereoscopic presentation of position-sensing data.
Various types of radar systems are used in government and military applications, such as for early warning devices that show the approach of aircraft, missiles, ships, and land vehicles. Conventionally, a flat, circular CRT serves as the radar display, showing dots or “blips” for detected objects, updated with each sweep of the radar signal. Fiducial “rings” on the CRT surface help to indicate range increments. For many types of applications, this type of two-dimensional display has proved sufficiently usable for skilled operators who can interpret the displayed data.
Civilian air traffic control systems also use radar as a means for tracking and guiding aircraft movement. For this type of application, however, the limitations of the conventional CRT display are readily apparent. Two dots appearing on the flat CRT screen may indicate aircraft that are at very different altitudes, for example. It requires highly skilled personnel to translate between the limited scope of the radar display CRT and the three-dimensional, real-world objects that it represents, particularly since the tracked objects are in motion. As air traffic continues to grow in volume, the risk of mistakes that can jeopardize life and property increases.
Although the CRT only represents two-dimensional data, the radar system itself actually obtains three-dimensional data on detected moving objects. As shown in
There have been a number of solutions proposed for stereoscopic and three-dimensional imaging that can be used in radar and avionics applications. For example:
It can be appreciated that there would be significant benefits to a display system that provided a stereoscopic, three-dimensional view of radar and tracking system data. Equipped with such a display, an air traffic controller could be provided with a view of the full volume of air space around an airport, for example. Such a display could use data from a single radar system to render a viewable stereo representation, rather than requiring that two separate radar systems provide two separate image sources, as is typically needed for conventional stereo image forming apparatus.
The present invention is directed to overcoming one or more of the problems set forth above. Briefly summarized, according to one aspect, the present invention provides a method for display of radar data comprising:
It is a feature of the present invention that it uses data from multiple radar scans by a single radar device to display stereoscopic data.
It is an advantage of the present invention that it provides an effective way to adapt radar data for viewing in three dimensions. This data can then be used to provide a stereoscopic visualization of detected objects, using any of a number of stereoscopic and autostereoscopic displays.
These and other objects, features, and advantages of the present invention will become apparent to those skilled in the art upon a reading of the following detailed description when taken in conjunction with the drawings wherein there is shown and described an illustrative embodiment of the invention.
While the specification concludes with claims particularly pointing out and distinctly claiming the subject matter of the present invention, it is believed that the invention will be better understood from the following description when taken in conjunction with the accompanying drawings, wherein:
The present description is directed in particular to elements forming part of, or cooperating more directly with, apparatus in accordance with the invention. It is to be understood that elements not specifically shown or described may take various forms well known to those skilled in the art.
Forming a Stereoscopic Image
The present invention uses radar data from successive scans along with related tracking apparatus data in order to generate a stereoscopic image. In order to better understand the operation of the present invention, it is first useful to review how stereoscopic images are formed.
The term “stereoscopic image” implies that a pair of images is formed: one right-eye image and one left-eye image. This pair of images can be more precisely termed a stereoscopic pair of images. For the description that follows, the term stereoscopic image can be considered equivalent to a stereoscopic pair of images.
As a general principle, a stereoscopic image is formed by presenting each eye with an appropriate two-dimensional image, wherein the image for the left eye is at a slightly different lateral perspective from the image for the right eye. Referring to
Techniques for generating a stereoscopic pair of images such as those represented in
In conventional practice, objects displayed three-dimensionally are represented within a volume by “wireframe” triangles and polygons. A three-dimensional wireframe is computed for each object, then transformed into a two-dimensional representation by projecting the wireframe shape onto a computed two-dimensional surface that is disposed at a suitable location. The wireframes are then assigned surfaces that represent the visible exterior form of the object. Imaging software then renders the surfaces, providing realistic textures, shading, and shadows. In this rendering, each pixel is typically represented by a data structure that provides color and distance information. For example, a 32-bit word may be assigned, with 8 bits for each color (Red, Green, and Blue, or RGB) and 8 bits for a z-axis distance that represents distance from the viewer. Objects having a short z-axis distance are close to the viewer and are rendered at a larger scale than objects having a longer z-axis distance. Thus, two objects having the same horizontal (x-axis) and vertical (y-axis) position are positioned relative to each other based on their z-axis distance. An object at a short z-axis distance may then block the visibility of another object at the same x- and y-axis position, but having a larger z-axis distance.
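By way of illustration, the 32-bit pixel word described above can be modeled as follows; the particular bit layout and the function names are illustrative assumptions, not part of the description itself:

```python
def pack_pixel(r, g, b, z):
    """Pack 8-bit R, G, B values and an 8-bit z-axis distance into one
    32-bit word.

    Layout (an illustrative choice): bits 31-24 = R, 23-16 = G,
    15-8 = B, 7-0 = z, where z = 0 is closest to the viewer.
    """
    for v in (r, g, b, z):
        if not 0 <= v <= 255:
            raise ValueError("each field must fit in 8 bits")
    return (r << 24) | (g << 16) | (b << 8) | z

def z_distance(word):
    """Extract the 8-bit z-axis distance from a packed pixel word."""
    return word & 0xFF

def nearer(word_a, word_b):
    """Return the pixel word whose object is closer to the viewer.

    An object with the shorter z-axis distance occludes one at the
    same x- and y-axis position with a longer z-axis distance.
    """
    return word_a if z_distance(word_a) <= z_distance(word_b) else word_b
```

A depth comparison of this kind is what determines which of two overlapping objects remains visible at a given screen position.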
In early embodiments of visualization software, the computer central processing unit (CPU) had the task of performing the necessary calculations to provide two-dimensional renderings of three-dimensional data. More recently, dedicated processors from visualization processing vendors provide this function. For example, chip sets and dual graphics boards provided by Nvidia Corporation, such as the Quadro FX-1000 or FX-2000 hardware, relieve the CPU of these computation-intensive tasks. These processors are capable of generating both single two-dimensional images and stereoscopic left- and right-eye two-dimensional images using a wireframe model. Thus, a viewer can be presented with stereoscopic left- and right-eye images for display.
A three-dimensional appearance can be generated by moving the reference plane onto which the two-dimensional image is projected from the three-dimensional wireframe model. By continuously moving this reference plane, an illusion of depth is provided, showing a textured, shaded, rotating three-dimensional object on a two-dimensional display. The type of view generated is commonly referred to as three-dimensional; however, this view is not truly stereoscopic. Creating a pair of two-dimensional views from slightly different perspectives at each reference plane location and presenting them to the observer on a stereo display allows a true stereoscopic image of the rotating three-dimensional object to be rendered.
Conventional video games typically employ software such as Microsoft DirectX software for this image manipulation and representation function. Conventional CAD and medical imaging apparatus may employ tools such as OpenGL software for this purpose. For stereo rendering, a Quad Buffered Stereo Mode, supported by OpenGL software, can be enabled. In many applications, a computer running this specialized software is set up for dual displays in a “clone” mode.
Thus, it can be seen that there are existing, commercially available tools for providing left- and right-eye stereo image pairs by generating two-dimensional images from slightly different perspectives, as was noted with reference to
Radar Scanning
In order to understand how radar images are obtained, it is instructive to review how radar scanner 10 operates. Referring back to
As the radar beam is detected at aircraft 12, transponder apparatus 14 responds by transmitting data on altitude, air speed, type of aircraft, flight plan, and other information. This information is then available in addition to the range and azimuth positional data obtained. Using this capability, each radar scan gives sufficient range, azimuth angle, and altitude data to fix the location of an aircraft within a volume surrounding the radar system.
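As a simplified illustration of how one scan fixes a location in the volume, the following sketch converts the slant range and azimuth obtained by the radar, together with the transponder-reported altitude, into Cartesian coordinates. This is a flat-Earth approximation, and all function and variable names are illustrative rather than taken from the description above:

```python
import math

def radar_fix(slant_range_m, azimuth_deg, altitude_m):
    """Convert one radar return plus transponder altitude to (x, y, z).

    Simplified flat-Earth model: azimuth is measured clockwise from
    north; the ground range is recovered from the slant range and the
    reported altitude, then resolved into east (x) and north (y)
    components. The z coordinate is the transponder altitude.
    """
    if altitude_m > slant_range_m:
        raise ValueError("altitude cannot exceed slant range")
    ground_range = math.sqrt(slant_range_m**2 - altitude_m**2)
    az = math.radians(azimuth_deg)
    x = ground_range * math.sin(az)  # east component
    y = ground_range * math.cos(az)  # north component
    return (x, y, altitude_m)
```

Repeating this conversion on successive sweeps yields the sequence of three-dimensional positions from which motion can be tracked.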
Procedure for Forming an Image
The present invention utilizes two or more successive sweeps of the beam from radar scanner 10 to generate a stereo representation of aircraft 12 location and trajectory. Then, employing conventional algorithmic techniques, additional information on trajectory can be calculated and stereo images from slightly different perspectives obtained, as was described with reference to
The overall procedure for forming images according to the present invention can be illustrated with reference to
For forming a stereoscopic pair of images, two slightly different laterally displaced reference points are used, as represented by R and R′ in
The first and second radar scans in steps (i) and (ii) may be temporally spaced at variable intervals. Multiple scans of the same moving object, at different times, allow a continuously moving icon to be rendered from the data. The predetermined volume is typically the volume scanned by the radar, such as cone 26 in
Steps (i)-(vi) above apply for each stereoscopic image obtained and are repeated in order to display the position of the continuously moving object, updating the position of the assigned icon accordingly. Reference point R may be outside of the volume, as shown in
To rotate an object as a stereoscopic image, the imaging software changes the reference point of step (v) above by reassigning its reference plane. The three-dimensional icon is then projected onto a pair of reference planes at slightly different lateral positions, allowing updated two-dimensional images to be formed for left- and right-eye viewing at each position to which the reference planes are moved.
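The projection onto a pair of laterally displaced reference planes can be sketched with a minimal pinhole-projection model. The separation value and the function names below are illustrative assumptions:

```python
def project(point, eye_x, focal=1.0):
    """Perspective-project a 3-D point onto a reference plane in front
    of a viewpoint at (eye_x, 0, 0) looking along +z.

    Returns the 2-D (u, v) coordinates on that plane.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the reference plane")
    u = focal * (x - eye_x) / z
    v = focal * y / z
    return (u, v)

def stereo_pair(point, separation=0.065):
    """Project one 3-D icon position for left- and right-eye views.

    The two reference points are displaced laterally by 'separation'
    (an assumed interocular distance, in scene units).
    """
    half = separation / 2.0
    left = project(point, -half)   # left-eye reference point
    right = project(point, +half)  # right-eye reference point
    return left, right
```

Sliding both reference points together around the volume, and recomputing the pair at each position, produces the updated left- and right-eye views for a rotating presentation.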
Using steps (i)-(vi) above allows an improved display capability relative to conventional radar CRT displays, allowing three-dimensional views of airspace from any of a number of different perspectives. Actual altitudes for aircraft 12 can thus be graphically represented and displayed from a number of perspectives.
Determining and Representing Direction
In addition to position, the method and apparatus of the present invention also allow display of aircraft 12 type and direction. Referring to
As was noted in the background material given above, altitude data is typically obtained from on-board transponder apparatus 14. In addition, information on the type of aircraft 12 is typically also available. This additional information makes it possible to graphically represent aircraft 12 type as well as direction, as shown in
Referring to
In addition to inventive use of icons 34, the present invention also allows the use of reference markings such as fiducial rings 16 in
In addition, various types of tracking patterns could be displayed. For example, the trajectory of an aircraft over an interval of time can be rendered to show a previous travel path by maintaining transparent vestigial images showing previous positions, with these images becoming increasingly transparent with time, creating a “ghosting” effect. In addition, forward projection of trajectories can be computed and rendered in a similar manner to predict an anticipated travel path, as an aid in detecting possible collisions, for example. In one embodiment, forward projection of an aircraft's course simply extrapolates the anticipated travel path by extending the travel path information obtained with reference to
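A minimal sketch of these two enhancements, assuming simple linear extrapolation from the two most recent position fixes and an arbitrary illustrative per-scan fading rate for the vestigial images:

```python
def extrapolate(p_prev, p_curr, steps=1):
    """Linearly extend the travel path: from the two most recent
    position fixes, predict the position 'steps' scan intervals ahead.
    """
    return tuple(c + steps * (c - p) for p, c in zip(p_prev, p_curr))

def ghost_opacity(age_scans, decay=0.5):
    """Opacity (1.0 = fully opaque) for a vestigial 'ghost' image that
    is 'age_scans' scans old; older images are increasingly transparent.
    """
    return max(0.0, 1.0 - decay * age_scans)
```

A real predictive utility might fit more than two fixes or account for turning flight paths; the linear form above is the simplest case.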
Color coding of objects and markers may also be useful for identifying large aircraft with more turbulent wakes, for indicating an aircraft type (such as private, commercial, or military) or for defining altitude corridors. Such optional tracking and display enhancement features could be enabled for all objects or markers, or enabled/disabled selectively by the viewer. For example, a “ghosting” or tracking pattern function or a predictive tracking utility that shows the most likely travel path may be helpful where air traffic is light to moderate; however, these enhancements could tend to obscure visibility in heavy traffic and thus would need to be disabled.
Display Apparatus
While the image forming techniques of the present invention could be suitably adapted to any of a number of different types of displays, there would be particular advantages to a display that provides good depth perception and is well-suited to three-dimensional display. A display system of this type that provides a virtual image using pupil imaging is disclosed in commonly-assigned U.S. Pat. No. 6,755,532 (Cobb), incorporated herein by reference.
Because it provides good depth perception and a wide field of view, the virtual image provided by autostereoscopic display apparatus 100 is particularly advantageous for the three-dimensional display requirements of the present invention.
Displayed Data
Autostereoscopic display apparatus 100 is particularly well suited to the three-dimensional display requirements of the present invention. Referring to
Alternate Embodiment for Display of Doppler Radar Data
The present invention is well-suited for displaying aircraft and other fast-moving objects that are detected from a single radar system. With slight modification, the apparatus and methods of the present invention can also be adapted for use with any of a number of types of radar systems, and with pairs of radar systems. For example, the present invention would be particularly useful for Doppler radar, widely used in tracking meteorological formations for obtaining weather data. In conventional “storm chaser” applications, multiple mobile Doppler radar units are mounted on moving vehicles that can then effectively track storm movement and development.
The output of a single Doppler radar unit 74 is a time sequence of images, generally color coded to show reflected energy intensity or wind speed and direction, rather than a single “blip” as with an airport radar for air traffic control. That is, the Doppler target is very large compared with the radar wavelength and beam size. A vehicle carrying Doppler radar unit 74 can thus produce a two-dimensional image of a storm. With one vehicle to the left of a storm and one to the right, two separate two-dimensional sequences of images can be produced, as shown in
Each Doppler radar unit 74, 76 actually receives three-dimensional data (range, azimuth, and elevation) that is processed to present the two-dimensional image. Typically, a single Doppler radar provides a narrow scanning beam of about 0.5 degrees, both horizontally and vertically. As this beam sweeps around at a given elevation angle through a full 360 degree rotation or a portion thereof, the Doppler radar senses return data for each azimuth angle based on range and/or intensity of the reflection. This data is then conventionally displayed on a two-dimensional screen as a 360 degree swath (or corresponding portion) at some radius from the location of radar unit 74, 76, with the data color coded according to the strength of the reflected radar energy. The image updates as the radar sweeps around in azimuth, providing a continuously changing top-down view of storm clouds or similar phenomena around the radar location. Doppler radar unit 76 on the right side of the storm provides the right-eye image; Doppler radar unit 74 on the left side of the storm provides the left-eye image.
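The sweep-to-display mapping described above can be sketched as follows. The grid size, maximum range, and the use of raw intensity values in place of a color scale are illustrative simplifications, not details from the description:

```python
import math

def plot_sweep(returns, grid_size=101, max_range=100.0):
    """Paint Doppler returns into a top-down grid around the radar.

    'returns' is a list of (azimuth_deg, range, intensity) tuples;
    the radar sits at the grid center, and azimuth is measured
    clockwise from north. Intensity is stored directly here; a real
    display would map it to a color scale.
    """
    grid = [[0.0] * grid_size for _ in range(grid_size)]
    center = grid_size // 2
    scale = center / max_range  # grid cells per unit of range
    for az_deg, rng, intensity in returns:
        if rng > max_range:
            continue  # beyond the displayed swath
        az = math.radians(az_deg)
        col = center + int(round(rng * scale * math.sin(az)))
        row = center - int(round(rng * scale * math.cos(az)))
        grid[row][col] = max(grid[row][col], intensity)
    return grid
```

Re-running this for each completed sweep yields the continuously updating top-down view.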
In the conventional Doppler mode, the displayed data can be coded to show wind speed and direction. Neither of these top-down viewing modes is particularly useful for stereo viewing, however, because nearer cloud formations simply appear to be taller, due to the greater disparity of closer objects.
Unlike the air traffic radar of the
Referring to
The above describes how a single two-dimensional image is rendered using a single Doppler radar system, employing image processing software and techniques well known in the art. In like manner to the method described for conventional radar scanning, a stereoscopic image can also be formed using Doppler radar. Here, however, two Doppler radar units 74 and 76, as shown in
Two Doppler radar units 74 and 76, after standard composite processing, can provide a pair of two-dimensional images of a storm, as viewed from the side. Any horizontal disparity may need to be adjusted to get a comfortably fusible image, using techniques well known in the imaging arts. This disparity may vary from event to event, depending upon the location of radar units 74 and 76 and their distance from the storm. The level of adjustment necessary would depend, in part, on viewer preference and comfort.
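One simple form of such a disparity adjustment, shifting one eye's image laterally before display, can be sketched as follows; real systems might also rectify or scale the images, and the function name is illustrative:

```python
def shift_horizontal(image, shift, fill=0):
    """Shift a row-major 2-D image laterally by 'shift' pixels
    (positive = right), filling vacated pixels with 'fill'.

    Shifting one eye's image relative to the other changes the overall
    disparity of the stereo pair, moving the fused scene nearer to or
    farther from the viewer until it is comfortably fusible.
    """
    shifted = []
    for row in image:
        w = len(row)
        if shift >= 0:
            shifted.append([fill] * min(shift, w) + row[:max(w - shift, 0)])
        else:
            s = -shift
            shifted.append(row[min(s, w):] + [fill] * min(s, w))
    return shifted
```

In practice the amount of shift would be tuned per event, consistent with the viewer-preference note above.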
Referring to
The method for obtaining a stereoscopic image using the arrangement of Doppler radar units 74, 76 as shown in
By using the three-dimensional data from a given radar system or pair of systems and creating a projection of that data onto a two-dimensional reference plane as described herein, a two-dimensional image from a specified viewpoint can be generated. By manipulating a reference plane and generating a series of two-dimensional projected images, an airborne or land-based object or meteorological formation can be viewed from a preferred direction.
The invention has been described with reference to particular embodiments. However, it will be appreciated that variations and modifications can be effected by a person of ordinary skill in the art without departing from the scope of the invention. For example, alternative embodiments may use different types of display apparatus, including dual CRT monitors, LC displays, OLED displays, and others. The method of the present invention could be adapted for any number of radar types, both commercial and military, including, for example, Airborne Warning And Control System (AWACS) and Joint Surveillance Target Attack Radar System (JSTARS) apparatus.
Thus, what is provided is a display apparatus and method for stereoscopic presentation of position-sensing data.
Number | Name | Date | Kind |
---|---|---|---|
2434897 | Ayres | Jan 1948 | A |
2949055 | Blackstone | Aug 1960 | A |
3025514 | Alexander et al. | Mar 1962 | A |
3044058 | Harris | Jul 1962 | A |
3072902 | Bernstein et al. | Jan 1963 | A |
3200398 | Witt | Aug 1965 | A |
3213757 | Cardwell, Jr. | Oct 1965 | A |
3555505 | Srogi | Jan 1971 | A |
3636330 | Holeman et al. | Jan 1972 | A |
3737902 | O'Hagan et al. | Jun 1973 | A |
4027307 | Litchford | May 1977 | A |
4158841 | Wuchner et al. | Jun 1979 | A |
4602336 | Brown | Jul 1986 | A |
4805015 | Copeland | Feb 1989 | A |
4872051 | Dye | Oct 1989 | A |
4940987 | Frederick | Jul 1990 | A |
5128874 | Bhanu et al. | Jul 1992 | A |
5198819 | Susnjara | Mar 1993 | A |
5202690 | Frederick | Apr 1993 | A |
5402129 | Gellner et al. | Mar 1995 | A |
5619264 | Yoshimura et al. | Apr 1997 | A |
5647016 | Takeyama | Jul 1997 | A |
5659318 | Madsen et al. | Aug 1997 | A |
5692062 | Lareau et al. | Nov 1997 | A |
5724125 | Ames | Mar 1998 | A |
5825540 | Gold et al. | Oct 1998 | A |
5861846 | Minter | Jan 1999 | A |
6018307 | Wakayama et al. | Jan 2000 | A |
6055477 | McBurney et al. | Apr 2000 | A |
6119067 | Kikuchi | Sep 2000 | A |
6181271 | Hosaka et al. | Jan 2001 | B1 |
6186006 | Schmitz et al. | Feb 2001 | B1 |
6198426 | Tamatsu et al. | Mar 2001 | B1 |
6198428 | Sekine | Mar 2001 | B1 |
6208318 | Anderson et al. | Mar 2001 | B1 |
6225891 | Lyons et al. | May 2001 | B1 |
6333757 | Faris | Dec 2001 | B1 |
6483453 | Oey et al. | Nov 2002 | B2 |
6535158 | Wilkerson et al. | Mar 2003 | B2 |
6597305 | Szeto et al. | Jul 2003 | B2 |
6690451 | Schubert | Feb 2004 | B1 |
6750806 | Fischer | Jun 2004 | B2 |
6755532 | Cobb | Jun 2004 | B1 |
6809679 | LaFrey et al. | Oct 2004 | B2 |
6819265 | Jamieson et al. | Nov 2004 | B2 |
6834961 | Cobb et al. | Dec 2004 | B1 |
6909381 | Kahn | Jun 2005 | B2 |
7034742 | Cong et al. | Apr 2006 | B2 |
20010014172 | Baba et al. | Aug 2001 | A1 |
20020113865 | Yano et al. | Aug 2002 | A1 |
20020149585 | Kacyra et al. | Oct 2002 | A1 |
20020180631 | Alon | Dec 2002 | A1 |
20030004613 | Hahn et al. | Jan 2003 | A1 |
20030006930 | Lodwig et al. | Jan 2003 | A1 |
20030160862 | Charlier et al. | Aug 2003 | A1 |
20030160866 | Hori et al. | Aug 2003 | A1 |
20040156400 | Caplan et al. | Aug 2004 | A1 |
20040247175 | Takano et al. | Dec 2004 | A1 |
20050137477 | Kockro | Jun 2005 | A1 |
20050151839 | Ito et al. | Jul 2005 | A1 |
20050200515 | Cherniakov | Sep 2005 | A1 |
20050206551 | Komiak et al. | Sep 2005 | A1 |
20060103572 | DeAgro | May 2006 | A1 |
20060203335 | Martin et al. | Sep 2006 | A1 |
20060239539 | Kochi et al. | Oct 2006 | A1 |
Number | Date | Country | |
---|---|---|---|
20060220953 A1 | Oct 2006 | US |