1. Statement of the Technical Field
The inventive arrangements concern techniques for enhancing the visualization of point cloud data, and more particularly for visualizing target elements residing within natural scenes.
2. Description of the Related Art
One problem that frequently arises with imaging systems is that targets may be partially obscured by other objects which prevent the sensor from properly illuminating and imaging the target. For example, in the case of a conventional optical type imaging system, targets can be occluded by foliage or camouflage netting, thereby limiting the ability of a system to properly image the target. Still, it will be appreciated that objects that occlude a target are often somewhat porous. Foliage and camouflage netting are good examples of such porous occluders because they often include some openings through which light can pass.
It is known in the art that objects hidden behind porous occluders can be detected and recognized with the use of proper techniques. It will be appreciated that any instantaneous view of a target through an occluder will include only a fraction of the target's surface. This fractional area will be comprised of the fragments of the target which are visible through the porous areas of the occluder. The fragments of the target that are visible through such porous areas will vary depending on the particular location of the imaging sensor. However, by collecting data from several different sensor locations, an aggregation of data can be obtained. In many cases, the aggregation of the data can then be analyzed to reconstruct a recognizable image of the target. Usually this involves a registration process by which a sequence of image frames for a specific target, taken from different sensor poses, is corrected so that a single composite image can be constructed from the sequence. The registration process aligns 3D point clouds from multiple scenes (frames) so that the observable fragments of the target represented by the 3D point cloud are combined into a useful image.
In order to reconstruct an image of an occluded object, it is known to utilize a three-dimensional (3D) type sensing system. One example of a 3D type sensing system is a Light Detection And Ranging (LIDAR) system. LIDAR type 3D sensing systems record multiple range echoes from a single pulse of laser light to generate an image frame. Accordingly, each image frame of LIDAR data will be comprised of a collection of points in three dimensions (3D point cloud) which correspond to the multiple range echoes within the sensor aperture. These points are sometimes referred to as “voxels”, which represent values on a regular grid in three-dimensional space. Voxels used in 3D imaging are analogous to pixels used in the context of 2D imaging devices. These frames can be processed to reconstruct an image of a target as described above. In this regard, it should be understood that each point in the 3D point cloud has individual x, y and z values, representing the actual surface within the scene in 3D.
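By way of a non-limiting illustration, the voxel concept can be sketched in a few lines of Python. The 0.25 meter grid spacing below is an assumed value chosen for illustration only:

    import numpy as np

    def voxelize(points, voxel_size=0.25):
        """Quantize an (N, 3) point cloud onto a regular 3D grid and return
        the set of occupied voxel indices (the 3D analog of lit pixels)."""
        indices = np.floor(points / voxel_size).astype(int)
        return set(map(tuple, indices))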
Notwithstanding the many advantages associated with 3D type sensing systems as described herein, the resulting point-cloud data can be difficult to interpret. To the human eye, the raw point cloud data can appear as an amorphous and uninformative collection of points on a three-dimensional coordinate system. Color maps have been used to help visualize point cloud data. For example, a color map can be used to selectively vary a color of each point in a 3D point cloud in accordance with a predefined variable, such as altitude. In such systems, variations in color are used to signify points at different heights or altitudes above ground level. Notwithstanding the use of such conventional color maps, 3D point cloud data has remained difficult to interpret.
The invention concerns a method for providing a color representation of three-dimensional range data for improved visualization and interpretation. The method includes displaying a set of data points including the three-dimensional range data using a color space defined by hue, saturation, and intensity. The method also includes selectively determining respective values of the hue, saturation, and intensity in accordance with a color map for mapping the hue, saturation, and intensity to an altitude coordinate of the three-dimensional range data. The color map is defined so that values for the saturation and the intensity have a first peak value at a first predetermined altitude approximately corresponding to an upper height limit of a predetermined target height range. According to one aspect of the invention, the color map is selected so that values defined for the saturation and the intensity have a second peak value at a second predetermined altitude corresponding to an approximate anticipated height of tree tops within a scene.
The color map can be selected to have a larger value variation in at least one of the hue, saturation, and intensity for each incremental change of altitude within a first range of altitudes in the predetermined target height range as compared to a second range of altitudes outside of the predetermined target height range. For example, the color map can be selected so that at least one of the saturation and the intensity varies in accordance with a non-monotonic function over a predetermined range of altitudes extending above the predetermined target height range. The method can include selecting the non-monotonic function to be a periodic function. For example, the non-monotonic function can be chosen to be a sinusoidal function.
The method can further include selecting the color map to provide the hue, saturation, and intensity to produce a brown hue at a ground level approximately corresponding with a surface of a terrain within a scene, a yellow hue at an upper height limit of a target height range, and a green hue at the second predetermined altitude corresponding to an approximate anticipated height of tree tops within the scene. The method can further include selecting the color map to provide a continuous transition that varies incrementally with altitude, from the brown hue, to the yellow hue, and to the green hue at altitudes between the ground level and the second predetermined altitude.
The method also includes dividing a volume defined by the three-dimensional range data of the 3D point cloud into a plurality of sub-volumes, each aligned with a defined portion of the surface of the terrain. The three-dimensional range data is used to define the ground level for each of the plurality of sub-volumes.
The invention will now be described more fully hereinafter with reference to accompanying drawings, in which illustrative embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. For example, the present invention can be embodied as a method, a data processing system, or a computer program product. Accordingly, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or a hardware/software embodiment.
A 3D imaging system generates one or more frames of 3D point cloud data. One example of such a 3D imaging system is a conventional LIDAR imaging system. In general, such LIDAR systems use a high-energy laser, an optical detector, and timing circuitry to determine the distance to a target. In a conventional LIDAR system one or more laser pulses are used to illuminate a scene. Each pulse triggers a timing circuit that operates in conjunction with the detector array. In general, the system measures the time for each pixel of a pulse of light to transit a round-trip path from the laser to the target and back to the detector array. The reflected light from a target is detected in the detector array and its round-trip travel time is measured to determine the distance to a point on the target. The calculated range or distance information is obtained for a multitude of points comprising the target, thereby creating a 3D point cloud. The 3D point cloud can be used to render the 3D shape of an object.
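The timing measurement reduces to the familiar time-of-flight relation, range = c·t/2, where c is the speed of light and t is the measured round-trip time. A minimal sketch:

    C = 299_792_458.0  # speed of light, meters per second

    def range_from_round_trip(t_seconds):
        """Distance from the sensor to a reflecting point, given the
        measured round-trip travel time of the laser pulse."""
        return C * t_seconds / 2.0

    # Example: a 2 microsecond round trip corresponds to roughly 300 meters.
    print(range_from_round_trip(2.0e-6))  # ~299.79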
In
It should be appreciated that in many instances, the occluding material 106 will be somewhat porous in nature. Consequently, the sensors 102-i, 102-j will be able to detect fragments of the target which are visible through the porous areas of the occluding material. The fragments of the target that are visible through such porous areas will vary depending on the particular location of the sensor. By collecting data from several different sensor poses, an aggregation of data can be obtained. Typically, aggregation of the data occurs by means of a registration process. The registration process combines the data from two or more frames by correcting for variations between frames with regard to sensor rotation and position so that the data can be combined in a meaningful way. As will be appreciated by those skilled in the art, there are several different techniques that can be used to register the data. Subsequent to such registration, the aggregated 3D point cloud data from two or more frames can be analyzed in an effort to identify one or more targets.
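As a rough, non-limiting sketch of the aggregation step, the following assumes the per-frame rigid transforms (rotations and translations) have already been estimated by a registration technique of choice, such as iterative closest point; it merely applies those transforms and merges the frames:

    import numpy as np

    def aggregate_frames(frames, rotations, translations):
        """Combine registered frames into one composite 3D point cloud.

        frames       -- list of (N_k, 3) arrays of points, one per sensor pose
        rotations    -- list of (3, 3) rotation matrices from registration
        translations -- list of (3,) translation vectors from registration
        """
        registered = [pts @ R.T + t
                      for pts, R, t in zip(frames, rotations, translations)]
        return np.vstack(registered)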
3D point cloud data in frame 200 can be color coded for improved visualization. For example, a display color of each point of 3D point cloud data can be selected in accordance with an altitude or z-axis location of each point. In order to determine which specific colors are displayed for points at various z-axis coordinate locations, a color map can be used. For example, in a very simple color map, a red color could be used for all points located at a height of less than 3 meters, a green color could be used for all points located at heights between 3 meters and 5 meters, and a blue color could be used for all points located above 5 meters. A more detailed color map could use a wider range of colors which vary in accordance with smaller increments along the z-axis. Color maps are known in the art and therefore will not be described here in detail.
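The simple three-band map described above could be expressed as follows; this is a sketch of the example only, not of a practical color map:

    def simple_color_map(z_meters):
        """Three-band color map: red below 3 m, green from 3 m to 5 m,
        and blue above 5 m."""
        if z_meters < 3.0:
            return "red"
        if z_meters <= 5.0:
            return "green"
        return "blue"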
The use of a color map can be of some help in visualizing the structure represented by 3D point cloud data. However, conventional color maps are not very effective for this purpose. It is believed that their limited effectiveness can be attributed in part to the color space conventionally used to define the color map. For example, a color space based on red, green and blue (RGB color space) can display a wide range of colors, since it represents each color as a mixture of red, green and blue primaries. However, an RGB color space can, by itself, be inadequate for defining a color map that is truly useful for visualizing 3D point cloud data: although any color can be presented in RGB color space, such a color map does not provide an effective way to intuitively present color information as a function of altitude.
An improved point cloud visualization method can use a new non-linear color map defined in accordance with hue, saturation and intensity (HSI color space). Hue refers to pure color, saturation refers to the degree of color contrast, and intensity refers to color brightness. Thus, a particular color in HSI color space is uniquely represented by a set of HSI values (h, s, i), referred to as a triple. The value of h normally ranges from zero to 360° (0°≦h≦360°), while the values of s and i normally range from zero to one (0≦s≦1, 0≦i≦1). For convenience, the value of h as discussed herein shall sometimes be represented as a normalized value computed as h/360.
Significantly, HSI color space is modeled on the way that humans perceive color and can therefore be helpful when creating a color map for visualizing 3D point cloud data. It is known in the art that HSI triples can easily be transformed to other color space definitions, such as the well-known RGB color space system in which combinations of red, green, and blue “primaries” are used to represent all other colors. Accordingly, colors represented in HSI color space can easily be converted to RGB values for use in an RGB based device. Conversely, colors represented in RGB color space can be mathematically transformed to HSI color space. An example of this relationship is set forth in the table below:
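By way of illustration, a widely used sector-based conversion from HSI to RGB can be sketched as follows; this is the standard textbook formulation and is not intended to reproduce the particular values tabulated:

    import math

    def hsi_to_rgb(h, s, i):
        """Convert an HSI triple (h in degrees, s and i in [0, 1]) to
        RGB components in [0, 1] using the sector-based formulation."""
        h = h % 360.0

        def sector(hh):
            hh = math.radians(hh)
            lo = i * (1.0 - s)  # channel driven to its minimum in this sector
            hi = i * (1.0 + s * math.cos(hh) / math.cos(math.radians(60.0) - hh))
            rem = 3.0 * i - (lo + hi)
            return lo, hi, rem

        if h < 120.0:
            b, r, g = sector(h)          # blue is the minimum channel here
        elif h < 240.0:
            r, g, b = sector(h - 120.0)  # red is the minimum channel here
        else:
            g, b, r = sector(h - 240.0)  # green is the minimum channel here
        return r, g, b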
Referring now to
In
The normalized curves representing saturation and intensity also have a local peak value at the upper height limit 308 of the target range. However, the normalized curves 404 and 406 for saturation and intensity are non-monotonic, meaning that they do not steadily increase or decrease in value with increasing elevation (altitude). According to an embodiment of the invention, each of these curves can first decrease in value within a predetermined range of altitudes above the target height range 308, and then increase in value. For example, it can be observed in
Notably, the peak in the normalized curves 404, 406 for saturation and intensity causes a spotlighting effect when viewing the 3D point cloud data. Stated differently, the data points that are located at the approximate upper height limit of the target height range will have a peak saturation and intensity. The visual effect is much like shining a light on the tops of the target, thereby facilitating identification of the presence and type of target. The second peak in the saturation curve 404 at treetop level has a similar visual effect when viewing the 3D point cloud data. However, in this case, rather than a spotlight effect, the peak in saturation values at treetop level creates a visual effect that is much like that of sunlight shining on the tops of the trees. The intensity curve 406 shows a localized peak as it approaches the treetop level. The combined effect helps greatly in the visualization and interpretation of the 3D point cloud data, giving the data a more natural look.
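One non-limiting way to realize a color map with these properties is sketched below. The altitudes (ground at 0 m, target height limit at 3 m, treetop level at 15 m) and the hue anchors for brown, yellow, and green are assumed values for illustration; the dip-and-recover shape of the saturation and intensity curves 404, 406 is approximated with a half-period sinusoid:

    import numpy as np

    # Assumed illustrative altitudes (meters) and hue anchors (degrees).
    GROUND, TARGET_TOP, TREETOP = 0.0, 3.0, 15.0
    HUE_BROWN, HUE_YELLOW, HUE_GREEN = 30.0, 60.0, 120.0

    def hsi_color_map(z):
        """Map altitude z (meters) to an (h, s, i) triple: brown at ground
        level, yellow at the top of the target height range, green at
        treetop level, with saturation/intensity peaks at both the target
        top (spotlight effect) and treetop (sunlight effect)."""
        z = float(np.clip(z, GROUND, TREETOP))
        if z <= TARGET_TOP:
            f = (z - GROUND) / (TARGET_TOP - GROUND)
            h = HUE_BROWN + f * (HUE_YELLOW - HUE_BROWN)
            s = i = 0.5 + 0.5 * f  # rises to a first peak at the target top
        else:
            f = (z - TARGET_TOP) / (TREETOP - TARGET_TOP)
            h = HUE_YELLOW + f * (HUE_GREEN - HUE_YELLOW)
            s = i = 1.0 - 0.5 * np.sin(np.pi * f)  # dips, then peaks at treetop
        return h, s, i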
In
Referring now to
Referring now to
The relationship between
Referring again to
The color map in
Referring again to
The color map in
By selecting the color map in
Referring once again to
Notably, the second peak in saturation and intensity curves 404, 406 occurs at treetop level 310. As shown in
In order for the color map to work effectively as described herein, it is advantageous to ensure that ground level 305 is accurately defined in each portion of the scene. This can be particularly important in scenes where the terrain is uneven or varied in elevation. If not accounted for, such variations in the ground level within a scene represented by 3D point cloud data can make visualization of targets difficult. This is particularly true where, as here, the color map is intentionally selected to create a visual metaphor for the content of the scene at various altitudes.
In order to account for variations in terrain elevation, the volume of a scene which is represented by the 3D point cloud data can be advantageously divided into a plurality of sub-volumes. This concept is illustrated in
Each column of sub-volumes 702 will be aligned with a particular portion of the surface of the terrain represented by the 3D point cloud data. According to an embodiment of the invention, a ground level 305 can be defined for each sub-volume. The ground level can be determined as the lowest altitude 3D point cloud data point within the sub-volume. For example, in the case of a LIDAR type ranging device, this will be the last return received by the ranging device within the sub-volume. By establishing a ground reference level for each sub-volume, it is possible to ensure that the color map will be properly referenced to a true ground level for that portion of the scene.
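A minimal sketch of this per-column ground estimate, assuming a 5 meter square footprint for each sub-volume column, is:

    import numpy as np

    def ground_levels(points, cell_size=5.0):
        """Estimate a local ground level for each sub-volume column as the
        lowest (last-return) point falling inside that column.

        points    -- (N, 3) array of x, y, z point cloud coordinates
        cell_size -- assumed horizontal footprint of each column, in meters
        """
        cells = np.floor(points[:, :2] / cell_size).astype(int)
        ground = {}
        for cell, z in zip(map(tuple, cells), points[:, 2]):
            ground[cell] = min(z, ground.get(cell, np.inf))
        return ground  # maps (column x, column y) -> local ground altitude

Altitudes supplied to the color map can then be measured relative to the local ground level of the column containing each point.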
In light of the foregoing description of the invention, it should be recognized that the present invention can be realized in hardware, software, or a combination of hardware and software. A method in accordance with the inventive arrangements can be realized in a centralized fashion in one processing system, or in a distributed fashion where different elements are spread across several interconnected systems. Any kind of computer system, or other apparatus adapted for carrying out the methods described herein, is suited. A typical combination of hardware and software could be a general purpose computer processor or digital signal processor with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which, when loaded in a computer system, is able to carry out these methods. Computer program or application in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. Additionally, the description above is intended by way of example only and is not intended to limit the present invention in any way, except as set forth in the following claims.