Objects can be obscured from view in a variety of ways. For example, on a battlefield, smoke, chemical clouds (e.g., poisonous gas), and fog can obscure enemy combatants, making navigation and battlefield assessment difficult. Similarly, dust and fog can obscure a landing zone for an aircraft such as a helicopter. Visually removing these obscurants can create safer operating conditions in any of these situations.
Systems and methods presented herein provide for imaging an object obscured by a material. In one embodiment, a system for imaging obscured objects includes a volumetric data source comprising three dimensional (3D) image data of a scene and a two dimensional (2D) image source comprising 2D image data of the scene. The system also includes a processor operable to process the 2D data and the 3D data to generate a model of a material obscuring an object in the scene from sensors providing the 2D data and the 3D data. The processor is further operable to refine the model with detection data of the material from the volumetric data source, to detect the material obscuring the object based on the refined model, to generate an image of the scene, and to remove data pertaining to the obscuring material from the image to reveal the object in the image.
The various embodiments disclosed herein may be implemented in a variety of ways as a matter of design choice. For example, some embodiments herein are implemented in hardware whereas other embodiments may include processes that are operable to implement and/or operate the hardware. Other exemplary embodiments, including software and firmware, are described below.
Some embodiments of the present invention are now described, by way of example only, and with reference to the accompanying drawings. The same reference number represents the same element or the same type of element on all drawings.
The figures and the following description illustrate specific exemplary embodiments of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within the scope of the invention. Furthermore, any examples described herein are intended to aid in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited examples and conditions. As a result, the invention is not limited to the specific embodiments or examples described below.
The image processor 110 can be configured in a variety of ways as a matter of design choice. For example, the image processor 110 may be a remote computer system receiving data from the volumetric data source 102 and the 2D data source 104 through a communication network 101 such as the Internet. Alternatively or additionally, the image processor 110 may be co-located with the volumetric sensor 103 and the 2D sensor 105 to provide on-scene image processing to reveal the object 107. Accordingly, the image processor 110 is any system, device, software, or combination thereof operable to process 3D imaging data with 2D image data to generate an image of a scene and reveal an object in that image that is obscured from view. In other words, the image processor 110 unobscures an object in an image. Other exemplary embodiments are shown and described below in greater detail.
As used herein, imaging data is generally data that can be used to generate an image whereas image data generally regards digital data in the form of an image. However, the invention is not intended to be limited to particular types or forms of imaging/image data, as other forms of data may be used, including thermal and chemical data. In this regard, the image processor 110 is operable to process multiple forms of data (e.g., data fusion) to render an image of the object 107 unobscured by the material 106.
In
Again, the processor 110 uses the data from the data sources 102 and 104 to generate a model of the material 106 obscuring the object 107. This model is used to determine the shape, nature, and/or type of the material 106 obscuring the object 107 so that the material 106 can be effectively removed from an image generated by the processor 110. The processor 110 refines the model, in the process element 152, based on detection data of the material 106 obtained from the volumetric data source 102. For example, a lidar system may be operable to generate a volumetric representation of the material 106 which includes a representation of the surface area of the material 106. Then, the lidar system and/or another detection system may also gather data pertaining to the makeup of the material (e.g., chemical, biological, etc.). This surface area data and/or material composition data may be fed to the processor 110 via the volumetric data source 102 to refine the model and determine the material's physical and optical characteristics so that the material can be detected, in the process element 153.
If the material cannot be readily detected in the process element 153, then the processor 110 further refines the model with additional data about the material 106, in the process element 152. Otherwise, when the processor 110 detects the material, the processor 110 generates an image of the scene, in the process element 154, using for example the 2D data from the 2D image source 104. The processor 110 then removes the data pertaining to the material 106 from the image to reveal the object 107 in the image, in the process element 155. In other words, the processor 110 in essence erases the material 106 from the generated image so as to reveal a clearer view of the object 107 in the image.
The processor 110 can operate in real time so as to produce a video of the scene. For example, the processor 110 may generate a number of images with each image operating as a “frame” in a video stream. In this regard, the processor 110 may continually refine the model and “erase” data pertaining to the obscuring material 106 in the scene so as to show any solid objects, moving or not, in the scene and thus in the generated video. Other exemplary embodiments are shown in greater detail below that point out how an object is revealed in a scene and how the image is subsequently rendered.
Generally, the processor 110, using 3D sensing data and 2D sensing data, produces a parametrically rendered 3D “world model”. This processing is particularly relevant for lidar and/or radar data combined with camera imagery collected through a volume (i.e., a 3D scene) having degraded light-propagation attributes. Utilizing information from multiple data streams, surface model properties may be accumulated and used within the 3D world model to provide subsequent image renderings with illumination, orientation, volumetric parameters, and/or other attributes that differ from the physical conditions during which the source data was collected.
Lidar return data can be collected by a receiver and a range response of the lidar system can be computed. The range response G(R) of the lidar system (illustrated in
The measured sensor signal (e.g., retained by the volumetric data source 102) may then be compensated for the system range response G(R). The resulting signal series (element 202 of
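For illustration, a minimal sketch of this compensation step, assuming the range response G(R) is available as a callable and using an idealized 1/R² response for the example (both are assumptions, not measured system values):

```python
import numpy as np

def compensate_range_response(signal, ranges, g_of_r, floor=1e-6):
    """Divide the raw signal series by the system range response G(R).

    signal  : raw detector samples, one per range bin
    ranges  : range (m) associated with each sample
    g_of_r  : callable returning the system range response at a given range
    floor   : lower clamp to avoid dividing by near-zero response values
    """
    g = np.maximum(g_of_r(ranges), floor)
    return signal / g

# Example with an idealized monostatic 1/R^2 response (an assumption, not the
# patent's measured response):
ranges = np.linspace(10.0, 500.0, 50)
raw = np.exp(-ranges / 200.0) / ranges**2      # synthetic raw return
corrected = compensate_range_response(raw, ranges, lambda r: 1.0 / r**2)
```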
Hard targets that are much larger than the laser beam generally result in a “shadow” in the signal series since laser light from later times than the round trip time to the hard target cannot be generated by scattering from behind the target. In other instances, the laser beam may be larger than a hard target (e.g., telephone lines, power cables, guide wires, etc.). In these instances, the distributed scatter signal may still be observable at distances that are further than the hard target.
In general, round-trip range-dependent optical transmission is dependent on attenuation processes along the laser path. This can be written as
T(R)=exp(−2∫0R [aabs(r)+ascat(r)+aobs(r)] dr),
where aabs(r) generally refers to the intrinsic background absorption, ascat(r) generally refers to the scattering related attenuation that also results in backscatter to the receiver, and aobs(r) generally refers to the obstruction related attenuation of the laser beam for the cases where the laser beam is partially obstructed by smaller hard targets.
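A small numerical sketch of this round-trip transmission, assuming range-gridded attenuation profiles and simple trapezoidal integration; the profile values below are synthetic placeholders rather than measured data:

```python
import numpy as np

def round_trip_transmission(r, a_abs, a_scat, a_obs):
    """T(R) = exp(-2 * integral_0^R [a_abs + a_scat + a_obs] dr) on a range grid."""
    total = a_abs + a_scat + a_obs                      # total attenuation per meter
    # cumulative trapezoidal integral from 0 out to each range sample
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (total[1:] + total[:-1]) * np.diff(r))))
    return np.exp(-2.0 * integral)

r = np.linspace(0.0, 300.0, 301)
T = round_trip_transmission(
    r,
    a_abs=np.full_like(r, 1e-4),                        # background absorption
    a_scat=2e-3 * np.exp(-((r - 150.0) / 40.0)**2),     # dust-cloud scattering
    a_obs=np.zeros_like(r))                             # no partial hard obstructions
```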
Since no light propagates past the surface object, there are no signals from longer ranges. Given the range dependent return signal, the optical attenuation can be calculated over a range out to the surface as exemplarily illustrated in
The range dependence on the received signal should have a component that is proportional to T(R) to account for both the outgoing and incoming light along the beam path. The range dependence should also be proportional to the ascat(r) since the proportion of light backscattered towards the receiver depends on the density of scatterers and their optical cross-sections. Provided that some information is acquired regarding the background absorption through the air, the signal series may be used to extract the range dependent scattering attenuation ascat(r).
Hard target returns sensed in the lidar data may be used to identify the 3D positions of surfaces. And, given the volumetric attenuation distribution obtained with the lidar system, 2D image data collected from a camera (e.g.,
The 3D world model thus accumulates surfaces and surface properties and is updated, for example, to include emission properties of surfaces based on the registered corrected 2D imagery as well as the return intensities associated with the point clouds. If the 2D imagery occurs in a long-wavelength infrared (LWIR) band, the surface emissions may be primarily thermal in nature, and an angular emission pattern and surface effective temperature may be extrapolated from the data.
In the case of visible radiation (e.g., where emissions are due to solar illumination), surface emissions may be used along with the known information on solar position, calculated shadows from structures, and atmospheric scattering distributions to extract parameters for a Bidirectional Reflectance Distribution Function (BRDF) model for detected surfaces. These parameters are held within the world model and used for image rendering.
Since image rendering is based on the measured surface properties after corrections for atmospheric distortions or attenuation, renderings can be simulated for conditions differing from the conditions when the data was taken. For example, the vantage point for the rendered image may be different than the vantage point of any of the sensors used to collect the data. Illumination patterns or colors may be chosen to provide surface shading that best displays contrast for important observable features. And, thermal emissions in the IR may be simulated using arbitrary temperatures. The world model may be continuously updated using data accumulated from a sequence of orientations and positions so that the resulting renderings provide higher quality images than could be produced from single frames of data.
The 2D imagery can include multiple images at multiple wavelengths and can even include polarization data. The 3D imagery can include Angle-Range Point Clouds from lidar, radar data (e.g., millimeter wave), or combinations thereof. Accordingly, the invention is not intended to be limited to any particular type of data. General processing nodes are shown and described in
The processing node 403 in
In one exemplary embodiment, the collection model may be a collection of accessible data that includes the vector direction and solid angle Field of View Information (FOVI) for each pixel from a camera and each lidar return from a lidar system.
relative to the platform 425's reference frame at a time t. The kth data measurement (either a lidar waveform from a lidar system or a pixel from an imaging system) is in a direction
relative to the overall sensor direction at the time t.
Generally, in an arbitrary world reference system, the jth sensor's time-dependent position and orientation may be described by a vector Pj(t) and a rotation matrix Rj(t). With these parameters described, the direction component of each FOVI (Field of View Information) data element associated with the kth data collection from the jth sensor 426 is provided along a path defined by a set of positions:
where the time tk is the time associated with the kth data collection and η is the length along the path, as illustrated in
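A hedged sketch of evaluating positions along one such path, assuming the common construction x(η) = Pj(tk) + η·Rj(tk)·dk for a unit measurement direction dk expressed in the platform frame; this is an interpretation of the description rather than the exact equation:

```python
import numpy as np

def fovi_path_points(P_j, R_j, d_k, etas):
    """Return points along the kth collection direction of the jth sensor.

    P_j  : (3,) platform/sensor position vector at time t_k (world frame)
    R_j  : (3, 3) rotation matrix from platform frame to world frame at t_k
    d_k  : (3,) measurement direction of the kth collection in the platform frame
    etas : (N,) path lengths at which to evaluate positions
    """
    d_world = R_j @ (d_k / np.linalg.norm(d_k))
    return P_j[None, :] + np.asarray(etas)[:, None] * d_world[None, :]

# Example: a beam pointed 10 degrees below the platform's forward axis.
theta = np.radians(10.0)
points = fovi_path_points(P_j=np.array([100.0, 50.0, 30.0]),
                          R_j=np.eye(3),
                          d_k=np.array([np.cos(theta), 0.0, -np.sin(theta)]),
                          etas=np.linspace(0.0, 200.0, 5))
```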
If the sensor system is mounted on a gimbal, the data needed to formulate
at time t is obtained from the gimbal control system. For a camera, each of the vectors
may actually be constant in time, whereas a lidar system may provide angular scanner information so that the directional vectors
may be determined for waveforms collected from each laser shot. The platform 425's position vector Pj(t) and rotation matrix Rj(t) may be determined from the platform 425's navigation systems (e.g., with respect to land 427). Since data is collected from multiple systems (e.g., the platform 425's navigation systems, gimbals, lidar systems, etc.), interpolation or extrapolation of the data streams may be necessary so that the vector and rotation parameters may be determined at the time associated with a data collection. Time synchronization of data extracted from multiple systems is important where real time analysis is performed. For example, if navigational updates are less frequent than processing of lidar waveform returns, extrapolation of navigational parameters may require the use of Kalman filters or other methods of physical modeling so that sufficiently accurate FOVI directional information may be obtained.
In practice, the data collection model also manages sensor specific calibration or processing functions. For imaging sensors, this may include sensor non-uniformity correction (NUC), in which individual pixel responses are measured and used to produce calibrated signals having the same response to optical fluence for each pixel. Many other well-known methods of processing imaging arrays may be used that require calibration that is specific to the sensor. For example, pixels may be categorized based on their performance, so that “dead pixels” may be eliminated from processing.
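A minimal sketch of such a non-uniformity correction, assuming a two-point gain/offset calibration per pixel and a boolean dead-pixel mask; the calibration arrays are illustrative placeholders:

```python
import numpy as np

def apply_nuc(frame, gain, offset, dead_mask):
    """Return a calibrated frame with dead pixels set to NaN for exclusion."""
    corrected = gain * frame + offset
    corrected[dead_mask] = np.nan          # flag "dead pixels" out of processing
    return corrected

rng = np.random.default_rng(0)
raw = rng.normal(100.0, 5.0, size=(4, 4))
gain = rng.normal(1.0, 0.02, size=(4, 4))       # per-pixel responsivity correction
offset = rng.normal(0.0, 1.0, size=(4, 4))      # per-pixel dark offset
dead = np.zeros((4, 4), dtype=bool)
dead[2, 3] = True                               # one known bad pixel
calibrated = apply_nuc(raw, gain, offset, dead)
```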
For 3D sensors that rely on temporal waveforms from active systems for extracting range coordinates (such as lidar), calibration parameters related to data collection may include timing offsets and delays, waveform data sample rates, range response functions, and baseline correction data. Timing offsets and delays include timing data for outgoing pulses and relative system delays between one or more channels measuring outgoing signals and channels measuring incoming signals. These offsets and delays, along with waveform data sample rates, are used to convert signal delays into range distances.
Active systems generally have a range response which may be defined as a range dependent ratio of incoming signal to outgoing signal in the absence of any attenuation. For simple ideal monostatic lidar systems, a range response may have the form 1/r², where r is distance away from the sensor. However, in practice the shape of this response may be more complex due to vignetting and other optical effects. The signal return strength for active systems may be calibrated by dividing a range dependent return signal by the range dependent response. Thus, the data collection model uses the context of the specific measurement hardware to process raw data and produce data that has been tagged with orientation, position, or time data so that it can be fused into an accumulated world model.
In processing node 402, the processor 110 updates the collection model. During the course of operation of the processor 110, parameters associated with the interpretation of collected 2D image data and 3D volumetric data may need to be updated with external configuration data (processing node 401). One example includes using global positioning system (GPS) and inertial measurement unit (IMU) inputs to update the position of one or more sensor elements providing the data to the processor 110 (e.g., the sensors 103 and 105 through their respective data sources 102 and 104). Additionally, the processing node 402 may include adjusting calibration parameters to compensate for varying source parameters in active sensing systems.
To illustrate, the collection model may be updated as the sensor 426 moves from a point A (at time tA) to a point B (at time tB). In this case the sensor position has changed, and the sensor pointing direction may have changed as well. When the sensor is at point A, the collection model includes the sensor position and its related sensor pointing information. When the sensor is at point B, the collection model is given by the position of the sensor and its related pointing information. Updating the collection model means replacing the previous sensor position and pointing information from time tA with the new sensor position and pointing information at time tB.
As discussed earlier, the data collection model may be updated simply by updating the platform 425 position vector, platform 425 orientation, sensor 426 orientation, and relative sensor data collection directions for each subsequent data collection. Additionally, monitoring of sensor health may be used to identify imaging pixels with poor performance which should be eliminated from analysis. For lidar systems, laser energies from each pulse may be individually measured so that return waveforms may be scaled relative to the laser signal energy used.
To illustrate for a moving platform 425 (e.g., a helicopter or a plane), continued updates to the collection model, a volume propagation model (processing node 407), a hard surface position model (processing node 408), and potentially even viewing parameters (processing node 413) depend on the position and orientation of the platform 425. GPS and/or IMU data may be collected and time tagged for association with collected 2D image data and volumetric data. Additional dynamic data that may affect the interpretation of the accumulated data can be obtained from other external sources to support data processing.
Updates to the surface and volumetric models come from the incoming data. As new data arrives, those data are used to augment/update the existing volume or surface models. Data that are determined to be part of the hard surface are used to update the surface model, which is typically treated as a static system. As an example, a surface model may be developed from a set of 3D points. When a new data point is collected it is associated with a set of existing points.
The new 3D point is added to the hard surface via an updating step, which may be a simple replacement of old data with new data or a weighted blending of the new data point with the previously collected data in that region. In this case the data could be the positional coordinates of the points in the region, or the average pixel intensities for those points. In contrast, the volumetric representation of the obscurant is treated as something that evolves over time. As new data are collected, the associated volumetric data (density, temperature, or other scalar/vector metrics) are updated to represent the current state of the evolving system. For a given point [X, Y, Z] the collection measured at time tB may be different from its value at time tA. In this case, the data would be used to modify the volumetric field, typically by replacing the scalar value at time tA with the value at time tB.
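A brief sketch of the two update rules described above, assuming a simple fixed blending weight for the static surface data and straight replacement for the evolving volumetric field; the weight and array sizes are assumptions:

```python
import numpy as np

def blend_surface(existing, new, weight=0.25):
    """Blend a new surface observation (position or intensity) into the model."""
    return (1.0 - weight) * existing + weight * new

def update_voxel(volume, index, new_value):
    """Replace the scalar value at a voxel with the latest measurement."""
    volume[index] = new_value
    return volume

surface_point = np.array([10.0, 4.0, 1.2])            # previously accumulated position
surface_point = blend_surface(surface_point, np.array([10.1, 4.0, 1.1]))

density = np.zeros((8, 8, 8))                          # volumetric obscurant field
density = update_voxel(density, (3, 4, 2), 0.7)        # value measured at time tB
```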
In processing node 405, the processor 110 collects 2D image data from a data stream that maps at least one scalar to each of a series of angular coordinates. The angular coordinates may be specified by the previously described
vectors. This 2D image data may be camera data but could also include intensity data collected from an angular scanning detector. The 2D image data may also include polarization or spectral information associated with angular coordinates. The 2D image data may include data from multiple 2D image data sources, including historical image data (e.g., an image of the scene taken previously from a particular known perspective/vantage point). These data sources could originate from multiple devices configured at separate locations and sensitized to different wavelength bands, different polarizations, or other physical phenomena (e.g., charged particle flux, pressure, temperature, etc.). The 2D image data may be collected in multiple frames at a series of times (e.g., video) so as to provide a video stream of the scene without the obscurant and/or to extract individual frames for rendering images without the obscurant.
Scalar values associated with angular coordinates are often related to electromagnetic energy received at one or more detectors, but the means for collecting that energy may include any number of architectures associated with different phenomenology. However, there is no requirement that received radiation be electromagnetic in nature. Given the appropriate receiver, a 2D image could be formed using acoustic energy, charged particles, or the like. The 2D image data processing node 405 may utilize the data collection model to produce calibrated data that is associated with FOVI specified in a world reference frame.
In processing node 406, the processor 110 collects volumetric data from a data stream that maps a depth sequence to each of a series of angular coordinates. A depth sequence is a mapping of one or more parameters to a series of distances from a specified position. Examples include time series data collected from a lidar system, in which volumetric back scatter along a laser beam path results in a received time dependent signal. By interpreting the delay time for each signal in the time series as the round-trip time of flight for an optical pulse, the time series is modified for use as a distance dependent signal.
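A small sketch of this time-of-flight interpretation (r = c·t/2), assuming a uniform digitizer sample rate; the rate chosen here is illustrative:

```python
import numpy as np

C = 299_792_458.0                      # speed of light, m/s

def delays_to_ranges(delays_s):
    """Convert round-trip signal delays (seconds) to one-way ranges (meters)."""
    return 0.5 * C * np.asarray(delays_s)

sample_rate = 1e9                      # 1 GS/s digitizer (assumed)
delays = np.arange(2048) / sample_rate
ranges = delays_to_ranges(delays)      # range bin for each waveform sample
```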
Following similar principles, data collected from any active imaging system, such as a radar or a sonar system, may also be used as the volumetric data in the volumetric data source 102. Volumetric data may also be collected through tomographic techniques. In such a case, the organization of the volumetric data into angular and depth coordinates may be the result of computational processing on detector data. Other techniques may include imaging systems such as plenoptic (light field) cameras or those systems that utilize triangulation from multiple perspectives to generate 3D data.
In processing node 411, the processor 110 initiates the world model by including geometric descriptions of surfaces of solid objects. These descriptions may be in the form of multiple primitive shape descriptions, point cloud clusters, spline surfaces, or other parametric representations. The surfaces include at least one other non-geometric property description, including parameters such as color, emissivity, or reflectivity. More advanced descriptions may include parameters for spectral or polarization related reflectivity or emission models. Prior to data collection, the world model may be initiated to include an initial ground representation if a priori data (e.g., Digital Terrain Elevation Data, or “DTED”) is available. Previously collected data on surfaces may be included in the initial world model and initial data may be tagged with confidence factors related to the confidence as to whether that data is still valid.
In processing node 407, the processor 110 collects volume data to update a volume propagation model. For example, a lidar return signal sequence includes backscatter signals from particulates within the laser beam path. The signal sequence may also include a signal corresponding to the reflections from a hard target that blocks the continued propagation of the laser beam. Analysis of the back-scattered signal can be used to calculate a spatially dependent optical attenuation.
For example, the received backscattered signal from a monostatic lidar system may have the functional form ΦV(r)=G(r)βπ(r)exp(−2∫0r αext(ξ)dξ), where G(r) is a range dependent calibration parameter specific to the optical system, r is the range or distance away from the receiver, βπ(r) is the range dependent volumetric backscatter coefficient (e.g., units of m−1sr−1), and αext(r) is the range dependent total optical attenuation within the sampled volume.
With this basic propagation model, estimates of the parameters βπ and αext within the interrogated volume are maintained over time. Generally, βπ(r) and αext(r) are to be extracted from one or more lidar signal returns along different laser paths. If either αext or βπ is known, the other is readily computed. However, extraction of both parameters from a single lidar waveform generally requires additional information. For example, in many media αext and βπ may be related by a power law relationship. If it is assumed that βπ(r)=Bαext(r)^k, where B and k are constants, both αext and βπ may be solved for as a function of range along each laser propagation path. Prior to data collection, the parameters B and k may be chosen as constant parameters based on previous phenomenological measurements.
During or after data collection, additional refinement on αext and βπ may be obtained by allowing B to have position dependence through incremental variations from its initial constant value. When lidar waveforms are obtained from a moving platform, multiple lidar beam paths may pass through the same volume of air from different angular perspectives. If the lidar measurements are taken with relatively small time delays, a constraint may be added forcing the measured αext and βπ to be equivalent from both perspectives.
To illustrate, in
The volumetric propagation model may be used at any state of its evolution to provide simulated backscatter and transmission along any path through the previously interrogated volume. Using standard volumetric interpolation algorithms (which may include some spatial filtering), spatially distributed αext and βπ measurements may be used to estimate αext and βπ as a function of distance along any specific path through the volume. This allows for the estimation of the transmission along any path and at any desired wavelength, given a proper understanding of the optical transmission model at the desired wavelength.
In processing node 409, the processor 110 updates a hard surface position model by retrieving volumetric data that is generally bounded by hard surfaces. In some cases, where hard surfaces are much smaller than illuminating sources (e.g., in active lidar or radar systems), hard surfaces may be identified within the detected bounding volume. For active detection, the hard surfaces may be represented by point clouds extracted from time series data. The processor 110 may employ algorithms that recognize point cloud clusters as individual surfaces to create surface geometry representations with a surface position model. The update may be performed by initially combining point clouds with previous point clouds before improving the surface identification steps. Alternatively or additionally, perturbations to previous surface representation may be developed from incremental sets of point clouds. In some instances, contrast in 2D image data may be used to identify 3D surface edges.
The point clouds can be obtained from lidar waveforms by associating each time in a time series with a range and identifying the range at which the signal exceeds some threshold. The threshold itself may be range dependent. Given this range, η, a hard target intercept point may be given by the previously presented equation:
A large number of points in a point cloud may be grouped together and fit to a surface using standard lidar processing techniques. Surfaces may be simple flat facets or more complicated spline fit curves. These surfaces, specified in a world coordinate system, are catalogued. In addition to surfaces, highly scattering volumes from many hard surfaces that cannot be individually resolved may also be catalogued (e.g., trees, shrubs, etc.). Catalogued elements may have additional tag attributes, such as color, reflectivity, and time of last detection, which can be populated with data as it becomes available.
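A hedged sketch of extracting hard-target intercept points by thresholding waveforms as described above; the threshold value and geometry helpers are assumptions for illustration:

```python
import numpy as np

def first_exceedance_range(ranges, waveform, threshold):
    """Return the first range at which the waveform exceeds the threshold, or None.

    threshold may be a scalar or a range-dependent array of the same length.
    """
    hits = np.nonzero(waveform > threshold)[0]
    return ranges[hits[0]] if hits.size else None

def waveform_to_point(sensor_pos, direction, ranges, waveform, threshold):
    """Convert one waveform into a 3D hard-target intercept point (or None)."""
    eta = first_exceedance_range(ranges, waveform, threshold)
    if eta is None:
        return None
    d = direction / np.linalg.norm(direction)
    return sensor_pos + eta * d

ranges = np.linspace(5.0, 500.0, 100)
waveform = np.exp(-((ranges - 220.0) / 3.0)**2)          # synthetic hard-target return
point = waveform_to_point(np.zeros(3), np.array([1.0, 0.2, -0.1]),
                          ranges, waveform, threshold=0.5)
```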
In one example of fitting a surface (e.g., with a 50 m by 50 m dimension) to incoming lidar data,
where h+x represents the height posting in the adjacent bin in the positive x direction, h0 is the mean of the five lowest points in the bin 442, and σ is the variance of the five lowest points in the bin 442. The entire surface 440 is continually refined as iterated updates are applied on a periodic basis. The heights for each bin 442 are then used as the vertices of a surface object and subsequently rendered in, for example, OpenGL.
Then, this equation represents an approximation to a Jacobi-style iteration whose fixed point is the solution to the following partial differential equation:
which itself is an Euler-Lagrange equation associated with the stationary point of the functional:
The functional balances data (h0) and its reliability (σ) with a smoothing term and the iteration causes it to converge to its minimum.
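A hedged sketch of one Jacobi-style update consistent with this description, balancing the per-bin data h0 (weighted by a reliability factor derived from σ) against neighbor smoothing; the exact weights of the referenced equation are not reproduced here, so this form is an assumption:

```python
import numpy as np

def jacobi_step(h, h0, sigma, w=1.0):
    """One Jacobi iteration over the interior of the height grid.

    h     : current height estimate per bin
    h0    : mean of the lowest points per bin (the data term)
    sigma : per-bin variance of those points (data reliability)
    w     : overall data-versus-smoothness weight (assumed value)
    """
    data_w = w / np.maximum(sigma, 1e-6)      # inverse-variance style weight (assumption)
    nbr_sum = h[:-2, 1:-1] + h[2:, 1:-1] + h[1:-1, :-2] + h[1:-1, 2:]
    h_new = h.copy()
    h_new[1:-1, 1:-1] = ((nbr_sum + data_w[1:-1, 1:-1] * h0[1:-1, 1:-1])
                         / (4.0 + data_w[1:-1, 1:-1]))
    return h_new

rng = np.random.default_rng(1)
h0 = rng.normal(0.0, 0.5, size=(50, 50))      # per-bin data heights
sigma = np.full((50, 50), 0.09)               # per-bin variances
h = h0.copy()
for _ in range(100):                          # iterate toward the fixed point
    h = jacobi_step(h, h0, sigma)
```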
As point cloud data is analyzed, distances of point positions to existing catalogued surfaces may be calculated to determine if a point should be associated with an existing surface or element. Alternatively, new points may be accumulated until additional surfaces are identified and catalogued. Some points that are not associated with surfaces may be determined to be the result of noise and rejected. An alternative approach may also be used, where the points from the point clouds are individually catalogued without attempting to develop surface representations. In this case, some clustering of points within close proximity may still be performed.
In processing node 404, the processor 110 registers volumetric data with 2D imagery. In doing so, the processor 110 extracts hard target positions from the volumetric data and aligns them with the 2D image data in time and space. This registration may provide, for example, an associated 2D image pixel intensity (with associated spectral band) to each angular coordinate associated with the volumetric data. Alternatively or additionally, each pixel intensity within the 2D image data may be associated with an angular coordinate from the volumetric data. Of course, interpolation schemes may be used to associate interpolated intensities from the 2D image data with angular coordinates associated with the volumetric data, and the angular coordinates and volumetric data may be interpolated as well. In any case, at the end of this process, a set of angular coordinates with volumetric information is associated with processed data extracted from 2D image data.
For example, in
In processing node 410, the processor 110 corrects 2D imagery intensity data for 3D distortion effects using the updated volume propagation model. In this regard, the processor 110 corrects surface emission and orientation properties of surfaces for propagation distortions. This process may include determining the angular coordinate dependent transmission to registered surfaces for 2D image data based on an accumulated volumetric representation of attenuation, and dividing image intensity data by this transmission. The resulting corrected image data can then be registered to the volumetric hard target data.
For example, after the points 452 within a point cloud 454 have been tagged by specific pixels 450 that viewed those points at specific times, additional information may also be tagged to the points 452. Using the volume propagation model, the transmission between a point 452 and the pixel 450 may be calculated. A point emission may be calculated for each point 452 (or small cluster of points) by dividing the intensity J received at the pixel 450 by the calculated path transmission T 453 and the number of points N intercepting the pixel field of view 451. This emission value can also be used to tag points 452 from the point cloud 454.
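A tiny sketch of this tagging arithmetic, with illustrative values:

```python
def point_emission(pixel_intensity_J, path_transmission_T, points_in_fov_N):
    """Emission value used to tag each point (or small cluster) in the cloud."""
    return pixel_intensity_J / (path_transmission_T * points_in_fov_N)

emission = point_emission(pixel_intensity_J=1200.0,
                          path_transmission_T=0.35,
                          points_in_fov_N=4)
```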
In processing node 408, the processor 110 updates a hard surface optical source model. The purpose of this functional node is to provide parameters to surfaces that allow modeling of surface emissions. Since emissions from the points within the point cloud are tagged with emissions from multiple vantage points (typically from different pixels during platform motion), a directional dependence to the emission may be extracted. If passive illumination conditions are known (e.g., from the solar position at the time of measurements), reflectance parameters for each target may be updated. This may include refining parameters for a surface bidirectional reflectance distribution function (BRDF), spectral or polarization dependent reflectance parameters and/or thermal emission parameters. By using previous imaging and reflectance measurements as measurements of surface properties, the processor 110 may then render the surfaces under different illuminating and viewing configurations than were used to collect the initial data. The surface optical source models are then contained within the world model.
In processing node 413, the processor 110 provides viewing parameters that include the positions and orientations used in rendering simulated imagery. These viewing parameters may be selected by a user, generated by a dynamic algorithm, or some combination thereof. For a moving platform, the position and orientation may be a current or anticipated position and orientation of the platform or a position and orientation from which a simulated platform may be viewed within the context surrounding physical elements. If the rendering is to be viewed with a binocular viewing system, viewing parameters associated with two positions and orientations may be used. Viewing parameters also include illumination parameters which may be different than physical illumination during measurement. For example, even if imagery is obtained during nighttime operations with infrared cameras, viewing parameters may include simulated daytime illumination at visible wavelengths.
In processing node 412, the processor 110 updates the world model with updated hard surface optical source models, as well as updated propagation models specifying volumetric parameters. The processor 110 may also update the world model with time dependent attributes, such as the position and orientation of the sensing platform. In some cases, the world model may include confidence levels for multiple hypotheses on hard target surface positions and properties so that additional information accumulated over time may be used to improve the confidence in accumulated data. Confidence levels may also be used for a statistical analysis on a hypothesis for volumetric atmospheric degradation parameters.
The world model may include its own catalogue of spatially distributed elements. Alternatively or additionally, the world model may manage links to catalogues of elements that are managed by the volumetric propagation model, the hard surface optical model, or the hard surface position model. However, its function is to manage access to estimated renderable element properties at given times. An example of such is shown in
In
The world model may include algorithms that would eliminate or rate elements that have been retained within the hard surface model. For example, point cloud points 473 that have a density too low to be clustered and discriminated from noise may be retained during data collection, since additional data collections may provide an increase in the local density of points revealing a hard object. However, the world model may choose to eliminate these un-clustered points.
Also, previously observed surfaces at the time in
Also illustrated in
In processing node 414, the processor 110 renders an image. This rendering provides a synthetic view of the world model using the viewing parameters provided by the processing node 413. The rendering may be made for monocular or binocular viewing (e.g., head-down or head-up display, respectively). The rendering may be produced at near real time at steady time intervals utilizing accumulated volumetric and 2D data from earlier times. Rendering may also be performed as a post processing analysis of previously collected data. Rendering may include options for shading with a simulated illumination source and/or allow each surface to emit its own light, depending on desired contrast features.
It should be noted that the discussion in
Thus, with above general concepts in mind, the processor 110 can propagate a world model of a scene for obscurant characterization. For example, for a single outgoing pulse in a “flying spot” lidar, a small number of ranges (e.g., between 1 and 5) may be recorded for the returning energy along the same “line of sight” (LOS) vector to indicate hard target returns. The waveform returned from a single laser pulse in that lidar also encodes information about the concentration of obscuring materials along the path of the return pulse. For example, a lidar return waveform may demonstrate the detection of a hard target through a distributed obscurant, such as dust, fog, or smoke, because the obscurant can be distinguished from the hard target. Full waveform processing therefore provides for the concentration of the dust to be determined and utilized for greater suppression and to correct the hard target return. Full waveform processing thus enables sophisticated obscurant mitigation techniques (e.g., in Degraded Visual Environments, or “DVEs”).
In addition to suppressing returns from obscurants, the processor 110 of
To determine the optical parameters (e.g., optical thickness and associated visibility) a propagation model is exploited. The propagation model can be implemented based on certain assumptions. For example, a wave propagating in a medium will undergo scattering and attenuation. If multiple scattering can be ignored, then the solution for the back-scattered power returned from range r is:
P(r)=(Cβ(r)/r²)exp(−2∫0r α(q)dq), (7)
where C is a constant to account for aperture, transmitted power, etc., β(r) is the volume back-scatter and α(r) is the volume extinction (e.g., optical attenuation). To simplify the expression, define
S(r)=ln(r²P(r)), (8)
such that
S(r)=ln(C)+ln(β(r))−2∫0r α(q)dq, (9)
which can be written as the differential equation
If β is constant, meaning the atmosphere or dust was homogeneous and uniform, then α is proportional to the slope of the signal return S(r). This assumption has, however, limited applications and usually a power law relationship between α and β is assumed as follows:
β(r)=B(r)α(r)^k. (11)
Further assumptions can be made with the power law relationship. For example, k can be assumed constant throughout the scene, as can B. If the size and/or the concentration of the scatterers changes, these last two assumptions do not hold, and the level at which they are violated is generally unknown. Accordingly, these assumptions result in:
This differential equation results in the following solution for α:
Then, if this equation is integrated “backwards” the processor 110 arrives at:
where rm is the lidar's maximum range. This backward formulation is generally stable to perturbations in the recorded signal.
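A hedged sketch of a Klett-style backward integration consistent with this stable “backwards” formulation, assuming the power-law exponent k = 1 and an assumed boundary extinction value αm at the maximum range; neither assumption comes from the description above:

```python
import numpy as np

def klett_backward(r, P, alpha_m):
    """Recover alpha(r) from the range-corrected signal S(r) = ln(r^2 P(r)).

    r       : monotonically increasing range bins (m)
    P       : received power per range bin (arbitrary units, > 0)
    alpha_m : assumed extinction coefficient (1/m) at the far boundary r[-1]
    """
    S = np.log(r**2 * P)                              # range-corrected log signal
    e = np.exp(S - S[-1])                             # exp(S(r) - S(rm))
    seg = 0.5 * (e[1:] + e[:-1]) * np.diff(r)         # trapezoid segments
    tail = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))   # integral r..rm
    return e / (1.0 / alpha_m + 2.0 * tail)

# Synthetic check: a homogeneous extinction profile should be recovered.
r = np.linspace(50.0, 1000.0, 500)
alpha_true = 2e-3
P = np.exp(-2.0 * alpha_true * r) / r**2              # constant backscatter assumed
alpha_est = klett_backward(r, P, alpha_m=alpha_true)
```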
Once the optical extinction parameter is determined, the processor 110 can compute the visual range V from the optical extinction coefficient:
Additionally, the integral of the optical extinction yields the optical thickness of the material 106. The optical thickness of the material 106 that is between the sensors 103 and 105 and a hard target (e.g., the object 107) affects the ability to observe things for both active and passive sensors. Thus, if the processor 110 can characterize the material 106 in terms of optical thickness and/or visibility, then the processor 110 can correct the detected signal to recover an expected “clear air” result.
To extract physical observables more directly related to the material concentration, the model is exploited. For example, a laser light passing through a cloud can be modeled as:
wherein I(r) is the observed intensity return as a function of range r, c(r) is the local concentration of the cloud, I0 is the intensity of the laser pulse, Aap is the aperture area at the detector, f is the gain associated with scattering in the reverse direction, σc(r) is the proportion of light per unit length scattered locally, αc(r) is the proportion of light per unit length absorbed locally,
is the fraction of light transmitted beyond r with the factor of 2 in the exponent in the above equation capturing the effect of extinction in both directions.
One assumption of the model is that light propagates coherently until it is reflected back to a receiver (e.g., a detector), at which point the falloff is proportional to the inverse of the square of the distance. Another assumption is that the scattering and absorption of light are proportional to the local concentration of the cloud. Determining scattering and absorption coefficients directly is, however, not necessary as the optical extinction can be calculated from the data source 102 of
o(r)=(α+σ)c(r). (17)
And, the optical extinction has units of inverse length. So, its reciprocal represents a length scale associated with the extinction of light passing through the material 106, yielding a simplified equation of:
Due to the effects of scattering and absorption, K is therefore assumed to be a material property of the substance in the cloud.
Rewriting Equation (18) provides a way to recursively represent the defined optical extinction as:
Accordingly, the optical extinction is directly related to the observed intensity and rescaled to normalize away spherical spreading. The optical extinction also has a monotonically increasing amplification factor that reverses the effect cumulative extinction had in generating the observed intensity profile.
Equation (19) can be rewritten to provide a closed form solution as:
the left hand side of which can be written as a total derivative as:
Integrating Equation (21) yields:
for some constant of integration C. Then, by evaluating the behavior of the expression when r=0, C is generally equal to “1”. Accordingly,
Substituting Equation (23) into Equation (19) yields a closed-form solution to Equation (18) as follows:
From there, Equation (24) can be integrated backwards to arrive at:
where rm is the lidar's maximum range. This backward formulation is stable to perturbations in the recorded signal.
Next, the processor 110 can determine model parameter K from intensity profiles. For example, in Equation (18), K is an unknown parameter. Rewriting Equation (23) results in K as follows:
and K satisfies the inequality:
for all r>0. However, this inequality may only be valid for intensity returns from the cloud itself, and only for values of r that fall short of including returns from hard targets behind the cloud. Once the optical extinction parameter is determined, the visual range V can be determined via a constant threshold K that an observer needs to distinguish background from the object 107 and by the extinction coefficient of Equation (15).
The processor 110 then determines K from a partially occluded hard target return (e.g., from the object 107). For example, in order to use Equation (26) to determine K, an estimate for the cumulative optical extinction term may be needed. One way to yield such an estimate is to compare the return from a hard target in the absence of a cloud Ih with the return from the same target partially occluded by cloud I*h such that K can be estimated as:
where R falls somewhat short of the bin in which the hard target return is located.
Then, K can be determined from dense cloud profiles. For example, in the case of a particularly dense cloud through which it can be assumed that essentially no light is transmitted, the inequality of Equation (27) is approximately an equality. Accordingly, K can be computed as:
Equation (29) can be seen as a special case of Equation (28) where the target is completely occluded. In comparison with determining K from a partially occluded hard target return, this method is simpler to exploit in practice because it does not require correlating returns from two profiles. Rather, a single return is used. However, it may only be applicable to profiles where complete extinction has occurred.
Alternatively or additionally, the processor 110 may determine K from the shape of the reconstruction. For example, determining K may involve fitting its value to the shape of reconstructed profiles. This generally requires some data other than the intensity returns to verify a result. Here, the processor 110 could compare the reconstruction to, for example, a video captured of the cloud in profile to ensure that the cloud to be removed from an image is indeed data pertaining to the material 106.
In one embodiment, the processor 110 can employ minimization to estimate optical extinction from lidar returns. For example, the lidar equation has the following form:
where P is the observed power at range r, β is the coefficient of backscatter, ε is the optical extinction of the obscurant material 106, and Ks is a system constant that depends on the pulse energy and area of a lidar receiver. Given observations P(r), Equation (30) is insufficient to uniquely determine both backscatter and extinction simultaneously. So, a power law relationship is asserted between them as follows:
β=Kβ ε^γ, (31)
for constants Kβ and γ, which generally depend on the type of the obscurant material 106. Substituting Equation (31) into Equation (30) yields a simplified lidar equation with no explicit dependence on β. For example,
where K=KS·Kβ is an overall constant depending on the system and the obscurant material 106. And, the exponential factor on the right hand side of Equation (32) represents the cumulative (e.g., two-way) extinction through the obscurant material 106, which is denoted by E as follows:
Afterwards, the lidar equation can be inverted. For example, given measurements of P(r), ε(r) are determined. Then, rearranging Equation (30) results in the following:
The left hand side of Equation (34) is a total derivative that results in:
Moving the constant to the right and integrating both sides yields:
for some constant of integration C. Noting that at r=0, the integral on the right-hand side vanishes, while the left hand side is unity and C=1, such that:
This yields a closed-form solution for a cumulative extinction as follows:
Applying the logarithm and taking derivatives on both sides (or equivalently substituting Equation (37) into Equation (34)) and solving for ε yields a closed-form expression for the optical extinction as follows:
Noting that the denominator of Equation (39) is
Equation (39) can be rewritten in terms of E(R), the cumulative extinction at range r=R, as follows:
yielding a “backwards” integration form similar to that provided in Equation (31). However, Equation (40) advantageously accommodates a vanishing P.
Some observations include power profiles P(r) through the obscurant material 106. Other observations come from hard target returns, P*(R), measured both in the absence of any obscurant (collected during a calibration phase) and in its presence, where the return is part of the power profile P(R).
Hard target returns provide a direct estimate of cumulative extinction through the cloud (i.e., the material 106), given by the ratio between the observed return in the presence of the cloud and in its absence as follows:
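A minimal sketch of this ratio-based estimate, with illustrative values rather than measured returns:

```python
def cumulative_extinction_from_hard_target(P_with_cloud, P_clear):
    """Estimate E as the ratio of the target return with and without the cloud."""
    return P_with_cloud / P_clear

E_estimate = cumulative_extinction_from_hard_target(P_with_cloud=0.18, P_clear=0.60)
```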
The measured power returns are also expected to be polluted by noise, P̂=P+n, where n is assumed to consist of independent and identically distributed (IID) random variables with zero mean and standard deviation σP. Further, it is assumed that during calibration there are enough collected frames to have mapped out the hard target returns in the scene (assumed to be fixed throughout). It is also assumed that scintillation statistics P*i and σP*
Under these assumptions, a regularized inversion can be developed according to a functional of the form:
This functional is formulated as the sum of the squared magnitudes of N(0,1) independent random variables, one for each observation, the total number of which is taken as “N” (e.g., an integer greater than or equal to “1”). As such, the value of D applied to the solution should be close to N. However, the minimizer of D yields a smaller value. The inversion formula attests to the fact that the right-most term in Equation (42) can be made to vanish identically, even with fixed K and γ. The problem is that the presence of noise, combined with the high dimensionality of the solution space, leaves the problem under-determined.
The solution to this lack of determinacy is to augment the problem with regularity conditions, which effectively reduce the dimensionality of the solution space and presumably also capture properties that are desirable in the solution. For example, suppose a second functional B[ε] that measures the “badness” of a solution. Then, the problem may be formulated using both B and D as follows:
minε B[ε] subject to D[ε]≦N(1+k), (43)
for some k>0. This sort of functional may be used for compressed sensing in the presence of noise, in which case the measure of “goodness” is sparsity of representation in some basis set. The informal translation of this problem is: “choose the least bad solution that is sufficiently consistent with the data.” Then, an answer can be recovered by solving Equation (43) under the appropriate circumstances.
One difficulty with Equation (43) is that it is formulated as a constrained minimization problem, which typically requires more complicated methods of solution versus those available for solving unconstrained minimizations. Thus, the problem may be solved as an unconstrained minimization problem as follows:
minε λB[ε]+D[ε], (44)
for some λ>0.
Of course, there is also the question of what to choose for B[ε]. In the context of compressive sensing, where the focus is on sparsity, variations on the 1-norm are common. The 1-norm also tends to allow solutions that admit jump-type discontinuities, which is a very desirable property in many instances. In the embodiments disclosed herein, however, cloud densities are generally expected to vary smoothly throughout space. Thus, a regularization term is provided of the form:
B[ε]=∥∇ε∥²=∫∇ε·∇ε dx. (45)
The 2-norm, aside from favoring smooth functions, is also differentiable and tends to admit numerical methods. Regularized inversion as described has several advantages, including denoising the data in a principled way. Another advantage is that the regularizer generally guarantees that ε is smooth, a desirable condition for having isosurfaces that can be easily visualized. The regularizer also effectively provides a more principled version of the “slope method” for parameter fitting.
To implement this numerically, a proximal version of Equation (44) is employed as follows:
λB[u]+D[v]+μ∫(u−v)² dx, (46)
which can be solved by alternately minimizing over u and v alone. For example, holding v fixed, a u value that minimizes Equation (46) satisfies:
an equation similar to the Poisson equation. Like the Poisson equation, this equation permits a simple Gauss-Seidel style method of solution that is easier to parallelize.
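A hedged sketch of such a Gauss-Seidel sweep. With B[u] as the 2-norm regularizer above, minimizing λB[u]+μ∫(u−v)² dx over u for fixed v gives −λ∇²u+μ(u−v)=0, whose interior update on a unit grid is used below; the grid spacing and parameter values are assumptions:

```python
import numpy as np

def gauss_seidel_sweep(u, v, lam, mu):
    """One in-place Gauss-Seidel sweep over the interior of the grid."""
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            nbr = u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
            u[i, j] = (lam * nbr + mu * v[i, j]) / (4.0 * lam + mu)
    return u

rng = np.random.default_rng(2)
v = rng.random((32, 32))                 # current (noisy) extinction estimate
u = v.copy()
for _ in range(50):                       # smooth u toward the regularized solution
    u = gauss_seidel_sweep(u, v, lam=1.0, mu=0.5)
```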
For a fixed u, Equation (47) can be minimized via a gradient descent with a line search. This iteration requires a reasonable initialization. Generally, a reverse integration form of Equation (41) is used to initialize ε. If hard-target data is available for a given line of sight, then the processor 110 can use its estimate for E(R) and E(R)=1 for other (or all) lines of sight to give an underestimate for ε.
When an estimate of the system parameters K and γ are desired, another step can be incorporated into the iteration. For example, the only term in Equation (47) that depends on the system parameters is:
Here, it can be shown that the optimal K satisfies:
γ can be updated as well, but the iteration is non-linear and would generally require a gradient descent search.
As noted earlier, the denominator of Equation (39) is a power of the cumulative extinction. As this extinction becomes small and approaches zero, ε becomes more difficult to determine and is increasingly subject to noise. Thus, a reasonable rule of thumb for assessing the believability of a reconstructed ε is the estimated cumulative extinction to that point. While the cumulative extinction remains sufficiently larger than zero, ε should be reliable. Just how sensitive a solution is to noise depends generally on the numerical methods employed. The solution may also depend on errors in the estimates of K and γ.
The processor 110 may then perform a first-order sensitivity analysis by examining perturbations around the solution. The results may illustrate the effect of noise and error accumulation, but its applicability to measured data may remain suspect, as it would likely involve linearizations that are accurate for sufficiently small perturbations. Real-life perturbations are likely to be large enough to make the neglected nonlinearities significant. Thus, reliable sensitivity generally needs to be based on simulation and/or experimental data.
These data can then be used in conjunction with 2D image data to modify the displayed data. The lidar data are first registered with the 2D electro-optic imagery. Then, lidar waveform data is used to measure the 4D (3D + time) obscurant and create an “atmosphere” model. For example, the measured lidar waveform is of the form:
This waveform is then inverted to find the concentration as follows:
From there, the “atmosphere” model and the 3D world model can be used to generate corrected electro-optic imagery without the obscurant.
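A hedged end-to-end sketch of this correction, assuming an extinction profile is available along each pixel's registered line of sight and ignoring path radiance for brevity; all names and values are illustrative:

```python
import numpy as np

def transmission_to_surface(alpha_along_los, dr):
    """One-way transmission through the obscurant out to the registered surface."""
    return np.exp(-np.sum(alpha_along_los) * dr)

def correct_pixel(measured_intensity, alpha_along_los, dr, t_floor=0.05):
    """Approximate the clear-air pixel intensity; clamp very low transmissions."""
    T = max(transmission_to_surface(alpha_along_los, dr), t_floor)
    return measured_intensity / T

dr = 2.0                                            # range-bin size (m), assumed
alpha_profile = np.full(100, 1.5e-3)                # extinction along one line of sight
corrected = correct_pixel(measured_intensity=42.0,
                          alpha_along_los=alpha_profile, dr=dr)
```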
As discussed, the systems and methods disclosed herein provide for the identification/imaging of objects being obscured or distorted by distributed materials within the environment. For example, various 2D and 3D sensors (e.g., lidar, radar, sonar, digital cameras, infrared cameras, etc.) can image an object through dust, gas, and/or other obscurants such that the object 107 can be observed or identified. Once the data is collected, the processor 110 generates and maintains a model of surrounding objects and environment that is subsequently used for renderings under differing environmental conditions than were present during the data collection. Applications of this processing are almost boundless. For example, the processor 110 may be used to render imagery for a helicopter pilot so as to remove obscurations by dust and/or other particulates degrading raw image data and/or obscuring the pilot's view. Another example includes removing obscurants from a battlefield scene so as to reveal enemy combatants.
The invention can also take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In one embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, the invention can take the form of a computer program product accessible from the computer readable medium 606 providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, the computer readable medium 606 can be any apparatus that can tangibly store the program for use by or in connection with the instruction execution system, apparatus, or device, including the computer system 600.
The medium 606 can be any tangible electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of a computer readable medium 606 include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Some examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
The computing system 600, suitable for storing and/or executing program code, can include one or more processors 602 coupled directly or indirectly to memory 608 through a system bus 610. The memory 608 can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices 604 (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the computing system 600 to become coupled to other data processing systems, such as through host systems interfaces 612, or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
This patent application is a non-provisional patent application claiming priority to, and thus the benefit of an earlier filing date from, U.S. Provisional Patent Application No. 62/023,118 (filed Jul. 10, 2014), the entire contents of which are hereby incorporated by reference.