A traditional laser detection approach is to deploy an optical sensor on a platform and hope that, in the event a laser illuminates the platform, the sensor will lie in the footprint of the laser. This is typically known as the direct detection approach, which is appropriate for small platforms and generally used in laser warning applications. Direct detection is impractical for very large platforms, where the large number of sensors required to cover the platform would be cost prohibitive, and for laser intelligence collection efforts, where it is impractical or impossible to position a sensor in the beam path.
Accordingly, there would be significant value in being able to remotely determine the location and propagation direction of a laser beam including a continuous wave (CW) laser beam. This information could provide the location and range of the laser source as well as the intended target.
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrases “in one embodiment”, “in some embodiments”, and “in other embodiments” in various places in the specification are not necessarily all referring to the same embodiment or the same set of embodiments.
The embodiments of the method disclosed herein determine the position and orientation of a laser beam propagating through the atmosphere by remote observation of off-axis scattering from aerosols in the beam path using two cameras. New features that are provided by this method are the position of the laser source, the direction of the laser beam, and an indication of the intended target. Unlike previous methods, this method will work for both pulsed and continuous-wave lasers.
A first aspect of one embodiment of method 10 involves a geometric approach. Accordingly, method 10 may begin at step 12, which involves obtaining a first beam image on a focal plane (not shown) of a first camera 110 and a second beam image on a focal plane (not shown) of a second camera 120 from light scattered by ambient atmospheric aerosols in the path of at least part of a laser beam 130.
It should be recognized that while some embodiments of method 10 utilize two cameras, other embodiments of method 10 may incorporate additional cameras. Further, the cameras are considered to be generic in nature and it is understood that their specific construction and operation might be optimized for a particular laser wavelength.
Step 14 then involves forming a first ambiguity plane 116 for the laser beam observed by the first camera. A projected beam image is formed that represents the first beam image in the projected scene of first camera 110. The part of laser beam B 130 passing through the field of view (FOV) of camera 110 extends from s1 132 to b1 136. For clarity, the orientation of camera 110 is considered such that the projection plane in front of camera 110 is oriented perpendicular to the line from C1 110 to s1 132. In some embodiments, however, other orientations and positions of the projection plane may be used. The projected image of this part of the beam is shown as vector f1 114. Another vector v1 112 is determined that points in the direction from C1 110 to s1 132.
These two vectors are contained within and define a first ambiguity plane 116 for laser beam 130 observed by camera 110 such that the laser beam 130 is known to lie within the first ambiguity plane 116. The information from camera 110 alone is insufficient to determine exactly where laser beam 130 lies within first ambiguity plane 116. A first normal vector n1 118, determined by the vector cross product n1=f1×v1, is normal to first ambiguity plane 116 and therefore to the laser beam B 130.
Step 16 involves forming a second ambiguity plane 126 for the laser beam 130 observed by the second camera. A projected beam image is formed that represents the second beam image in the projected scene of second camera 120. The part of laser beam B 130 passing through the field of view (FOV) of camera 120 extends from s2 134 to b2 138. For clarity, the orientation of camera 120 is considered such that the projection plane in front of camera 120 is oriented perpendicular to the line from C2 120 to s2 134. In some embodiments, however, other orientations and positions of the projection plane may be utilized. The projected image of this part of the beam is shown as vector f2 124. Another vector v2 122 is determined that points in the direction from C2 120 to s2 134. These two vectors are contained within and define a second ambiguity plane 126 for laser beam 130 observed by camera 120. A second normal vector n2 128, determined by the vector cross product n2=f2×v2, is normal to second ambiguity plane 126 and therefore to the laser beam B 130.
Step 18 then involves determining the intersection of first ambiguity plane 116 and second ambiguity plane 126. It is understood that because ambiguity planes 116 and 126 are infinite, the intersection line of ambiguity planes 116 and 126 is an infinite line. The laser beam B 130 is uniquely located within this intersection.
In some embodiments, step 18 involves determining the orientation of the intersection and determining at least one of a point s1 in the first ambiguity plane corresponding to the beginning of the laser beam within the first ambiguity plane and a point s2 in the second ambiguity plane corresponding to the beginning of the laser beam within the second ambiguity plane. The step of determining an orientation of the intersection includes determining a first normal vector n1 118 normal to the first ambiguity plane 116, a second normal vector n2 128 normal to the second ambiguity plane 126, and a unit beam vector u formed by the cross product of the first normal vector n1 118 and the second normal vector n2 128. The direction of laser beam vector B 130 will therefore be parallel to a unit vector u formed by
u=n1×n2/|n1×n2|. (1)
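For illustration, Eq. (1) together with the cross products defining the ambiguity-plane normals can be expressed as a short numerical sketch (Python/NumPy). The vectors v1, f1, v2, and f2 are assumed to have already been expressed in a common world coordinate frame, and the function and variable names below are illustrative rather than part of the method as claimed.

```python
import numpy as np

def beam_direction(v1, f1, v2, f2):
    """Unit beam vector u of Eq. (1) from the two camera observations.

    v1, v2: unit vectors from each camera toward the start of the imaged beam segment.
    f1, f2: projected beam-image vectors in each camera's projection plane.
    Each (f, v) pair spans that camera's ambiguity plane, whose normal is n = f x v.
    """
    n1 = np.cross(f1, v1)            # normal to the first ambiguity plane
    n2 = np.cross(f2, v2)            # normal to the second ambiguity plane
    u = np.cross(n1, n2)             # direction of the planes' intersection line
    return u / np.linalg.norm(u)     # Eq. (1)

# Purely illustrative input vectors, assumed already expressed in a common world frame.
v1 = np.array([0.6, 0.8, 0.0]);  f1 = np.array([0.9, -0.4, 0.0])
v2 = np.array([0.6, -0.8, 0.0]); f2 = np.array([0.9, 0.4, 0.1])
print(beam_direction(v1, f1, v2, f2))
```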
The position of at least part of the laser beam B 130 is determined by finding, at step 20, at least one of the position of points s1 132 and b1 136, corresponding to the end points of the part of laser beam B 130 passing through the field of view (FOV) of camera 110, and of points s2 134 and b2 138, corresponding to the end points of the part of laser beam B 130 passing through the field of view (FOV) of camera 120. The point s1 132 can be calculated as
s1=C1+v1[(C2−C1)·n2]/[v1·n2] (2)
and the point s2 134 can be calculated as
s2=C2+v2[(C1−C2)·n1]/[v2·n1] (3)
A similar calculation could be used to determine the position of b1 136 and b2 138.
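Eqs. (2) and (3) are ray-plane intersections: each end point is found by intersecting the ray from one camera toward the imaged point with the ambiguity plane of the other camera. The following is a minimal sketch, assuming the camera positions, pointing vectors, and plane normals are known in a common frame; the names are illustrative.

```python
import numpy as np

def ray_plane_point(C_a, v_a, C_b, n_b):
    """Point where the ray C_a + t*v_a meets the plane through C_b with normal n_b.

    Eq. (2): s1 = ray_plane_point(C1, v1, C2, n2)
    Eq. (3): s2 = ray_plane_point(C2, v2, C1, n1)
    The same construction, with the pointing vectors toward b1 and b2, gives the
    far end points of the imaged beam segments.
    """
    t = np.dot(C_b - C_a, n_b) / np.dot(v_a, n_b)
    return C_a + t * v_a
```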
These end points are not necessarily the position of laser source S, since it has not been determined that the laser beam source S lies within the FOV of either camera 110 or 120. Accordingly, step 22 involves determining if any of these end points correspond to the position of the laser source S. Other information in the scene of each camera will generally be available to determine if the source or perhaps a target has been imaged by cameras 110 and 120. For example, if either of the end points s1 132 or b1 136 lies on the perimeter of the FOV of camera 110, it would not be considered the position of the source S, and similarly for camera 120. In this case, the FOV of the camera could be adjusted to encompass a different part of the beam. Assuming there are no objects in the scene to obstruct the laser beam source S, it will generally be very much brighter than all other points along the beam due to scattering from optical elements in the laser transmitter.
The following discussion is premised upon the laser beam source S having been imaged by at least one of cameras 110 and 120. Method 10 may then proceed along flow path 23 to step 24, where a camera-source plane 140 is determined. The configuration of the triangle formed by C1 110, S 132, and C2 120 can be completely determined, with the plane of the triangle referred to as the camera-source plane 140. In particular, the distances R1 from S 132 to C1 110 and R2 from S 132 to C2 120, and the angle φC formed between the line from S 132 to C1 110 and the line from S 132 to C2 120, can be determined. A common three-dimensional coordinate system based on the plane of this triangle, with the origin at S 132, is used, and all vectors are expressed in this coordinate system.
Step 26 of method 10 involves forming a projection 250 of the unit beam vector 240 from S 230 onto the camera-source plane 260. Step 28 then involves determining the beam elevation angle ξ of the beam unit vector 240 relative to camera-source plane 260. Step 30 involves determining a first beam azimuth angle φ1 corresponding to the angle of the projection of unit beam vector 240 relative to the line from S 230 to C1 210. Step 32 then involves determining a second beam azimuth angle φ2 corresponding to the angle of the projection of unit beam vector 240 relative to the line from S 230 to C2 220. Angle φ2 may be determined by φ2=φ1−φC, where these angles are taken to be positive in the counter-clockwise direction.
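A minimal sketch of the coordinate construction in steps 24 through 32 follows, assuming the source position S, camera positions C1 and C2, and unit beam vector u are known in a common world frame. The particular frame convention (x-axis toward C1, z-axis normal to the camera-source plane) and all names are illustrative assumptions.

```python
import numpy as np

def source_frame_angles(S, C1, C2, u):
    """Return (R1, R2, phi_C, xi, phi1, phi2) for a unit beam vector u.

    Illustrative frame convention: origin at the source S, x-axis along S->C1,
    z-axis normal to the camera-source plane, y-axis completing a right-handed set.
    """
    d1, d2 = C1 - S, C2 - S
    R1, R2 = np.linalg.norm(d1), np.linalg.norm(d2)
    x = d1 / R1                                  # unit vector from S toward C1
    z = np.cross(d1, d2)
    z /= np.linalg.norm(z)                       # unit normal to the camera-source plane
    y = np.cross(z, x)

    phi_C = np.arctan2(np.dot(d2, y), np.dot(d2, x))   # angle between S->C1 and S->C2
    ux, uy, uz = np.dot(u, x), np.dot(u, y), np.dot(u, z)
    xi = np.arcsin(np.clip(uz, -1.0, 1.0))       # beam elevation above the plane
    phi1 = np.arctan2(uy, ux)                    # azimuth relative to the S->C1 line
    phi2 = phi1 - phi_C                          # azimuth relative to the S->C2 line
    return R1, R2, phi_C, xi, phi1, phi2
```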
A problem with the determination of the beam direction based on Eq. (1) will arise if the elevation angle ξ of the beam is small compared to both φ1 and φ2. In this situation, as ξ approaches zero, ambiguity planes 116 and 126 both approach the camera-source plane and become nearly coincident, so that the cross product n1×n2 in Eq. (1) becomes very small and the computed beam direction becomes highly sensitive to measurement errors.
When the beam elevation ξ is close to zero, additional steps based on the variation of intensity of the beam images in cameras 110 and 120 may be needed to help determine the beam azimuth angles. The radiance L(θ) observed at a camera from light scattered at observation angle θ along the beam image can be written in the single-scattering form of Eq. (4) if multiple scattering is neglected (see "Off-axis detection and characterization of laser beams in the maritime atmosphere," Hanson et al., Appl. Opt. 50, 3050-3056 (2011)). Here P0 is the initial laser power and R 340 is the distance from source S 310 to camera 330. β(ψ+θ) is the volume scattering function due to aerosols or other particles in the beam path, and the beam angle ψ is the angle of the beam with respect to the line from source to camera. It is assumed that the atmosphere is uniform and therefore the scattering function is independent of position. The transmission T(z,r) of light along the path of the beam from source S 310 to the scattering point p and from that point to the camera C is given by
T(z,r)=exp[−α(z+r)] (5)
where α=(a+b) is the extinction coefficient due to absorption plus scattering and the distances z and r are implicit functions of ψ and θ, following from the law of sines for the triangle formed by the source S, the scattering point p, and the camera C, according to
z=R sin(θ)/sin(ψ+θ) (6)
and
r=R sin(ψ)/sin(ψ+θ). (7)
It is valid to neglect multiple scattering if b(r+z)<<1 where b is the component of α due to scattering alone.
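The single-scattering geometry above can be sketched numerically as follows. This is a sketch of the stated relations (the law-of-sines distances and the transmission of Eq. (5)) with illustrative function names, not a verbatim implementation from the specification.

```python
import numpy as np

def path_lengths(R, psi, theta):
    """Distances in the source-scatterer-camera triangle (law of sines):
    z = source to scattering point p along the beam, r = p to camera,
    for beam angle psi (at the source) and observation angle theta (at the camera)."""
    s = np.sin(psi + theta)
    z = R * np.sin(theta) / s
    r = R * np.sin(psi) / s
    return z, r

def transmission(z, r, alpha):
    """Eq. (5): transmission along the path S -> p -> camera with extinction alpha."""
    return np.exp(-alpha * (z + r))
```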
The angular dependence of the radiance in Eq. (4) on scattering angle θ is primarily due to the scattering function β(ψ+θ), which is generally a strongly decreasing function of angle for angles less than ~90°. The details of the scattering function β are not generally known. However, if the ratio of the radiance observed at the two cameras, such as cameras 110 and 120, is evaluated at equal scattering angles, the unknown scattering function cancels, leaving an expression, Eq. (8), that depends on the beam geometry and the atmospheric extinction.
The beam angles ψ1 and ψ2 relative to the two cameras are related to the beam elevation angle ξ and azimuthal angles according to
cos(ψ1)=cos(ξ)cos(φ1) (9)
and
cos(ψ2)=cos(ξ)cos(φ1−φC). (10)
The distances R1 and R2 and the beam elevation ξ are determined from the geometric method and therefore, apart from the extinction term, Eq. (8) only involves one unknown, φ1.
The measured signal from each camera along the beam image can be converted to a value Y proportional to radiance L by taking into account various specific parameters of the cameras, including the response and orientation of the camera with respect to the direction of the scattering. It is only important that the proportionality to actual radiance be the same for each camera. Therefore, the left side of Eq. (8) can be replaced with the ratio of the measured quantities Y from the two cameras.
An error function χ2 that depends parametrically on φ1 and α is formed by summing, over the camera pixels i along the beam images, the squared differences between the measured radiance ratio and the ratio predicted by Eq. (8), where z and r are implicit functions of θ and φ1. An estimated azimuthal beam angle φ1 is obtained by minimizing χ2 with respect to φ1 for a given α. In many cases the product of α and the difference in path length will be much less than unity, and therefore the estimate of beam angle will not be sensitive to α. A best estimate φR of the beam azimuth angle φ1 based on scattered radiance is determined using the best estimate of α based on weather conditions, including meteorological range or visibility measurements.
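A minimal sketch of this radiance-ratio fit is given below. The single-scattering radiance model used to form the predicted ratio (radiance proportional to β(ψ+θ)·T(z,r)/(R·sin ψ)) is an assumption consistent with the discussion above rather than a transcription of Eq. (4), and the pixel-to-angle mapping, interpolation, and grid search are illustrative implementation choices.

```python
import numpy as np

def chi2(phi1, alpha, xi, phi_C, R1, R2, theta1, Y1, theta2, Y2, gammas):
    """Sum of squared differences between the measured radiance ratio and the
    ratio predicted by the single-scattering model, evaluated at a common set of
    scattering angles 'gammas' so that the unknown scattering function cancels.

    theta1, Y1 and theta2, Y2: observation angles (sorted increasing) and
    radiance-proportional signals extracted along the beam image in each camera.
    Assumed model (illustrative): L_i ~ beta(gamma) * T(z_i, r_i) / (R_i * sin(psi_i)).
    """
    psi1 = np.arccos(np.cos(xi) * np.cos(phi1))           # Eq. (9)
    psi2 = np.arccos(np.cos(xi) * np.cos(phi1 - phi_C))   # Eq. (10)

    # Measured signals interpolated to the observation angles theta = gamma - psi
    # that correspond to the common scattering angles in each camera.
    y1 = np.interp(gammas - psi1, theta1, Y1)
    y2 = np.interp(gammas - psi2, theta2, Y2)

    # Path lengths from the law of sines, then the predicted radiance ratio.
    z1 = R1 * np.sin(gammas - psi1) / np.sin(gammas)
    r1 = R1 * np.sin(psi1) / np.sin(gammas)
    z2 = R2 * np.sin(gammas - psi2) / np.sin(gammas)
    r2 = R2 * np.sin(psi2) / np.sin(gammas)
    predicted = (R2 * np.sin(psi2)) / (R1 * np.sin(psi1)) \
        * np.exp(-alpha * (z1 + r1 - z2 - r2))

    return np.sum((y1 / y2 - predicted) ** 2)

def estimate_phi1(alpha, xi, phi_C, R1, R2, theta1, Y1, theta2, Y2, gammas,
                  candidates=np.linspace(0.05, 3.0, 600)):
    """Grid-search estimate of the azimuth phi1 that minimizes chi2 for a given alpha."""
    values = [chi2(p, alpha, xi, phi_C, R1, R2, theta1, Y1, theta2, Y2, gammas)
              for p in candidates]
    return candidates[int(np.argmin(values))]
```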
In some embodiments, the final determination of beam azimuth angles in steps 30 and 32 is made using information from both the geometric analysis and the radiance analysis. Each analysis will have some uncertainty, for example from errors in camera position and orientation and in the measured radiance. In particular, the uncertainties σG and σR in beam azimuth angle can be estimated for the geometric and radiance analyses, respectively. The final determination of the beam azimuth angle may then be a weighted average of the azimuth angles φG and φR obtained from the geometric and radiance analyses, respectively, with each estimate weighted according to its uncertainty.
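One reasonable weighting, assuming the uncertainties σG and σR are available, is an inverse-variance weighted average; this particular choice is illustrative rather than specified by the text.

```python
def combine_azimuth(phi_G, sigma_G, phi_R, sigma_R):
    """Inverse-variance weighted average of the geometric and radiance estimates."""
    w_G, w_R = 1.0 / sigma_G**2, 1.0 / sigma_R**2
    return (w_G * phi_G + w_R * phi_R) / (w_G + w_R)
```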
Some or all of the steps of method 10 may be stored on a computer-readable storage medium, such as a non-transitory computer-readable storage medium, wherein the steps are represented by computer-readable programming code. The steps of method 10 may also be computer-implemented using a programmable device, such as a computer-based system. Method 10 may comprise instructions that may be stored within a processor or may be loaded into a computer-based system, such that the processor or computer-based system then may execute the steps of method 10. Method 10 may be implemented using various programming languages, such as “Java”, “C” or “C++”.
Various storage media, such as magnetic computer disks, optical disks, and electronic memories, as well as non-transitory computer readable storage media and computer program products, can be prepared that can contain information that can direct a device, such as a micro-controller or processor, to implement method 10. Once an appropriate device has access to the information and programs contained on the storage media, the storage media can provide the information and programs to the device, enabling the device to perform the above-described systems and/or methods.
For example, if a computer disk containing appropriate materials, such as a source file, an object file, or an executable file, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods, and coordinate the functions of the individual systems and/or methods.
Cameras 430 and 440 are connected to a processor 450, which receives input from cameras 430 and 440 and performs the various calculations and determinations of method 10 as discussed above. Processor 450 may include computer-implementable instructions represented by computer-readable programming code stored therein, with such instructions configured to perform the steps of method 10. Processor 450 may be any device configured to perform computations on the input received from cameras 430 and 440. For example, processor 450 may be a commercially available computing device that has been modified to include programming instructions therein to allow processor 450 to perform the steps of method 10.
A data storage device 460 may be connected to processor 450. Data storage 460 may contain data and/or instructions stored therein for use by processor 450 in performing some or all of the steps of method 10. As an example, data storage device 460 may be any standard memory device, such as EEPROM, EPROM, RAM, DRAM, SDRAM, or the like. Input data, as received from cameras 430 and 440, may be stored in data storage 460 in various ways, such as in a table format or other format as recognized by one having ordinary skill in the art. Processor 450 may be configured to provide an output to display 470. In some embodiments, such output may include the position of the laser beam source, the direction of propagation of the laser beam, and an indication of the intended target of the laser beam. Display 470 may comprise any commercially available display. As an example, display 470 may be a liquid crystal display.
Many modifications and variations of the Method for Atmospheric Laser Beam Detection Using Remote Sensing of Off-Axis Scattering are possible in light of the above description. Within the scope of the appended claims, the embodiments of the subject matter described herein may be practiced otherwise than as specifically described. The scope of the claims is not limited to the implementations and the embodiments disclosed herein, but extends to other implementations and embodiments as may be contemplated by those persons having ordinary skill in the art.
The Method for Atmospheric Laser Beam Detection Using Remote Sensing of Off-Axis Scattering is assigned to the United States Government and is available for licensing for commercial purposes. Licensing and technical inquiries may be directed to the Office of Research and Technical Applications, Space and Naval Warfare Systems Center, Pacific, Code 72120, San Diego, Calif., 92152; voice (619) 553-5118; email ssc_pac_T2@navy.mil; reference Navy Case Number 101726.
Hanson, Frank E., et al., "Off-Axis Detection and Characterization of Laser Beams in the Maritime Atmosphere," Applied Optics, vol. 50, issue 18, pp. 3050-3056, 2011.