LIDAR System Comprising Large Area Micro-Channel Plate Focal Plane Array

Abstract
A sensor system is provided comprising a precision tracking sensor element and one or more acquisition sensor elements. The acquisition sensor elements may be mounted on a rotating base element that rotates about a first axis. The precision tracking sensor elements may be mounted on a hinged or pivoting element or gimbal on the housing and provided with drive means to permit a user to selectively manually or automatically direct them toward a scene or target of interest detected by the acquisition sensor elements. At least one of the imaging elements in the precision tracking sensor or acquisition sensors is a stacked micro-channel plate focal plane array element.
Description
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT

N/A


BACKGROUND OF THE INVENTION

1. Field of the Invention


The invention relates generally to the field of electronic imaging systems.


More specifically, the invention relates to a LIDAR system comprising a large area focal plane array (“FPA”) having a multi-layer, micro-channel plate (MCP) electronic module which may, in an alternative embodiment, comprise a collimating micro-lens and Risley prism beam steering or receiving optics.


2. Description of the Related Art


Miniaturized electro-optical sensors mounted on small unmanned air vehicles (“UAV's”) or unmanned aerial systems (“UAS's”) are potentially exploited by an adversary to obtain information on a military operation and its states of activity and readiness which may degrade the effectiveness of the operation.


What is needed is a cost-effective, deployable system to detect, track, identify, and enable interdiction of an adversary's UAS prior to the adversary having the ability to observe and report critical dispositions and activities.


As an emerging threat, small UAS's pose a difficult detection systems problem. First, the use of nonmetallic materials and particular shapes and geometries of an adversary's UAS results in poor detection performance of prior art, wide-area search radars due in part to low radar cross-section or “RCS” as has been confirmed in field tests.


Secondly, prior art acoustic search and detection approaches are likewise significantly impaired where battery-operated electric motors are used on a UAS for propulsion. A third detection difficulty is that neither the radar nor the acoustic technique provides inherent target ID capabilities.


Existing passive, electro-optical sensors can perform surveillance and target ID, but only at limited ranges and perform best only in daylight. Finally, the relatively small UAS infrared signatures severely limit detection ranges of cost-effective IR sensors.


An inherent “asymmetric advantage” for adversary forces lies in the fact that a simple UAS can be small, made of nonmetallic materials and can fly quietly using battery-powered electric motors. The use of broadly available IR sensors gives the adversary UAS a 24/7 operating capability. The relatively low cost of such systems ultimately imposes a requirement that the detection solution must also be cost-effective.


Thus the emerging threat is a small, difficult to observe, inexpensive vehicle that is capable of 24/7 operation.


The disclosed sensor system of the invention provides the means to deny an adversary the reconnaissance capabilities available in a UAS carrying readily available, passive visible and infrared sensors that could be deployed by enemy forces.


The disclosed invention is responsive to threat observables and operating timelines, supportive of all military response functions of wide area search, detect, track, ID, and interdiction to maximize “keep out” zones and accomplishes the needed detection solution in a fashion that is rugged, broadly operable, easily deployable and cost effective. A “keep out” zone of ≥6 Km diameter can be enforced for very small (0.1 m2) threat UAS of the WASP class using the invention.


Applicant's sensor system exploits high pulse rate, eye-safe SWIR fiber lasers, agile beam steering and very high sensitivity, large area focal plane arrays (FPAs) with stacked read-out integrated circuits (ROICs) using a multi-FPGA, high-throughput, low-power signal processor to provide a wide area, moderate resolution search volume for initial acquisition and tracking of an intruder UAS followed by a hand-over to a higher resolution 3D imaging LIDAR for target ID and precision tracking to support interdiction.


As earlier discussed, as an emerging threat, small UAS have the potential for posing a difficult systems problem. Use of nonmetallic materials and particular shapes can result in making traditional wide area search radars perform poorly due to low RCS. This problem is confirmed in U.S. Army sponsored field tests. Acoustic search and detection approaches are likewise significantly impaired if battery operated electric motors are used for propulsion. Neither of these techniques provides inherent target ID capabilities. Passive, electro-optical sensors can perform surveillance and target ID, but at limited ranges and only in daylight. Small UAS infrared signatures severely limit detection ranges of cost effective sensors.


A new detector design paradigm is needed. The disclosed invention enables that paradigm in a practical, cost-effective solution to the above deficiencies in the prior art and takes advantage of high performance micro-channel plate FPA imager technology.


BRIEF SUMMARY OF THE INVENTION

A sensor system is provided comprising a precision tracking sensor element and one or more acquisition sensor elements. The acquisition sensor element preferably has a larger field of view, with correspondingly lower resolution, than the precision tracking sensor element, which preferably has a relatively small field of view for each of its independent active and passive imaging photo-detector elements but higher resolution.


The acquisition sensor elements may be mounted on a rotating base element that rotates about a first axis to provide 360 degree detection about the first axis. The precision tracking sensor elements may be mounted on a hinged or pivoting element or gimbal on the housing and provided with automatic/computerized or manually controlled electronic drive means or both to permit a user to selectively manually or automatically direct the precision tracking sensor element toward a scene or target of interest detected by the acquisition sensor elements.


In a first aspect of the invention, a sensor system is provided comprising a first precision tracking element comprising imaging means for providing an electromagnetic illumination beam. The electromagnetic beam may have a predetermined imaging wavelength. The sensor system may have at least one precision tracking photo-detector element, an acquisition sensor element comprising at least one acquisition photo-detector element, wherein at least one of the photo-detector elements comprises an electronic module. The electronic module may comprise a stack of layers wherein the layers comprise a micro-lens array layer having at least one individual lens element for providing a beam output, a photocathode layer for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer comprising at least one micro-channel for generating a cascaded electron output in response to the photocathode electron output, and, a readout circuit layer for processing the output of the micro-channel.


In a second aspect of the invention the acquisition sensor element is mounted on a rotating base element that rotates about a first axis.


In a third aspect of the invention, the precision tracking sensor element is movably mounted to a housing to permit the selective direction toward a predetermined scene of interest.


In a fourth aspect of the invention, the sensor system further comprises at least one Risley prism assembly.


In a fifth aspect of the invention, the sensor system comprises Gray code counter circuit means.


In a sixth aspect of the invention, a sensor system is provided comprising a first precision tracking element comprising imaging means for providing an electromagnetic illumination beam having a predetermined imaging wavelength. The precision tracking element comprises scanning means for scanning the illumination beam on a target, a parabolic reflector element, a hyperbolic reflector element, beam-splitting means, a first precision tracking photo-detector element responsive to a predetermined first range of the electromagnetic spectrum having a first active field of view. The precision tracking element comprises a second precision tracking photo-detector element responsive to a predetermined first range of the electromagnetic spectrum having a first passive field of view. The sensor system further comprises at least one acquisition sensor element comprising an acquisition photo-detector element having a second field of view. At least one of the photo-detector elements in the precision tracking sensor element or in the acquisition sensor element or both comprises an electronic module comprising a stack of layers wherein the layers comprise a micro-lens array layer comprising at least one individual lens element for providing a beam output, a photocathode layer for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer comprising at least one micro-channel for generating a cascaded electron output in response to the photocathode electron output, and, a readout circuit layer for processing the output of the micro-channel.


In a seventh aspect of the invention, the parabolic reflector element and the hyperbolic reflector element are configured as a Cassegrain reflector telescope assembly.


In an eighth aspect of the invention, the illumination beam is projected through and incoming electromagnetic radiation is received through a common aperture.


In a ninth aspect of the invention, at least one of the first and second precision tracking photo-detector elements comprises an electronic module comprising a stack of layers wherein the layers comprise, a micro-lens array layer comprising at least one lens element for providing a beam output, a photocathode layer for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer comprising at least one micro-channel for generating a cascaded electron output in response to the photocathode electron output, and, a readout circuit layer for processing the output of the micro-channel.


In a tenth aspect of the invention, the readout circuit layer comprises a first sub-layer and a second sub-layer that are electrically coupled by means of a through-silicon via.


In an eleventh aspect of the invention, the sensor system further comprises a thermoelectric cooling layer.


In a twelfth aspect of the invention, the beam output of the lens element is substantially collimated.


In a thirteenth aspect of the invention, the readout layer is comprised of a set of readout sub-layers comprising a capacitor top metal and analog preamp sub-layer, a filtering and comparator sub-layer and a digital processing sub-layer.


In a fourteenth aspect of the invention, the predetermined ranges of the electromagnetic spectrum comprise ranges selected from the ultraviolet, visible, near-infrared, short-wave infrared, medium-wave infrared, long-wave infrared, far-infrared and x-ray ranges of the electromagnetic spectrum.


In a fifteenth aspect of the invention, the micro-channel plate is comprised of at least one micro-channel having a diameter of less than about 10 microns.


In a sixteenth aspect of the invention, the micro-channel plate is comprised of at least one micro-channel having a diameter of less than about five microns.


In a seventeenth aspect of the invention, the acquisition sensor is mounted on a rotating base element that rotates about a first axis.


In an eighteenth aspect of the invention, the precision tracking sensor element is movably mounted to a housing so as to be selectively directed toward a predetermined scene of interest.


In a nineteenth aspect of the invention, the sensor system further comprises at least one Risley prism assembly.


In a twentieth aspect of the invention, the Risley prism embodiment further comprises Gray code counter circuit means.


These and various additional aspects, embodiments and advantages of the present invention will become immediately apparent to those of ordinary skill in the art upon review of the Detailed Description and any claims to follow.


While the claimed apparatus and method herein has or will be described for the sake of grammatical fluidity with functional explanations, it is to be understood that the claims, unless expressly formulated under 35 USC 112, are not to be construed as necessarily limited in any way by the construction of “means” or “steps” limitations, but are to be accorded the full scope of the meaning and equivalents of the definition provided by the claims under the judicial doctrine of equivalents, and in the case where the claims are expressly formulated under 35 USC 112, are to be accorded full statutory equivalents under 35 USC 112.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS


FIG. 1 depicts a focal plane array photo-detector element comprising a micro-channel plate of the invention.



FIG. 1a depicts a block diagram of the major circuit elements of the layers or tiers of the photo-detector element of FIG. 1.



FIG. 2 depicts an individual pixel element of the photo-detector element of FIG. 1 comprising a micro-lens and individual micro-channel.



FIG. 3 depicts a sensor system of the invention having a plurality of acquisition sensor elements on a rotating base element and a precision tracking element moveably mounted on the upper surface thereof.



FIG. 4 depicts a preferred embodiment of the precision tracking element of the sensor system of the invention in a multiple wavelength, multiple photo-detector, Cassegrain telescope configuration.



FIG. 5 is a flow chart showing schematically the major elements of a calculation for the signal-to-noise ratio of the sensor system of the invention.



FIG. 6 is a graph showing system performance in a clear atmosphere.



FIG. 7 is a graph showing system performance in a light rain.



FIGS. 8a and 8b depict two optical ray paths in a Risley prism set with the prisms in the set oriented in different positions.



FIG. 9 depicts a further preferred embodiment of the sensor system of the invention having a Risley prism scanning and receiving assembly.





The invention and its various embodiments can now be better understood by turning to the following detailed description of the preferred embodiments which are presented as illustrated examples of the invention defined in the claims.


It is expressly understood that the invention as defined by the claims may be broader than the illustrated embodiments described below.


DETAILED DESCRIPTION OF THE INVENTION

Turning now to the figures wherein like references define like elements among the several views, Applicant discloses a sensor system comprising a large area micro-channel plate focal plane array (“FPA”).


The device of the invention preferably comprises a fiber laser operating in the eye-safe 1.55 micron band of the short-wave infrared spectrum or SWIR (i.e., from about 0.9 to about 1.7 microns), which beneficially provides the very high pulse rates (~100's of KHz) needed to provide 360° surveillance coverage at ranges greater than 5 Km sufficient to allow timely detection, tracking, identification, and interdiction of a UAS.


Coupled with the use of fiber lasers, the invention comprises very large photo-detector focal plane arrays and related readout integrated circuits (“FPA” and “ROIC” respectively) configured in a stacked microelectronic module. In order for the relatively low “per pulse” energies of fiber lasers (<100 μJ) to detect small adversary UAS at multiple kilometer ranges, large numbers of detectors with very high sensitivity (i.e., a few photons) and high dynamic range (>80 dB) are required and are provided in the invention.


Turning to the figures wherein like numerals define like elements among the several views, a multi-layer micro-channel plate assembly and module comprising a micro-lens layer structure for use in an imaging system and a sensor system are disclosed.


The photo-detector FPA of the device having such capabilities is illustrated in FIGS. 1 and 1a utilizing a high performance, stacked photo-detector element in the form of an FPA micro-channel plate module 1. Module 1 is interchangeably referred to herein as photo-detector element 1.


Using the micro-channel plate module 1 of the invention, a relatively small photon arrival event will result in a large number of output electrons (i.e., a cloud of electrons) and provide increased photo-detector performance.


Turning to the preferred embodiment of the micro-channel plate module 1 of the invention shown in FIGS. 1 and 1a, micro-channel plate technology and readout integrated circuit (“ROIC”) technology are integrated into a three-dimensional, stacked plurality of microelectronic layers in the form of a stacked electronic photo-detector element or module to provide a high-circuit density structure for use in imaging applications.


Module 1 comprises a stack of microelectronic integrated circuit layers, each layer of which may comprise a plurality of sub-layers.


A window element 5 is provided in the preferred vacuum package enclosure 6 encasing module 1 for the receiving of electromagnetic radiation (i.e., reflected or emitted light or electromagnetic energy) from a scene of interest. Window element 5 may be comprised of a fused silica or sapphire material suitable for transmitting a predetermined received wavelength selected by the user.


Incident electromagnetic radiation from the scene of interest is received through window 5 by the micro-lens array layer 10.


In the illustrated embodiment, micro-lens array 10 comprises a plurality of individual lens elements 10′.


Individual lens elements 10′ may further each comprise a plurality of lens sub-elements such as a biconvex lens sub-element 10a in optical cooperation with a plano-concave lens sub-element 10b depicted in FIG. 2. Individual lens elements 10′ of micro-lens array 10 receive incident radiation 15 from the scene and collect and collimate it to provide a focused and collimated micro-lens array output beam 15′.


Micro-lens array 10 may comprise a two-dimensional array of individual lens elements 10′ wherein each lens element has a diameter of about 0.05 to about 3 mm and a focal length of about 0.2 to about 20 mm or may be provided to have a tunable focal length.


A photocathode layer 20 is provided and has an input surface 20a and an output surface 20b. Photocathode layer 20 produces an electron output in response to an input of a predetermined range of the electromagnetic spectrum received from the lens element 10′. In a preferred embodiment, the photocathode layer 20 comprises an indium gallium arsenide material or InGaAs and is responsive to electromagnetic radiation in the infrared spectrum or IR.


The collimated micro-lens beam output 15′ is incident upon the input surface 20a of photocathode layer 20 and produces an electron output in response thereto. Because the photon input to photocathode layer 20 is substantially collimated by the plurality of multiple lens elements 10′ of micro-lens array layer 10, the electron output of photocathode layer 20 is substantially focused and defined so as to be received within individual channels 25 of micro-channel plate assembly layer 30 rather than striking the inactive area of the micro-channel plate surface.


The diameter of the individual lens elements 10′ is preferably greater than that of the diameter of channels 25 in micro-channel plate 30 in order to capture and redirect incident radiation from the scene that would ordinarily strike the inactive micro-channel plate array surface and instead is directed into the individual channels 25.


Photocathode layer 20 serves to convert input photons of a predetermined frequency or wavelength from a scene of interest into output electrons which exit the photocathode and are received by channels 25 disposed through the thickness of micro-channel plate 30.


Photocathode 20 comprises a charged electrode that when struck by one or more photons, emits one or more electrons due to the photoelectric effect, generating an electrical current flow through it.


The channels 25 are disposed in the micro-channel plate structure material such that they are substantially parallel to each other and in preferred embodiments are defined at a predetermined angle relative to the micro-channel input surface and micro-channel output surface of micro-channel plate 30.


As is known in the field of micro-channel plate technology, channels 25 function as electron multipliers acting as pixels when under the presence of an electric field. In operation, an electron emitted from photocathode layer 20 is admitted to the input aperture of channel 25 of micro-channel plate layer 30. The configuration of channels 25 assures the electrons will strike the interior wall or walls of channels 25 because of the angle at which the channels 25 are disposed with respect to planar surface of the micro-channel plate layer 30.


In operation, the collision of an electron with the interior walls of channel 25 causes an electron “cascading” effect, resulting in the propagation of a plurality of electrons through the channel and toward micro-channel layer output aperture.


The cascade of electrons exits the micro-channel layer output as an electron “cloud” whereby the electron input signal is amplified (i.e., cascaded) by several orders of magnitude to generate an amplified electron output signal.


Design factors affecting the amplification of the electron output signal from micro-channel plate 30 include electric field strength, the geometry of channels 25 and the micro-channel plate device material.


Subsequent to the electron output signal exiting channel 25, the micro-channel plate 30 recharges during a refresh cycle before another electron input signal is detected as is known in the field of micro-channel plate technology.


The amplified electron output signal from channel 25 comprising a cascaded plurality of electrons is received by an electrically conductive member 40 that is electronically coupled with appropriate readout circuitry.


The electrical coupling of the layers or sub-layers or both in the readout circuitry layer may be such as by electrically conductive through-silicon vias 45 disposed within or between the sub-layers.


The photocathode layer 20 output surface is disposed proximal and coplanar with micro-channel layer 30 input surface whereby when a photon strikes photocathode layer 20 input surface, one or more electrons are emitted thereby and enter a channel 25 disposed through the micro-channel plate, generating an electron cascade effect and defining a photon arrival event. The electrons generated by the photon arrival event are processed by elements of the stacked assembly and the micro-channel plate output is processed using suitable circuitry whereby an image is produced.


The photocathode and micro-channel plate of the invention are available from Hamamatsu or Photonis (Burle) and are preferably integrated as a stack of layers with the ROIC. In one embodiment, the micro-channel plate may be optimized using atomic layer deposition (ALD) films for conductive secondary electron emission, photocathode and stabilization layers to simplify integration.


The three-dimensional stacked microelectronic architecture of the invention permits a considerably smaller detector size, in part due to the use of small circuits and through-silicon-via (TSV) technology to electrically couple the layers of the invention, while maintaining high frame rates and five micron pixel sizes.


The invention may comprise a plurality of stacked and interconnected sub-layers in the form of integrated circuit chips that define a readout circuit layer 100. In the illustrated embodiment, readout circuit layer 100 comprises a plurality of sub-layers, here illustrated in FIGS. 1 and 1a as sub-layers 100A-D.


Tier 1 shown as sub-layer 100A may comprise preamplifier circuitry for noise reduction, improved signal-to-noise ratio, preprocessing and conditioning the output of the micro-channel layer 30 and may comprise a capacitor top metal and analog preamp circuitry.


Tier 2 shown as sub-layer 100B may comprise one or more differentiator circuits having an output received by a zero-crossing comparator with an addressable record input and may comprise filtering and comparator circuitry.


Tiers 3 and 4 shown as sub-layers 100C and 100D respectively comprise digital processing circuitry.


Sub-layer 100C may comprise a resettable Gray code counter circuit with an input into a first memory register.


Sub-layer 100D may comprise a second memory register or latch and multiplexing circuitry for multiplexing the output of the module to external circuitry.


The sub-layers 100A-D may be electrically coupled using through-silicon via 45 technology, wire-bonding, side-bussing using metallized T-connect structures or equivalent electrical coupling means used to electrically couple stacked microelectronic layers.


A thermoelectric cooler layer 200 may be provided in the module for temperature stabilization of the module and stacked circuit layers.


The module 1 may further be provided in the form of a pin grid array package interface 300 for electrical connection to external circuitry such as using a socketed connection.


The sensor system of the invention preferably uses the illustrated four tier stacked approach for the physical implementation of the LIDAR focal plane as depicted in FIG. 3 as is more fully discussed below. The stacking design rules, based on Tezzaron's Through-Silicon-Via (TSV) technology, allow the real estate necessary to implement the silicon electron detector, analog preprocessing and digital time of arrival post processing all within about a 12 micron unit cell.


Ninety nanometer silicon design rules allow 2 GHz signal speed and approximately 80 transistors per tier. In one embodiment, 2048 elements in the long direction result in an active area die width of about 24.5 mm.


Echo photons reflected from the surface of a laser-illuminated target strike the SWIR-responsive photocathode of module 1. The photocathode emits electrons with a quantum efficiency of about 0.15. The electrons are amplified by the micro-channel plate 30 with a gain of about 100,000. Thus the sensitivity of the focal plane module 1 is about seven photons. The micro-channel plate 30 can be delineated into channels 25 approaching five microns for very fine resolution amplification. The gain of the micro-channel plate 30 can be increased with increasing time from the Tzero timing signal, providing low gain for close objects and high gain for far objects, extending the non-distorting dynamic range of the system.
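By way of a brief numerical check, the signal chain quoted above (a quantum efficiency of about 0.15 and a micro-channel plate gain of about 100,000) can be evaluated with the following minimal Python sketch; the function name is illustrative and the figures are only those quoted in the preceding paragraph.

```python
def anode_electrons(photons: float, qe: float = 0.15, mcp_gain: float = 1e5) -> float:
    """Expected electrons delivered toward the tier-1 capacitor for a given photon arrival."""
    return photons * qe * mcp_gain

# ~7 echo photons at 15% quantum efficiency yield roughly one photoelectron,
# which the micro-channel plate multiplies to on the order of 1e5 electrons at the anode.
print(f"{anode_electrons(7):.0f} electrons")
```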


The amplified electrons are captured by the top metal of a capacitor in the unit cell on tier 1. The voltage built up on the capacitor is sensed by a preamp in the tier 1 active layer connected by a through-silicon via or TSV 45. Each TSV 45 for each unit cell takes up about 11% of the real estate, leaving space for approximately 60 to 70 transistors per tier. The memory latch or register requires approximately nine transistors per bit, thus two tiers are adequate for 10-12 bits.


The readout circuit electrical architecture is based on a simple LIDAR time-of-flight clocking concept and is depicted in the block diagram of FIG. 1a. A 10-bit clock, located outside the unit cell, is started counting by the Tzero pulse generated at the laser pulse signal. This counter feeds a 10-bit memory latch or register located inside each unit cell. When the echo is detected, the memory latch or register is triggered to capture the current count of the counter. The counter increments in a Gray code fashion to assure that only one bit is changing per clock cycle, saving power.
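By way of illustration only, the following minimal Python sketch models this time-of-flight clocking scheme, assuming a 2 GHz clock, a 10-bit Gray code counter and a latch that captures the count when the echo comparator trips; the function names and the example echo time are illustrative and not part of the disclosure.

```python
C = 299_792_458.0  # speed of light, m/s

def binary_to_gray(n: int) -> int:
    """Convert a binary count to Gray code (only one bit changes per increment)."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray coding so a latched value can be read back as a plain count."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def simulate_unit_cell(echo_time_s: float, f_clk_hz: float = 2e9, bits: int = 10) -> float:
    """Counter starts at Tzero (laser fire); the in-cell latch grabs the Gray count at echo detection."""
    count = int(echo_time_s * f_clk_hz)                  # free-running counter value at the echo
    latched_gray = binary_to_gray(count % (1 << bits))   # value captured by the unit-cell latch
    recovered = gray_to_binary(latched_gray)             # decoded off-chip
    return recovered * C / (2.0 * f_clk_hz)              # one count = c/(2*f_clk), ~7.5 cm at 2 GHz

# Example: an echo arriving 400 ns after Tzero corresponds to roughly 60 m of range.
print(round(simulate_unit_cell(400e-9), 3))
```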


The echo pulse is integrated on the capacitor in tier 1. After simple buffering, the signal is turned back into a pulse via a high pass filter that acts as a differentiator. A second high pass filter changes the waveform again to a signal that passes through zero. The final analog component is a comparator that changes state from 0 to 1 by a signal that crosses through zero. Tier 1 in the illustrated embodiment is comprised of an electron detection capacitor, simple buffer amp and reset switch.


Tier 2 contains the three analog components; two high pass filters and comparator.


Tier 3 and Tier 4 contain the memory registers or latches. The memory is sized to save up to 1024 range bins of data. The high pass filters and zero cross comparator will yield a range resolution dependent on the clocking frequency of the 10-bit Gray code counter regardless of the pulse width of the laser. A 2 GHz master clock will result in 7.5 cm range resolution. The double differentiated waveform will also cross zero at the peak of the echo return.


The four-tier stack of FIG. 1a may comprise an 8-bit memory latch or register in the form of a Gray code memory latch or register in one or more of the unit cells.


A prior art LIDAR ROIC contained a FIFO that captured the state of the comparator, converting time of return into digital ones within the FIFO. This technology stored multiple returns from a single cell but required significant real estate and consumed significant power.


The circuit design topology of the invention is to fit each unit cell with one or two memory-type latches. The latch is fed with a counter. When the comparator changes states, the latch captures the numerical value of the counter at that instant. This represents a time-to-digital converter.


The ROIC may be provided with only one counter for the entire focal plane of the invention and only one latch per unit cell. In the instant memory latch of the invention, the counter counts in Gray code instead of binary code and as a result only one bit will ever change per clock cycle. This provides a significant power-saving feature.


An example of decimal, binary and Gray code count is:

    Dec    Gray    Binary
    0      000     000
    1      001     001
    2      011     010
    3      010     011
    4      110     100
    5      111     101
    6      101     110
    7      100     111

The depth of the latch is measured in bits. A 10-bit latch can store up to 1024 counts. For a clock that runs at 2 GHz, each count is 7.5 cm. Thus, the total range gate depth is approximately 75 meters.
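A short worked check of the latch depth and range gate figures quoted above, assuming one count corresponds to c/(2·f_clk) of range:

```python
C = 299_792_458.0  # speed of light, m/s

def range_gate_depth_m(bits: int, f_clk_hz: float) -> float:
    """Total range window covered by the in-cell latch: counts * (c / (2 * f_clk))."""
    counts = 1 << bits                     # a 10-bit latch stores up to 1024 counts
    return counts * C / (2.0 * f_clk_hz)

# 10 bits at a 2 GHz clock: 1024 counts of ~7.5 cm each, roughly 75 m of range gate.
print(round(range_gate_depth_m(10, 2e9), 1))   # ~76.7
```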


This type of circuit beneficially fits into very small unit cells. As depicted in FIG. 3, a four-tier stack is illustrated that contains two digital tiers, each with a 4-bit latch. The unit cell size here is as small as 7.5 microns.


Each latch bit is 6-8 transistors, i.e., 64 transistors for an 8-bit latch. In the FIFO approach of the prior art ROIC, the number of transistors was about 6000 and each was required to change state with every return laser echo.


A key component of the LIDAR system is the fiber laser, such as is available from Aculight. This form of laser operates with a very fast pulse rate, up to >100 KHz, but each pulse contains only 100 micro-Joules. Operating in this manner, the laser is one of the most efficient eye-safe lasers available. In order to obtain the maximum range, the beam spread of the laser is confined to 28×28 pixels. The beam is then rapidly stepped to the next 28×28 pixel location in the photo-detector element 1 after each pulse. The laser pulse is optically scanned due to its small aperture, 1 cm; the photo-detector element 1 is electrically scanned due to its relatively large optical aperture, 10 cm.


In this mode of operation, only a small percentage of the photo-detector element 1 is operating at any one time, permitting the addressable record feature to power manage the focal plane for low power operation. The multiplexer and output sections of the photo-detector element 1 are always powered providing a 10-bit parallel output.
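The stepped illumination and addressable-record power management described above can be pictured with the following illustrative Python sketch; the array dimensions, the window iteration order and the reported duty figure are assumptions chosen only for illustration.

```python
def scan_windows(fpa_cols=2048, fpa_rows=32, block=28):
    """Yield the pixel window illuminated by each successive laser pulse.

    Only the yielded window needs its record circuitry enabled for that pulse,
    which is the idea behind the addressable-record power management above.
    """
    for row0 in range(0, fpa_rows, block):
        for col0 in range(0, fpa_cols, block):
            yield (row0, min(row0 + block, fpa_rows),
                   col0, min(col0 + block, fpa_cols))

# Fraction of the array active for any one pulse (illustrative dimensions).
active = 28 * 28 / (2048 * 32)
print(f"{sum(1 for _ in scan_windows())} windows per sweep, "
      f"{active:.1%} of pixels active per pulse")
```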


The ROIC for this counter-UAV application-specific design dissipates less than 1.5 W which may require a thermoelectric or TE cooler 200 to stabilize the temperature of the ROIC stack.


The sensor system 400 of FIG. 3 comprises a rotating or fixed base member 410 having one or more acquisition sensor elements 420 disposed thereon having a first field of view. An exemplar aperture for the acquisition sensor elements 420 may be 10 cm having a first field of view of about 30 degrees elevation and about 0.23 degrees azimuth.


Turning to FIG. 3, sensor system 400 further comprises a precision tracking sensor element 430 having a second field of view. An exemplar aperture for the precision tracking sensor element 430 may be 20 cm having a second field of view for an active imager of about 0.11 degrees and a field of view for a passive imager of about 0.57 degrees.


In a preferred embodiment, the second field of view of the precision tracking sensor element 430 is smaller than the first field of view of the acquisition sensor elements 420. Precision tracking sensor element 430 may be rotatably mounted on a hinge or gimbal 435 affixed to housing 437 to permit manual or automatic direction of the precision tracking sensor objective lens or aperture 440 toward a target. The precision tracking sensor element 430 or acquisition sensor element 420 or both comprise a photo-detector element 1, referred to herein as a precision tracking photo-detector element or elements and an acquisition sensor photo-detector element respectively.


Precision tracking sensor element 430 may comprise a first and second precision tracking photo-detector element for use in providing independent outputs from an optical beam-splitting element.


Further enhancing the performance of the sensor system 400 of the invention is the use of wide field-of-view optics: the rapid, large-volume scanning required to detect small UAS is achieved with wide-FOV sensors, which minimize mechanical requirements.


With respect to FIG. 4a, the precision tracking sensor system 430, here shown with a Risley prism, is controlled by the mission processor. Simple commands are decoded into the proper control by the sensor controller. Waveform generation takes place in the control function directly behind the FPA.


Output data from the sensor are pre-processed and stored in local memory for the mission processor to access. The scanning control has a dedicated interface that receives data from an IMU in addition to desired aiming commands.


Fiber lasers have been in development for several years and have recently been making an impact as a new laser source for applications. Fiber laser technology has the advantages of being air cooled, having no thermal loading or roll-over effects in the fiber, and no thermal lensing effects in the fiber.


The basic principle of a fiber laser is that the core of a doped glass fiber is pumped by a diode laser of a specific wavelength and two Bragg gratings etched into the fiber ends create the laser feedback cavity.


Several dopants and pump laser wavelength sources can be utilized to provide different wavelengths as shown in Table 1.









TABLE 1
Dopants and Fiber Laser Wavelength

    Dopant               Lasing wavelength (nm)
    Ytterbium (Yb)       1080
    Erbium (Er)          1550
    Erbium/Ytterbium     1550
    Neodymium (Nd)       1064
    Thulium              2000





In an alternative embodiment of FIG. 4a, the system of the invention exploits a wide FOV (e.g., 30°) sensor 430 which requires no elevation scanning and achieves azimuth scans by use of counter-rotating Risley prism optics. In this embodiment, no gimbals are required to achieve full search, detect, and initial tracking requirements.


Turning back to FIG. 4, a preferred embodiment of the configuration of the precision tracking element 430 and incorporating the micro-channel module 1 of the invention is disclosed.


Precision tracking sensor system 430 may comprise imaging means 510 for providing an electromagnetic illumination beam 510′ having a predetermined wavelength such as an eye-safe, four milli-joule laser source pulsed at 30 Hz with seven nanosecond pulse widths operating in about the 1.5 to about 2.0 micron region.


Precision tracking sensor system 430 may further comprise holographic beam-forming optics 520 and beam-scanning means 530 which may be in the form of a tip-tilt mirror assembly for scanning the illumination beam on a target in a field of regard.


Precision tracking sensor system 430 may comprise a parabolic reflector element 540 in optical cooperation with a hyperbolic reflector element 550.


Precision tracking sensor system 430 may comprise beam-splitting optical means 560 for the division of the received optical beam input into a first and second predetermined range of the electromagnetic spectrum.


Precision tracking sensor system 430 of the invention may comprise a first photo-detector element 570 responsive to a predetermined first range of the electromagnetic spectrum and a second photo-detector element 580 responsive to a predetermined second range of the electromagnetic spectrum. The first and second photo-detector elements 570 and 580 may each be selected to be responsive to predetermined ranges of the electromagnetic spectrum selected from the ultraviolet, visible, near-infrared, short-wave infrared, medium-wave infrared, long-wave infrared, far-infrared and x-ray ranges of the electromagnetic spectrum.


At least one of the first and second photo-detector elements may comprise a module 1 of the invention.


The parabolic reflector element 540 and the hyperbolic reflector element 550 are preferably configured as a Cassegrain reflector telescope assembly as is depicted in the various drawing figures.


The illumination beam 510′ is projected through and incoming electromagnetic radiation is received through a common aperture 590. The incoming electromagnetic radiation includes incoming visible and SWIR from the scene as well as the reflected laser echo from the target surface.


One or more optical notch or band-pass filters may optionally be provided between the beam-splitter and the first or second photo-detector elements or both to narrow the range of electromagnetic frequencies received by them from the split input beam.


The first and second photo-detector elements 570 and 580 may be provided in precision tracking sensor system 430 wherein at least one of the first and second photo-detector elements 570 and 580 comprises electronic module 1 comprising a stack of layers wherein the layers comprise a micro-lens array layer 10, a photocathode layer 20 for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer 30 comprising at least one channel 25 for generating a cascaded electron output in response to the photocathode electron output and a readout circuit layer 100 for processing the output of the micro-channel layer.


In a preferred embodiment, the disclosed system concept samples over 200 million pixels/sec. to provide required area search rates. Real-time processing of this data flow requires extremely high throughput processors with low power consumption in order to have a practical, affordable system. Applicant's innovative 3D stacked electronics technology enables a multi-FPGA, very high throughput solution.


The SNR calculation method for a preferred embodiment of the invention is shown schematically in FIG. 5; in the following discussion, each of the illustrated system elements is discussed in detail.


The collected signal energy is calculated from the relationship:










$E_s = \frac{E_L}{N_x N_y}\cdot\eta_T\cdot R_{surface}\cdot\frac{\Omega}{\pi}\cdot\eta_R$  (1)







Where
















    E_s         Signal energy                            J/pulse/pixel
    E_L         Laser energy                             J/pulse
    N_x         Laser size in detector pixels (x-dir)    pixel
    N_y         Laser size in detector pixels (y-dir)    pixel
    η_T         Transmission efficiency
    R_surface   Surface BRDF (Lambertian)
    Ω           Solid angle of collection                steradian
    η_R         Receiver efficiency









The solid angle is calculated from the relationship:









$\Omega = \frac{\pi}{4}\cdot\frac{D^2}{Rn^2}$  (2)







Where



















    D    Aperture diameter   m
    Rn   Range               m










Notice that the solid angle in equation (1) is divided by π because the surface reflectivity is assumed to be constant and corresponds to a Lambertian surface.


The photon energy is calculated from:










$E_{ph} = \frac{hc}{\lambda}$  (3)







Where



















    E_ph   Photon energy       J/photon
    h      Planck's constant   J·sec
    c      Speed of light      m/sec
    λ      Laser wavelength    micron










The number of signal photons, n_ph,s, is then determined by dividing the signal energy by the photon energy, i.e.,










$n_{ph,s} = \frac{E_s}{E_{ph}}$  (4)







The detector/electronic system bandwidth is designed to match the laser pulse time. The laser pulse width (full width at half maximum—FWHM) is one of the performance parameters. The bandwidth is determined from the laser pulse width using the relationship:









$B = \frac{0.5}{t_{rise}}$  (5)







Where



















    B        System bandwidth   Hz
    t_rise   Laser rise time    sec










The temporal distribution of the laser output is Gaussian. Therefore the laser rise time is the time taken to go from 0.1 to 0.9 of the maximum energy. Thus, for a Gaussian distribution






$t_{0.1} = \sqrt{-2\sigma^2 \ln(0.1)}$  (6)

$t_{0.9} = \sqrt{-2\sigma^2 \ln(0.9)}$  (7)

$\sigma^2 = \frac{-t_{0.5}^2}{2\ln(0.5)}$  (8)

$t_{rise} = t_{0.1} - t_{0.9}$  (9)


Where t_0.5 is the half width at half maximum and σ is the standard deviation.
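Equations (5) through (9) can be evaluated together as in the following illustrative Python sketch; the seven nanosecond pulse width used in the example call is the laser pulse width cited earlier in this description and is used here only as a sample input.

```python
import math

def bandwidth_from_fwhm(fwhm_s: float) -> float:
    """Equations (5)-(9): Gaussian pulse, 10%-90% rise time, B = 0.5 / t_rise."""
    t_half = fwhm_s / 2.0                              # half width at half maximum
    sigma2 = -t_half**2 / (2.0 * math.log(0.5))        # equation (8)
    t_01 = math.sqrt(-2.0 * sigma2 * math.log(0.1))    # equation (6)
    t_09 = math.sqrt(-2.0 * sigma2 * math.log(0.9))    # equation (7)
    t_rise = t_01 - t_09                               # equation (9)
    return 0.5 / t_rise                                # equation (5)

# Example: a 7 ns FWHM pulse gives a matched system bandwidth on the order of 100 MHz.
print(f"{bandwidth_from_fwhm(7e-9)/1e6:.1f} MHz")
```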


The atmospheric transmission efficiency for different environments is calculated using MODTRAN. In this analysis, Applicant has documented the results of two cases: (a) clear atmosphere, and (b) light rain (low density of water droplets). The model uses both Mie scattering and a Dermendjian distribution to calculate the extinction coefficient as well as the MODTRAN output.


The number of signal electrons generated at the FPA anode is determined by the FPA quantum efficiency and gain through the relationship:






$n_{e,s} = n_{ph,s}\,\eta_Q\,G\,F_x$  (10)


Where


    n_e,s    # of anode signal electrons
    n_ph,s   # of photons
    η_Q      APD quantum efficiency
    G        APD gain
    F_x      pixel fill factor


The dark current noise is usually divided into bulk and surface dark current noise. The surface dark current noise is very small and will be neglected. The bulk dark current is






$I_{db} = 1\ \text{nAmp}$  (11)


Therefore the dark current noise is calculated from the relationship:






$\langle i_{n,dark}^2 \rangle = 2\,q\,I_{db}\,B\,G^2 F$  (12)


The dark current noise could be converted into a number of electrons using the relationship:










$n_{dark\,current}^2 = \frac{\langle i_{n,dark}^2 \rangle}{(2qB)^2}$  (13)







Where

    • ⟨i_n,dark²⟩ ensemble average of the square of the dark current noise
    • n_dark ensemble average of the dark current noise (electrons)
    • q electron charge
    • I_db bulk dark current
    • B system bandwidth
    • G APD gain
    • F excess noise factor


The shot noise arises from the random statistical Poissonian fluctuations of the signal electrons. The shot noise is calculated from:






$\langle i_{n,shot}^2 \rangle = 2\,q\,i_s\,B\,G^2 F$  (14)


Here, i_s is the photo-electron current (before any gain or amplification). The shot noise is also calculated as a number of electrons using the relationship:






$n_{shot}^2 = (n_{ph,s}\,\eta_Q\,F_x)\,G^2 F$  (15)


The FPA/electronic system total noise is the square root of the sum of the squares of the individual noise contributions. Thus





$noise_{total} = \sqrt{n_{shot}^2 + n_{dark\,current}^2 + n_{pre\text{-}amplifier}^2}$  (16)


The signal to noise ratio is calculated from:









$SNR = \frac{\#\ \text{of signal electrons}}{\#\ \text{of noise electrons}}$  (17)
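For illustration, the following Python sketch chains equations (1) through (4) and (10) through (17) for a single pixel; every numerical input in the example call, including the pre-amplifier noise term, is a placeholder assumption rather than a disclosed design value.

```python
import math

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # electron charge, C

def snr(E_L, Nx, Ny, eta_T, R_surf, eta_R, D, Rn, wavelength,
        eta_Q, G, Fx, F, B, I_db=1e-9, n_preamp=100.0):
    """Chain equations (1)-(4) and (10)-(17) for one pixel; all inputs are illustrative."""
    omega = (math.pi / 4.0) * D**2 / Rn**2                                 # (2) solid angle
    E_s = (E_L / (Nx * Ny)) * eta_T * R_surf * (omega / math.pi) * eta_R   # (1) signal energy
    E_ph = H * C / wavelength                                              # (3) photon energy
    n_ph = E_s / E_ph                                                      # (4) signal photons
    n_sig = n_ph * eta_Q * G * Fx                                          # (10) anode electrons
    i_dark2 = 2.0 * Q * I_db * B * G**2 * F                                # (12) dark noise power
    n_dark2 = i_dark2 / (2.0 * Q * B)**2                                   # (13) as electrons^2
    n_shot2 = (n_ph * eta_Q * Fx) * G**2 * F                               # (15) shot electrons^2
    noise = math.sqrt(n_shot2 + n_dark2 + n_preamp**2)                     # (16) rms total
    return n_sig / noise                                                   # (17)

# Illustrative call only; none of these numbers is taken from the disclosed design.
print(round(snr(E_L=1e-4, Nx=28, Ny=28, eta_T=0.7, R_surf=0.3, eta_R=0.6,
                D=0.10, Rn=1000.0, wavelength=1.55e-6,
                eta_Q=0.15, G=1e5, Fx=0.8, F=2.0, B=1e8), 1))
```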







The signal level is scaled with the range as:











$\frac{S_1}{S_2} = \left(\frac{R_2}{R_1}\right)^2$  (18)







Where Si is the signal at range Ri. Therefore using the signal at the pivot range, the number of anode electrons ne,s at any range Rn is calculated from:










$n_{e,s} = \left. n_{e,s}\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2$  (19)







The shot noise is the only noise affected by the signal. Therefore the shot noise nshot at any range Rn is calculated from the shot noise at the pivot range such that:










$n_{shot}^2 = \left. n_{shot}^2\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2$  (20)







Therefore the SNR becomes:










$SNR(Rn) = \dfrac{\left. n_{e,s}\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2}{\sqrt{n_{dark\,current}^2 + n_{pre\text{-}amplifier}^2 + \left. n_{shot}^2\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2}}$  (21)







Similarly, the scaling laws for the SNR for different reflectivities, R, could be expressed as:










$SNR(Rn, R) = \dfrac{\left. n_{e,s}\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2 \cdot \left(\frac{R}{R_{Pivot}}\right)}{\sqrt{n_{dark\,current}^2 + n_{pre\text{-}amplifier}^2 + \left. n_{shot}^2\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2 \cdot \left(\frac{R}{R_{Pivot}}\right)}}$  (22)







The scaling laws for the different receiver aperture diameter are:










$SNR(Rn, R, D) = \dfrac{\left. n_{e,s}\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2 \cdot \left(\frac{R}{R_{Pivot}}\right) \cdot \left(\frac{D}{D_{Pivot}}\right)^2}{\sqrt{n_{dark\,current}^2 + n_{pre\text{-}amplifier}^2 + \left. n_{shot}^2\right|_{Pivot} \cdot \left(\frac{Rn_{Pivot}}{Rn}\right)^2 \cdot \left(\frac{R}{R_{Pivot}}\right) \cdot \left(\frac{D}{D_{Pivot}}\right)^2}}$  (23)
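The scaling laws of equations (19) through (23) can be applied as in the following illustrative sketch, which rescales pivot-range values of the signal and shot noise to a new range, reflectivity and aperture; the pivot numbers in the example call are placeholders only.

```python
import math

def snr_scaled(n_es_pivot, n_shot2_pivot, n_dark2, n_preamp2,
               Rn, Rn_pivot, R=None, R_pivot=None, D=None, D_pivot=None):
    """Equations (19)-(23): rescale pivot-range signal and shot noise to a new geometry."""
    scale = (Rn_pivot / Rn) ** 2                 # range scaling, equations (19)/(20)
    if R is not None and R_pivot is not None:
        scale *= R / R_pivot                     # reflectivity scaling, equation (22)
    if D is not None and D_pivot is not None:
        scale *= (D / D_pivot) ** 2              # aperture scaling, equation (23)
    signal = n_es_pivot * scale
    noise = math.sqrt(n_dark2 + n_preamp2 + n_shot2_pivot * scale)
    return signal / noise                        # equation (21)

# Placeholder pivot numbers: SNR at twice the pivot range with half the pivot reflectivity.
print(round(snr_scaled(n_es_pivot=4e5, n_shot2_pivot=8e10, n_dark2=6e11, n_preamp2=1e4,
                       Rn=2000.0, Rn_pivot=1000.0, R=0.15, R_pivot=0.3), 2))
```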







Using the above, the performance of a baseline precision tracking sensor system 430 is given in FIGS. 6 and 7 for different atmospheric conditions. The performance has been calculated for different target sizes: 2, 0.5, 0.25, 0.1 m2 target area. The figures show that the system is capable of detecting the smallest targets at 3 km distances with SNR=6.


This disclosed approach to tracking UAS's incorporates the foundation of Kalman filter theory and provides a sophisticated clutter rejection scheme. This target tracking theory employs the temporal sequence of position measurements to update a corresponding sequence of target state estimates.


The general UAS tracking idea is that target position at the next measurement time is first predicted on the basis of an underlying propagation model, and then this prediction is updated using the available measurement information. This two-step procedure is repeated for each new measurement. By using a dynamic system model with a large ‘process noise’ variance and a small ‘observation noise’ variance, the procedure can be tuned to rely more strongly on the observational (measurement) data and less strongly upon the propagation model.
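By way of illustration, a minimal one-dimensional constant-velocity Kalman predict/update cycle is sketched below; the state model, noise values and measurements are assumptions chosen only to show the two-step procedure and the measurement-heavy tuning described above, and do not represent the disclosed tracker.

```python
import numpy as np

def kf_step(x, P, z, dt=0.1, q=10.0, r=0.5):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    A large process noise q and a small observation noise r make the track lean
    on the measurements rather than the propagation model, as recommended above
    for maneuvering UAS targets.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity propagation
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # process-noise covariance
                      [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])                   # only position is measured
    R = np.array([[r]])
    # Predict from the propagation model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new position measurement z.
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 100.0
for z in [10.2, 11.1, 11.9, 13.2]:               # synthetic position measurements
    x, P = kf_step(x, P, np.array([z]))
print(x)                                         # estimated [position, velocity]
```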


Such tuning is appropriate for tracking UAS's which are subject to unknown forces (e.g., increased thrust or changes in aerodynamic forces due to the movement of control surfaces). Further, with such weak reliance on the propagation model, a relatively simple dynamic system model should suffice.


Two types of clutter are addressed: stationary echoes (e.g., reflections off buildings) and moving echoes (e.g., flying birds). A simple approach for eliminating stationary clutter is to define a horizon, for example from building top to building top in an urban environment, and only process data above that line. While this simple approach can effectively track high-azimuth targets, it fails to track low-azimuth targets. A more robust approach enables tracking of both high and low-azimuth targets. This approach develops a 3D map of fixed-position reflectors that are located in range as well as bearing and azimuth. Such a map enables tracking a UAS passing at low azimuth in front of local buildings. Moreover, when a UAS is obscured and its track is lost (e.g., when the UAS passes behind a local building), a user can reacquire its track once the UAS again becomes visible.


A key issue arising in environments with moving clutter is that of data association: during the update step, the tracking algorithm must choose which of the available measurements to associate with the current track. One way to address this issue is to simply associate the measurement that is closest to the predicted target position with the current track; however, if the UAS has undergone a maneuver, this closest measurement may well represent a clutter response and thereby lead to a tracking error.
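A minimal sketch of the nearest-neighbor association rule described above follows; the gate radius and the coordinates in the example are illustrative assumptions.

```python
import math

def nearest_neighbor(predicted, measurements, gate=50.0):
    """Pick the measurement closest to the predicted (x, y, z) position, within a gate radius.

    Returns None when nothing falls inside the gate, the case that motivates
    spawning multiple candidate tracks instead.
    """
    best, best_d = None, gate
    for m in measurements:
        d = math.dist(predicted, m)
        if d < best_d:
            best, best_d = m, d
    return best

print(nearest_neighbor((100.0, 50.0, 20.0),
                       [(103.0, 52.0, 21.0), (160.0, 40.0, 25.0)]))
```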


Another approach is to spawn several tracks, each associated with a different measurement; if enough tracks are spawned, one will include the correct data association. The obvious challenge with this approach is to keep the number of tracks to a manageable level by (a) judiciously spawning new tracks, instead of exhaustively following all possible data associations, and (b) eliminating old tracks whose track-history appears inconsistent with UAS behavior. For example, over time, spurious tracks are likely to exhibit erratic movements with excessive velocities and/or accelerations. Further, a large number of tracks with a ‘parallel’ track history may be indicative of a flock of birds; such track groups can be tagged and, when they are sufficiently separated from the more interesting target tracks, eliminated.


This detection-tracking approach can be validated using a scenario simulator to generate synthetic laser data consistent with one or two realistic threats. The simulator may include a UAS flight model that can be programmed for realistic speed and heading maneuvers and incorporate the effects of wind. The simulated laser data is thus consistent with laser errors in 3D space, allowing tracking performance to be assessed against laser targeting accuracy. The simulator is also used to compare threat tracking accuracy to the requirements for various interdiction solutions.


Target classification is based on threat size estimates and flight dynamics. The high resolution of the laser output guarantees multiple hits from the target from which an image profile can be constructed. Target tracks are used to estimate the aspect angle of the threat which is then to be used to translate the imaging hits to threat dimensions. The images are refined over time as the target maneuvers and compared to a library of candidate UAS threats to select the most likely UAS model. In some cases, model identification can provide insight to the RF link to the UAS which enables the threat to be neutralized via jamming.


Once a threat has been identified, there are several possible interdiction reactions. One technique is to jam the communications link between the UAS and its operator to render it ineffective (thwart data download). The RF control link can also be jammed but a UAS flying via waypoints would not be impacted. The use of jamming may impact friendly RF-based systems so one needs to take into account the operational configuration. Selective jamming (i.e., spatially focused jamming) is a reasonable option.


The light returning from a small target may be very low level so the optics are optimized to efficiently capture the returning light for processing. This requires a large collection aperture for the wide field of view optics, and a beam steering mechanism to match the field of view to the illuminating laser. The latter may be accomplished by using a rotating Risley prism embodiment which permits fast field of view scanning with limited power requirements.


The collection optics of the Risley prism embodiment provides a large aperture to efficiently capture the light returning from a small target. As the field of view increases, the aperture decreases resulting in a tradeoff between FOV and aperture. A good balance is found with a 30° HFOV with a 100 mm diameter clear aperture. From this starting point, the specifications for a Risley embodiment are calculated as follows:















    Sensor Field of View                        30° × 0.25°
    Sensor Clear Aperture Diameter              100 mm
    Optics IFOV                                 128 microradians
    Lens focal length                           115 mm
    Lens F/#                                    F/1.2
    Lens resolution (as-built)                  >20% MTF at 35 lpm
    Compatible with FPA Size                    4,096 × 32 pixels
    Compatible with FPA Pixel Pitch             15 microns
    Image circle diameter                       64 mm diameter
    Spectral Coverage (1)                       1.53-1.58 microns
    Thermal ΔT Range For Design                 −40° to +60° C.
    Counter Rotating Wedges For Azimuth Scan    ±45°
      (if using Risley prisms)
    Beam pointing accuracy                      Better than 250 microradians

    (1) Filtered down to ~10 nanometers FWHM. Peak transmission of optics to occur in the region indicated.
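As a small consistency check of the tabulated optics figures, and assuming the conventional relation IFOV ≈ pixel pitch / focal length (a relation not stated in the table itself):

```python
pixel_pitch_m = 15e-6      # FPA pixel pitch from the table
focal_length_m = 0.115     # lens focal length from the table

ifov_urad = pixel_pitch_m / focal_length_m * 1e6
print(f"IFOV ~ {ifov_urad:.0f} microradians")   # ~130, consistent with the 128 microradian entry
```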







Lens design is done using a least squares method for minimizing the merit values for the given input variables.


Because of this narrow-area scan design of the device, the lens field of view is scanned to match the illumination laser light. In one embodiment, this is accomplished through the use of counter-rotating prisms allowing the beam to be steered +/−45°. This system of Risley prisms is illustrated in FIGS. 8a and 8b.


Generally, with the apexes of the two prisms pointing in opposite directions, the prisms act as a parallel-sided plate of glass and impart no deviation to the light. This is the orientation when the illumination laser is at 0° (parallel) to the optical axis.


As the prisms rotate 90° about the optic axis, they end up with the apexes pointed in the same direction. In this case, the two prisms act as a single prism with an included angle equal to twice the included angle of either prism alone. This allows for maximum deviation of 45°.


Another 90° rotation and the prisms are pointing in opposite directions imparting no light deviation. Another 90° rotation aligns the apex angles again, this time diverting the light −45°. As the prisms rotate at a constant speed, the light deviation varies from +45° to −45° in a sinusoidal pattern.


This rotation must be accomplished quickly and smoothly. A dual stepper motor controls the speed and position of the prisms accurately to keep them aligned and in sync with the illumination laser. This motor is an integral part of the optical barrel design.


An exemplar Risley pair, with the rotating prisms at a 180 degree difference in position, is shown in the two views of FIGS. 8a and 8b.


The incoming light k1 enters the prism pair on the left and is redirected in the direction of k3. As shown in FIG. 8a, if the prisms have their wedge normals aligned, there is just translation of the output beam k3 with respect to the input beam k1. If the wedge normals are pointed opposite each other as shown in FIG. 8b, the output beam experiences a maximum elevation deviation. The direction cosines for a beam emerging from such a Risley pair are given below.













$\begin{bmatrix} k_{3x} \\ k_{3y} \\ k_{3z} \end{bmatrix} = \begin{bmatrix} \cos\varphi\,\sin\theta \\ \sin\varphi\,\sin\theta \\ \cos\theta \end{bmatrix} = \begin{bmatrix} \beta\sin\alpha + \cos\varphi\,\sin\alpha\left[\sqrt{1 - n^2 + \gamma^2(\varphi)} - \gamma(\varphi)\right] \\ \sin\varphi\,\sin\alpha\left[\sqrt{1 - n^2 + \gamma^2(\varphi)} - \gamma(\varphi)\right] \\ (1 + \beta\cos\alpha) + \cos\alpha\left[\sqrt{1 - n^2 + \gamma^2(\varphi)} - \gamma(\varphi)\right] \end{bmatrix}$  (1)







Where





$\beta = \sqrt{n^2 - \sin^2\alpha} - \cos\alpha$  (2)





And





$\gamma(\varphi) = \cos\alpha + \beta\,(\cos^2\alpha + \cos\varphi\,\sin^2\alpha)$  (3)


Where the angle φ is the azimuthal rotation between prisms.


Therefore, the Risley pair can be used to redirect a light beam to any elevation angle and azimuthal angle limited only by the wedge angle of the prisms, and given by the azimuthal rotation between prisms.
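The direction-cosine relations of equations (1) through (3) of this section can be evaluated as in the following illustrative Python sketch; the wedge angle and refractive index used in the example sweep are assumed values, not disclosed design parameters.

```python
import math

def risley_direction(alpha_deg, n, phi_deg):
    """Direction cosines (k3x, k3y, k3z) of the beam leaving a Risley pair, per equations (1)-(3)."""
    a = math.radians(alpha_deg)        # prism wedge angle
    phi = math.radians(phi_deg)        # azimuthal rotation between the two prisms
    beta = math.sqrt(n**2 - math.sin(a)**2) - math.cos(a)                              # equation (2)
    gamma = math.cos(a) + beta * (math.cos(a)**2 + math.cos(phi) * math.sin(a)**2)     # equation (3)
    bracket = math.sqrt(1.0 - n**2 + gamma**2) - gamma
    k3x = beta * math.sin(a) + math.cos(phi) * math.sin(a) * bracket
    k3y = math.sin(phi) * math.sin(a) * bracket
    k3z = (1.0 + beta * math.cos(a)) + math.cos(a) * bracket
    return k3x, k3y, k3z

# Sweep the relative rotation and print the resulting total deviation angle in degrees
# (assumed 11-degree wedges of index 1.5).
for phi in (0.0, 90.0, 180.0):
    kx, ky, kz = risley_direction(alpha_deg=11.0, n=1.5, phi_deg=phi)
    print(phi, round(math.degrees(math.atan2(math.hypot(kx, ky), kz)), 2))
```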


Many alterations and modifications may be made by those having ordinary skill in the art without departing from the spirit and scope of the invention. Therefore, it must be understood that the illustrated embodiment has been set forth only for the purposes of example and that it should not be taken as limiting the invention as defined by the following claims. For example, notwithstanding the fact that the elements of a claim are set forth below in a certain combination, it must be expressly understood that the invention includes other combinations of fewer, more or different elements, which are disclosed above even when not initially claimed in such combinations.


The words used in this specification to describe the invention and its various embodiments are to be understood not only in the sense of their commonly defined meanings, but to include by special definition in this specification structure, material or acts beyond the scope of the commonly defined meanings. Thus if an element can be understood in the context of this specification as including more than one meaning, then its use in a claim must be understood as being generic to all possible meanings supported by the specification and by the word itself.


The definitions of the words or elements of the following claims are, therefore, defined in this specification to include not only the combination of elements which are literally set forth, but all equivalent structure, material or acts for performing substantially the same function in substantially the same way to obtain substantially the same result. In this sense it is therefore contemplated that an equivalent substitution of two or more elements may be made for any one of the elements in the claims below or that a single element may be substituted for two or more elements in a claim. Although elements may be described above as acting in certain combinations and even initially claimed as such, it is to be expressly understood that one or more elements from a claimed combination can in some cases be excised from the combination and that the claimed combination may be directed to a subcombination or variation of a subcombination.


Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.


The claims are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, what can be obviously substituted and also what essentially incorporates the essential idea of the invention.

Claims
  • 1. A sensor system comprising: a first precision tracking element comprising imaging means for providing an electromagnetic illumination beam having a predetermined imaging wavelength and at least one precision tracking photo-detector element, an acquisition sensor element comprising at least one acquisition photo-detector element, wherein at least one of the photo-detector elements comprises an electronic module comprising a stack of layers wherein the layers comprise a micro-lens array layer having at least one individual lens element for providing a beam output, a photocathode layer for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer comprising at least one micro-channel for generating a cascaded electron output in response to the photocathode electron output, and, a readout circuit layer for processing the output of the micro-channel.
  • 2. The sensor system of claim 1 wherein the acquisition sensor element is mounted on a rotating base element that rotates about a first axis.
  • 3. The sensor system of claim 1 wherein the precision tracking sensor element is movably mounted to a housing to permit the selective direction toward a predetermined scene of interest.
  • 4. The sensor system of claim 1 further comprising at least one Risley prism assembly.
  • 5. The sensor system of claim 1 comprising Gray code counter circuit means.
  • 6. A sensor system comprising: a first precision tracking element comprising imaging means for providing an electromagnetic illumination beam having a predetermined imaging wavelength, scanning means for scanning the illumination beam on a target, a parabolic reflector element, a hyperbolic reflector element, beam-splitting means, a first precision tracking photo-detector element responsive to a predetermined first range of the electromagnetic spectrum having a first active field of view, a second precision tracking photo-detector element responsive to a predetermined first range of the electromagnetic spectrum having a first passive field of view, and, at least one acquisition sensor element comprising an acquisition photo-detector element having a second field of view, wherein at least one of the photo-detector elements comprises an electronic module comprising a stack of layers wherein the layers comprise a micro-lens array layer comprising at least one individual lens element for providing a beam output, a photocathode layer for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer comprising at least one micro-channel for generating a cascaded electron output in response to the photocathode electron output, and, a readout circuit layer for processing the output of the micro-channel.
  • 7. The sensor system of claim 6 wherein the parabolic reflector element and the hyperbolic reflector element are configured as a Cassegrain reflector telescope assembly.
  • 8. The sensor system of claim 6 wherein the illumination beam is projected through and incoming electromagnetic radiation is received through a common aperture.
  • 9. The sensor system of claim 6 wherein at least one of the first and second precision tracking photo-detector elements comprises an electronic module comprising a stack of layers wherein the layers comprise, a micro-lens array layer comprising at least one lens element for providing a beam output, a photocathode layer for generating a photocathode electron output in response to a predetermined range of the electromagnetic spectrum, a micro-channel plate layer comprising at least one micro-channel for generating a cascaded electron output in response to the photocathode electron output, and, a readout circuit layer for processing the output of the micro-channel.
  • 10. The sensor system of claim 6 wherein the readout circuit layer comprises a first sub-layer and a second sub-layer that are electrically coupled by means of a through-silicon via.
  • 11. The sensor system of claim 6 further comprising a thermoelectric cooling layer.
  • 12. The sensor system of claim 6 wherein the beam output of the lens element is substantially collimated.
  • 13. The sensor system of claim 6 wherein the readout layer is comprised of a set of readout sub-layers comprising a capacitor top metal and analog preamp sub-layer, a filtering and comparator sub-layer and a digital processing sub-layer.
  • 14. The sensor system of claim 6 wherein the predetermined ranges of the electromagnetic spectrum comprise ranges selected from the ultraviolet, visible, near-infrared, short-wave infrared, medium-wave infrared, long-wave infrared, far-infrared and x-ray ranges of the electromagnetic spectrum.
  • 15. The sensor system of claim 6 wherein the micro-channel plate is comprised of at least one micro-channel having a diameter of less than about 10 microns.
  • 16. The sensor system of claim 6 wherein the micro-channel plate is comprised of at least one micro-channel having a diameter of less than about five microns.
  • 17. The sensor system of claim 6 wherein the acquisition sensor is mounted on a rotating base element that rotates about a first axis.
  • 18. The sensor system of claim 6 wherein the precision tracking sensor element is movably mounted to a housing so as to be selectively directed toward a predetermined scene of interest.
  • 19. The sensor system of claim 6 further comprising at least one Risley prism assembly.
  • 20. The sensor system of claim 6 comprising Gray code counter circuit means.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 61/442,404, filed on Feb. 14, 2011, entitled "LADAR System Comprising Large Area Focal Plane Array and Risley Prism Beam Steering", pursuant to 35 USC 119, which application is incorporated fully herein by reference. This application is a continuation-in-part application of U.S. patent application Ser. No. 12/924,141, entitled "Multi-layer Photon Counting Electronic Module", filed on Sep. 20, 2010, which in turn claims priority to U.S. Provisional Patent Application No. 61/277,360, entitled "Three-Dimensional Multi-Level Logic Cascade Counter", filed on Sep. 22, 2009, pursuant to 35 USC 119, which applications are incorporated fully herein by reference. This application is a continuation-in-part application of U.S. patent application Ser. No. 13/108,172, entitled "Sensor Element and System Comprising Wide Field of View 3-D imaging LIDAR", filed on May 16, 2011, which in turn claims priority to U.S. Provisional Patent Application No. 61/395,712, entitled "Autonomous Landing at Unprepared Sites for a Cargo Unmanned Air System", filed on May 18, 2010, pursuant to 35 USC 119, which applications are incorporated fully herein by reference. This application is a continuation-in-part application of U.S. patent application Ser. No. 13/338,328, entitled "Stacked Micro-channel Plate Assembly Comprising a Microlens", filed on Dec. 28, 2011, which claims priority to U.S. Provisional Patent Application No. 61/460,173, filed on Dec. 28, 2010, pursuant to 35 USC 119, which applications are incorporated fully herein by reference. This application is a continuation-in-part application of U.S. patent application Ser. No. 13/338,332, entitled "Sensor System Comprising Stacked Micro-Channel Plate Detector", filed on Dec. 28, 2011, which claims priority to U.S. Provisional Patent Application No. 61/460,172, filed on Dec. 28, 2010, pursuant to 35 USC 119, which applications are incorporated fully herein by reference.

Provisional Applications (5)
Number Date Country
61442404 Feb 2011 US
61277360 Sep 2009 US
61395712 May 2010 US
61460173 Dec 2010 US
61460172 Dec 2010 US
Continuation in Parts (4)
Number Date Country
Parent 12924141 Sep 2010 US
Child 13372184 US
Parent 13108172 May 2011 US
Child 12924141 US
Parent 13338328 Dec 2011 US
Child 13108172 US
Parent 13338332 Dec 2011 US
Child 13338328 US