The present invention relates to systems and methods used to determine and/or compensate for the motion of individual sensors in sensor arrays such as sonars and radars. More particularly, the present invention relates to the use of Micro Electro Mechanical System Inertial Measurement Units (MEMS IMUs) in sensor arrays.
A sensor array is a collection of sensors, usually arranged in an ordered pattern, used for collecting and processing electromagnetic or acoustic signals. The sensors can be active (transmitter/receiver module arrays) or passive (receive only). For example, an array of radio antenna elements used for transmitting and receiving, with or without beamforming, can increase the gain in the direction of the signal while decreasing the gain in other directions by shifting the phase of the individual elements. An example of a passive application is the use of a sensor array to estimate the direction of arrival of electromagnetic waves. The sensor “array” can also be a synthetic or distributed antenna or virtual array, ranging from as few as a single element moving in space (with measurements made at multiple times, and hence positions) up to hundreds, thousands, or more elements. These applications include synthetic aperture radar (SAR) and sonar and radio direction finding (RDF).
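To make the passive direction-of-arrival example concrete, the arrival angle of a plane wave can be recovered from the phase difference measured between two elements. The following is a minimal sketch; the spacing, wavelength, and function name are illustrative assumptions, not values from the text:

```python
import math

def doa_deg(phase_diff_rad: float, spacing_m: float, wavelength_m: float) -> float:
    """Direction of arrival (degrees off boresight) for a plane wave crossing
    two elements spaced `spacing_m` apart, from their measured phase difference."""
    return math.degrees(math.asin(phase_diff_rad * wavelength_m /
                                  (2.0 * math.pi * spacing_m)))

# Half-wavelength spacing (15 mm at a 30 mm wavelength) and a 90 degree
# measured phase difference imply a wave arriving 30 degrees off boresight.
angle = doa_deg(math.pi / 2.0, 0.015, 0.03)
```

This is the standard two-element interferometer relation; real arrays combine many such baselines to resolve ambiguities.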
Sensor arrays are sensitive in one way or another to motion, including the overall motion of the system, and to internal distortions of the system. The accuracy of signal processing, particularly processing over time, is limited by platform and sensor motion detection and compensation methods. In fact, motion compensation techniques are one of the key limits to overall system performance. For sensor arrays such as phased array radars or sonars, the phase of each transducer in the array must be controlled, typically by time delays. Consequently, accurate timing is important. However, if the transducers are moving relative to each other, additional time delays must be added or subtracted to correct for errors in the relative phase delays. At some frequencies of interest, the wavelengths can be on the order of millimeters. At such frequencies, very small submillimeter vibrations can affect the signal resolution.
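The sensitivity to submillimeter motion noted above follows directly from the wavelength: an element displaced a distance d along the look direction shifts the round-trip phase by 2π·2d/λ. A brief sketch with illustrative numbers:

```python
import math

def phase_error_deg(displacement_m: float, wavelength_m: float) -> float:
    """Round-trip phase error (degrees) caused by an element displaced by
    `displacement_m` along the look direction at wavelength `wavelength_m`.
    The factor of 2 accounts for the two-way path in an active array."""
    return math.degrees(2.0 * math.pi * 2.0 * displacement_m / wavelength_m)

# A 0.1 mm vibration at a 4 mm wavelength (roughly a 75 GHz radar) shifts
# the round-trip phase by 18 degrees, enough to degrade coherent processing.
error = phase_error_deg(0.1e-3, 4e-3)
```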
Synthetic aperture radars need to accurately track the position and velocity of the sensor array and the individual elements over time, so having accurate data about the position of the transducers as a function of time is important. Again, not only is the overall position of the array in time important, but also the position of the transducers relative to each other. This can be particularly important with towed sonar arrays which can be flexible and can move considerably perpendicular to the azimuthal, or travel, direction.
Referring to
Referring to
The gimbal-mounted strapdown IMU approach, in which the IMU is attached to an appropriate location on the gimbal assembly itself, cannot account for a wide range of motions that can occur between the location of the strapdown IMU and the actual sensor aperture (i.e., the radar face or FLIR optical window). These motions include movement-induced or aerodynamically induced torsional movements, heat-driven distortions, vibrations, bearing rumble, drive motor and gear rumble, gear backlash, toothed drive belt stretch and contraction, or drive train jump. G forces and thermal loading can also distort the aperture itself, or its mounts. This movement and distortion is typically not uniform across the aperture, meaning that some portions of the aperture may move more and in a different manner than other portions of the aperture, thereby distorting the wavefronts as illustrated in
In light of the preceding, various challenges still exist for determining and/or compensating for the motion of the sensor elements, such as transmit and/or receive (T/R) modules of a radar or sonar.
In accordance with an aspect, a system and a method for determining the position of sensor elements in a sensor array are provided. The sensor array comprises a plurality of sensor elements, which are optionally arranged in an array. The system comprises a plurality of MEMS IMUs, each associated with one or more of the sensor elements of the sensor array. In some embodiments, a MEMS IMU comprises a MEMS inertial sensor mounted with an integrated circuit (IC) chip. Preferably, the MEMS sensor has six or more degrees of freedom (DOF), and is able to measure both acceleration and angular rate of movement. The system further comprises a controller for determining at least one of the attitude and the position of the sensor elements, based on the acceleration and angular rate measured by the MEMS IMUs. Preferably, the MEMS IMUs are mounted directly onto or in close proximity to the radio frequency (RF) or acoustic element transmit/receive apertures.
In some embodiments, the MEMS IMU includes a low-drift clock for accurate timing. The clock may be a MEMS clock integrated into the MEMS chip or a MEMS or quartz clock integrated in the IC. Clock signals can also be provided by a clock mounted on a circuit board with the MEMS sensor, from a system processor, or from a remote networked clock.
In some embodiments, the system further comprises an inertial navigation unit (INU), and the controller also determines or estimates the attitude and position of the sensor elements based on measurement signals from the INU. The system can include programmed data that includes reference positions for the sensor elements. This position reference data can represent a static position of the sensor array elements, position data using fixed coordinates, or computed position data such as an average over time. This data can define the platform position from which adjusted position data are computed, as described herein, to alter a transmission or reception characteristic of one or more sensor elements at any point in time. The beamsteering and/or beamforming operation of the array can thereby be precisely controlled to improve array detection and imaging capabilities.
In some embodiments, the system calculates an average position and attitude of the sensor elements based on acceleration and angular rate measured by the MEMS IMUs, and based on the attitude or position estimated from the INU.
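A uniformly weighted average of the per-element estimates, as described above, might be sketched as follows. This is a deliberately simple illustration; a production system would more likely weight each IMU by its error covariance (e.g., in a Kalman filter), and all names here are hypothetical:

```python
def platform_pose(element_positions, element_attitudes):
    """Estimate the platform position and attitude as the mean of the
    per-element MEMS IMU estimates. Positions are (x, y, z) tuples in
    meters; attitudes are (roll, pitch, yaw) tuples in degrees."""
    n = len(element_positions)
    position = tuple(sum(p[i] for p in element_positions) / n for i in range(3))
    attitude = tuple(sum(a[i] for a in element_attitudes) / n for i in range(3))
    return position, attitude

# Two elements displaced symmetrically about the platform center
# average back to the platform's own pose:
pos, att = platform_pose([(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)],
                         [(0.1, 0.0, 0.0), (-0.1, 0.0, 0.0)])
```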
In some embodiments, the attitude and position measured by the MEMS IMU is used to determine a phase shift to apply to each sensor element to change the array beam pointing angle. The phase shifting can be applied using different types of circuits used to delay the transmission pulse at each transmission sensor element or channel and/or apply selected delays at each receive sensor element or channel. The beamsteering and beamforming circuits can comprise digital beamforming integrated circuits, or alternatively, can comprise charge coupled devices (CCDs) having a plurality of channels fabricated on one or more integrated circuits. The phase delay circuits are programmable and can be adjusted in response to the position and motion data generated by the inertial measurement array of sensors that is distributed across the sensor array. The system controller or processor is programmable and includes one or more memories that store executable software modules, including modules that control beam scanning parameters such as amplitude, phase, and frequency of transducers in a sensor array, for example.
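For an idealized linear array, the per-element phase shift for a chosen pointing angle follows the standard beamsteering relation φ = 2π·f·x·sin(θ)/c; each phase is equivalent to a time delay x·sin(θ)/c applied at the element's transmit or receive channel. The names and numbers below are illustrative assumptions:

```python
import math

C = 3.0e8  # propagation speed in m/s; use ~1500 m/s for sonar

def steering_phases_deg(element_x_m, theta_deg, freq_hz):
    """Per-element phase shifts (degrees) steering a linear array of
    elements at positions `element_x_m` to `theta_deg` off boresight."""
    s = math.sin(math.radians(theta_deg))
    return [360.0 * freq_hz * x * s / C for x in element_x_m]

# Half-wavelength-spaced 4-element array at 10 GHz steered 30 degrees
# off boresight: adjacent elements differ by 90 degrees of phase.
lam = C / 10e9                                # 3 cm wavelength
xs = [i * lam / 2.0 for i in range(4)]
phases = steering_phases_deg(xs, 30.0, 10e9)  # ~[0, 90, 180, 270]
```

In practice the computed phase for each element would be corrected by the IMU-measured displacement of that element before being programmed into the delay circuit.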
In some embodiments, the controller includes a filter which separates high frequency vibrational IMU data from low frequency navigational data. The attitude and/or position of each sensor element provided with a MEMS IMU is measured from the long term navigation data. The attitude and position of the platform can be determined by averaging the position data from each MEMS IMU. The position and/or attitude of each IMU relative to the platform can be determined using local short term vibrational data. The desired phase of each sensor element is next determined based on a predetermined pointing angle. Optionally, the phase is compensated for vibrations and used to modify the array beam pointing angle.
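The separation of slow navigational motion from fast vibrational motion described above can be approximated with a single-pole low-pass filter. This is a simplified stand-in for whatever filter the controller actually uses, and the smoothing constant is an assumption:

```python
def split_nav_and_vibration(samples, alpha=0.99):
    """Split a sampled IMU signal into a slowly varying navigation
    component (single-pole low-pass) and the residual vibration component.
    `alpha` near 1 gives a low cutoff frequency; it is a tuning assumption."""
    nav, vib = [], []
    smoothed = samples[0]
    for s in samples:
        smoothed = alpha * smoothed + (1.0 - alpha) * s
        nav.append(smoothed)        # long-term navigation estimate
        vib.append(s - smoothed)    # short-term vibrational residual
    return nav, vib

# A constant input has no vibrational residual:
nav, vib = split_nav_and_vibration([5.0] * 100)
```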
In some other embodiments, the system is a platform including a plurality of MEMS IMUs coupled to sensor elements, and a controller configured to measure the position and/or attitude of each sensor element, to determine the phase shift to apply to the individual sensor elements.
In accordance with another aspect, a “virtual system IMU” (VSIMU) is provided, the VSIMU being formed by a plurality of MEMS IMUs, each mounted on, or in close proximity to, one or more individual sensor elements. In some embodiments, the sensor system can be non-localized, and the sensor elements may be distributed, such as on unmanned vehicles thus allowing the formation of a virtual or distributed array.
In accordance with another aspect, an improved sonar or radar is provided, for which each sensor element is provided with a MEMS IMU mounted thereon, each MEMS IMU being in communication with a controller configured to determine the position and/or attitude of each sensor element. Alternatively, selected groups or subsets of sensor elements can be actuated as subarrays, wherein each subarray is associated with a selected inertial measurement unit.
Radars and sonars, particularly airborne or seaborne radars with advanced features including Electronically Scanned Arrays (ESA), Synthetic Aperture Radars (SAR), Inverse Synthetic Aperture Radars (ISAR), Ground Moving Target Indicator (GMTI), Coherent Change Detection, and Synthetic Aperture Sonar (SAS) benefit from precise motion detection. Tracking sensitivity and accuracy are limited by uncertainties in platform and element velocity changes (acceleration). Platform roll, pitch, and yaw introduce additional pointing angle and Doppler spreading errors across the face of the array.
It is desirable to be able to measure the distortion, both linear and angular (displacement and torsion), for at least some of the sensor elements, and if possible at each sensor element, to correct the phase shift errors and reconstruct the desired wavefronts. Microelectromechanical Systems (MEMS) accelerometers and gyroscopes are attractive from a SWAP (size, weight, and power) standpoint as they are small and inexpensive, and can enable multiple inertial sensors to be distributed across the array of sensor elements to determine, monitor, or compensate for the motion of the elements.
Although attractive from a SWAP perspective, MEMS accelerometers and gyroscopes have historically been noisy, building up position errors rapidly. The sensitivity of accelerometers and gyroscopes is limited by bias instability. Bias instability is a measure of the random noise generated by the inertial sensor and is the minimum uncertainty in the output signal of the device. For very expensive navigation grade sensors (e.g. based on fiber optic gyroscopes (FOGs) like the Honeywell HG-9900), the bias instability is on the order of 3 millidegrees/hour for the fiber optic gyro and 10 micro-g (0.1 mm/sec²) for the accelerometer. An industrial grade MEMS IMU (e.g. Bosch BMX055) can have a gyro bias instability of around 10 deg/hr and an accelerometer bias instability of around 100 μg (1 mm/sec²). Thus, MEMS sensor errors can build up much more quickly. Since MEMS gyroscopes measure angular rate, attitude errors (or angular errors) grow linearly with time. Errors in position calculated from MEMS accelerometers grow quadratically.
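The stated error-growth behavior (linear in attitude, quadratic in position) can be checked with the industrial-grade bias figures quoted above; a constant gyro bias integrates once into attitude, and a constant accelerometer bias integrates twice into position:

```python
def attitude_error_deg(gyro_bias_deg_per_hr: float, t_hr: float) -> float:
    """A constant gyro bias integrates once, so attitude error grows linearly."""
    return gyro_bias_deg_per_hr * t_hr

def position_error_m(accel_bias_m_per_s2: float, t_s: float) -> float:
    """A constant accelerometer bias integrates twice, so position error
    grows quadratically: 0.5 * b * t^2."""
    return 0.5 * accel_bias_m_per_s2 * t_s ** 2

# Industrial-grade MEMS figures from the text: 10 deg/hr gyro bias and
# 100 ug (~1 mm/s^2) accelerometer bias.
att_1min = attitude_error_deg(10.0, 1.0 / 60.0)  # ~0.17 deg after one minute
pos_1min = position_error_m(1e-3, 60.0)          # 1.8 m after one minute
```

This is why uncorrected MEMS IMUs are said to drift a few meters in a few seconds to minutes unless aided by GPS or other references.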
A new generation of MEMS IMU, referred to herein as a “3DS MEMS IMU”, has lower bias instability, such as an angular rate bias instability less than 1 deg/hr, and preferably less than 0.1 deg/hr, and more preferably less than 0.01 deg/hr, and/or an accelerometer bias instability less than 100 μg, and preferably less than 10 μg, and more preferably less than 1 μg, without sacrificing SWAP, since they can be as small as 0.1 cm³ and weigh as little as 1 gram per unit. These 3DS MEMS IMUs incorporate one or more thick inertial proof masses suspended from springs and free to move in three dimensions between electrodes in top and bottom caps which form, with the MEMS, a hermetic low pressure chamber. The resulting high quality factor resonance, coupled with the large masses, gives rise to mechanical noise and bias instability that are much lower than those of previous MEMS IMUs, which use 2D comb capacitor drive and sensing, requiring the use of thinner masses. These 3DS MEMS IMUs are constructed entirely of conductive silicon, so the hermetic chamber also provides protection against temperature effects such as differential thermal expansion and against RF interference. Thus one or more 3DS MEMS IMUs can be integrated into some or each of the sensor elements of remote sensing systems (such as sonars and radars), giving detailed local motion information (vibration and torsional movements) about the transmit and receive surface for phase and pointing accuracy. Furthermore, the data from the plurality of MEMS IMUs can provide hundreds or thousands of motion data points, providing detailed motion information regarding the behavior of the aperture throughout the range of physical and thermal loading and allowing enhanced range and azimuth resolution beyond those possible today with a single high SWAP navigation IMU. While it is preferred to use 3DS MEMS IMUs, it is possible to use other types of MEMS IMUs, provided their specifications (i.e., bias instability) allow for it.
Examples of MEMS devices for the fabrication of these MEMS IMUs are described in U.S. Pat. No. 9,309,106 issued on Apr. 12, 2016, and U.S. application Ser. No. 14/622,548, filed Feb. 13, 2015 and Ser. No. 15/024,704, filed Mar. 24, 2016, the entire contents of the above referenced patents and applications being incorporated herein by reference.
Another exemplary sensor array is a Synthetic Aperture Radar (SAR) 500 as shown in
While the two examples provided above are based on radar technology, the principle of the present invention can also be used in sonar systems, or any detecting and/or positioning systems comprising a plurality of sensing and/or emitting elements, such as T/R modules. For example, referring to
The present invention is especially adapted for use in object-detecting systems, which are used to determine at least one of the range, angle and velocity of a target. Broadly described, the present invention is concerned with the mounting of MEMS IMUs, and particularly 3DS MEMS IMUs, onto individual sensor elements, or subarrays of such sensor elements, of a position-detecting system. Given their small size, weight and reduced power consumption, and provided they offer sufficiently low bias instability, such as below 1 deg/hr, MEMS-based IMUs including accelerometer and angular rate sensors (6DOF MEMS IMUs) can be mounted directly on some, and preferably on each, sensor element. The measurement signals of the MEMS IMUs can be processed directly at the sensor element, by the MEMS processing circuitry or by the sensor element processing unit, or they can be sent to a central processing unit allocated to a sub-set of the sensor elements.
LiDAR (Light Detection and Ranging) is rapidly becoming a key element in ADAS (Advanced Driver Assistance Systems) and autonomous vehicle navigation. LiDAR was developed for survey and mapping. It can produce very accurate 3D measurements of the local environment relative to the sensor. This accuracy is achieved by the emission of thousands of pulses of laser light per second and the measurement of the time of flight (TOF) between emission and the collection by a sensor of the reflected light from the environment.
Shown in
Thus, the array of modules on different sections of a wheeled ground vehicle or automobile can have selected combinations of sensors. A forward looking module is preferably configured with a plurality of sensors operating in different modes such as a radar emitter and detector, one or more cameras, a LiDAR sensor, and an ultrasound sensor which can operate to sense obstacles at different ranges. Sensor fusion programs can be used by the processor 554 to simultaneously process data from each of the plurality of sensors and automatically send navigation and control commands to the braking and steering control systems (described below) of the vehicle 540. Simultaneous location and mapping (SLAM) programs have been extensively described in the art such as, for example, in U.S. Pat. Nos. 7,689,321 and 9,945,950, the entire contents of these patents being incorporated herein by reference.
The sensor array distribution is seen in
The autonomous vehicle 560 can also include a steering module 557 that is in communication with the processor 554. The steering module 557 can control a steering mechanism in the vehicle to change the direction or heading of the vehicle. The processor 554 can control the steering module 557 to steer the car based upon an analysis of ranging data received from the array of sensor modules 561-570 to enable the autonomous vehicle 560 to avoid collisions with objects.
Two types of LiDAR systems are shown in
For a single emitter 589 (
In systems 572, 582, the emitted beam 580, 586 is reflected by the various objects in the vicinity of the sensor and a portion of the reflected beams 581 are collected by the detector (or collector) lens 578 and focused onto a detector array 577, 586. The detector array 577, 586 can be a 2D or 3D array and can include many photodetectors such as photodiodes. The time of flight (from emission to collection) of each of the beams is measured precisely by the control and data processing electronics 575, 585 using the data received from the detector array 577, 586. In this way a “point cloud” is built up wherein the distance to each point in the point cloud is accurately recorded. This point cloud is a representation of the LiDAR system's environment.
The LiDAR system 572, 582 can include control and data processing electronics 575, 585 in some embodiments. The control and data processing electronics 575, 585 can send data to and receive instructions from the processor 554. The control and data processing electronics 575, 585 can receive data from the detector 577, 586. The control and data processing electronics 575 can control the status and sequencing of emitters in the array of emitters 576. The control and data processing electronics 575 can control the deflection angle of the scanning mirror 590. In some embodiments, the control and data processing electronics 575, 585 can send raw data from the detector 577, 586 to the processor 554. In some embodiments, the processor 554 can include a global positioning system (GPS) sensor. In some embodiments, the control and data processing electronics 575, 585 can perform initial processing on the data received from the detector 577, 586 and send the processed data to the processor 554. In some embodiments, the control and data processing electronics 575 can include a steering circuit to adjust an orientation of the system 572, 582.
In some embodiments, the components of the LiDAR system 572, 582 can be mounted to a single backplane 574. For example, the backplane 574, 584 can be a printed circuit board (PCB). Additional PCBs can be added to the sensor modules on the vehicle that include other sensor modes such as radar and/or imaging cameras, for example. These sensors can also include co-located inertial sensors to compensate for sensor motion relative to the vehicle's frame of reference as described herein.
Because the speed of light is so high, the individual pulses of light are very short, on the order of a few hundred picoseconds, and the time of flight is a few microseconds. Thus, an individual measurement is very accurate, to within a few cm (e.g. less than 5 cm). This enables the LiDAR system to build up a 3D map of its environment over many scans. Typically the scan or frame rate is on the order of a few tens (e.g. 10-100) of Hertz.
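The range and timing arithmetic above follows from d = c·t/2 for a round trip. A quick sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_s: float) -> float:
    """One-way range from a round-trip LiDAR time of flight: d = c*t/2."""
    return C * round_trip_s / 2.0

def required_timing_s(range_accuracy_m: float) -> float:
    """Round-trip timing accuracy needed to achieve a given range accuracy."""
    return 2.0 * range_accuracy_m / C

r = tof_range_m(1e-6)          # a 1 us round trip corresponds to ~150 m of range
dt = required_timing_s(0.05)   # 5 cm range accuracy needs ~330 ps of timing accuracy
```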
The LiDAR system's 3D map of the local environment is useful for identifying driving hazards such as other vehicles, pedestrians, etc. However, the vehicle upon which the LiDAR is mounted is typically moving, which complicates the mapping. The LiDAR can only measure relative distance between the vehicle and the hazard. Thus, it is important to also know the absolute geographical position of the LiDAR to accurately build up a map of the environment, complete with stationary and moving objects.
For static LiDAR applications such as surveying and mapping, GPS (Global Positioning System) data is sufficient to provide geographic location. However, it is not sufficient for a moving vehicle. The GPS receiver needs to have a clean line of sight (LOS) to at least four GNSS (Global Navigation Satellite System) satellites to obtain longitude, latitude and altitude coordinates. There may not always be direct LOS because of satellite positioning or because of obstruction of the LOS by buildings, trees, or other obstructions. Even with no obstructions, the position update rate is less than 10 Hz, which is too slow for a moving vehicle. Additionally, the LiDAR needs velocity and attitude information to accurately navigate the driving hazards.
In most navigation systems using LiDAR, GPS data is augmented by the vehicle Inertial Navigation System (INS) data. However, accuracy costs money. For defense or geodetic survey systems, an expensive Inertial Measurement Unit (IMU) can be used. These typically use expensive (thousands of dollars) accelerometers and Fiber Optic Gyroscopes (FOGs) for motion data. This cost is generally not viable for most automotive applications, so typically MEMS IMUs are used. These IMUs, while much cheaper, are much less accurate and can drift a few meters in a few seconds. Nonetheless, they are used to “fill in the gaps” between GPS readings.
Another source of positional inaccuracy that has not been addressed at all is the position of the LiDAR system relative to the automobile INS, and of the emitter(s) relative to the detectors within an individual LiDAR sensor. The first error assumes the emitter and detector are fixed relative to the vehicle INS. However, particularly over many frames of data, bumps, vibrations, pitch, and roll can introduce time dependent errors of mm or cm in the calculated position of the LiDAR. Flexing and torquing of the printed circuit boards (PCBs) on which the optical components are mounted can introduce additional relative position errors affecting the TOF measurements and reducing the accuracy of relative position measurements. Finally, in many systems the emitters, lenses, and detectors can be on separate boards which can shift, vibrate, and torque relative to each other (
As shown in
A limitation of staring LiDARs is the limited field of view as shown in
In order to increase the field of view, one or more LiDARs 605 can be mounted to a rotating gimbal 606 to scan the array in more than one direction.
A potentially less expensive approach to increasing the FOV is to mount multiple LiDARs around the vehicle. However, in both gimballed and multiple-LiDAR solutions, multiple frames of data from different times, positions, and orientations are processed and optionally stitched together to obtain an accurate, comprehensive representation of the environment. It is therefore necessary to know the position and attitude of each of the LiDAR systems when the data is collected.
IMU plus GPS data can be incorporated into the optical TOF data by the control and data processing electronics 644 or processor 554 to accurately determine the geographic position of the environmental features detected by the detector array 646. Furthermore, the accuracy of the 3DS IMU enables multiple point clouds acquired by the same LiDAR unit and/or point clouds acquired by multiple LiDAR units to be more accurately stitched together to provide a higher resolution 3D map of the vehicle's environment. For gimbal mounted LiDAR units, the 3DS IMUs can provide real-time and accurate position and attitude of the individual LiDAR units 640.
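Stitching point clouds from multiple poses amounts to transforming each scan from its sensor frame into a common world frame using the pose reported by the IMU at capture time. The following is a 2D, yaw-only sketch under illustrative assumptions; a real implementation would apply the full 3D roll/pitch/yaw rotation:

```python
import math

def transform_points(points, yaw_deg, position):
    """Map sensor-frame (x, y) points into a common world frame using the
    LiDAR unit's yaw attitude and (x, y) position at capture time."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    px, py = position
    # Rotate each point by the sensor yaw, then translate by its position.
    return [(c * x - s * y + px, s * x + c * y + py) for x, y in points]

# Two scans of the same landmark, taken from different orientations of the
# same sensor, land on the same world coordinate once each scan is
# transformed by its own pose:
a = transform_points([(10.0, 0.0)], 0.0, (0.0, 0.0))
b = transform_points([(0.0, 10.0)], -90.0, (0.0, 0.0))
```

Accurate per-unit attitude from the 3DS IMUs is what makes the per-scan poses, and hence the stitched map, consistent.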
3DS IMUs can also improve the performance of systems that include components mounted on multiple boards. In order to compensate for the relative motion of optical components on separate or flexible PCBs (or other boards or container walls) such as 662, 675 and 676 of system 660 in
The highest measurement accuracy for the position of any LiDAR component can be achieved when the IMU is in the exact position of the component. The 3DS MEMS IMU architecture 680 shown in
The laser 698 can operate at one or more wavelengths and output power levels depending upon the ranging distances and FOV for that sensor. LiDAR can use various emission wavelengths in the range of 750 nm to 1600 nm and other wavelengths depending upon the application. See, for example, U.S. Pat. Nos. 7,541,588, 8,675,181, and 9,869,754, the entire contents of these patents being incorporated herein by reference.
Although the control and data processing electronics 692 is shown as mounted to the board 691 in
An exemplary embodiment of a sensor element 610, here a T/R module, having a MEMS IMU 620 mounted thereon is shown in
Referring still to
The 6 DOF inertial sensor 2172 senses three axes of linear acceleration and three axes of angular rate. The 6 DOF inertial sensor 2172 includes first and second sets of electrodes 2180, 2182, respectively provided in the first and second cap layers 2120, 2140. One or several proof masses 2163, 2165 can be patterned in the central MEMS layer 2160, the first and second sets of electrodes 2180, 2182 forming capacitors with the proof mass(es). In
The proof mass thus can be designed anywhere in the range of 0.1 to 15 milligrams by adjusting the lateral dimensions (0.5 mm to 4 mm, for example, or having an area in a range of 1-3 mm²), the thickness as described herein, or both. The springs which support the proof mass and the top of the mass are etched in the SCS device layer. The resonant frequency (√(k/M)) can be tuned separately by adjusting the spring constant k through the thickness of the device layer and the width and length of the spring. The spring constant k is proportional to wt³/L³, where w, t, and L are the width, thickness, and length, respectively, of the spring. Lower frequencies (long, thin springs) around 1000 Hz are desirable for the accelerometer, while higher frequencies (short, wide springs) are desirable for the gyroscopes. Generally, resonant frequencies between 500 Hz and 1500 Hz are used for a variety of applications. The capacitor electrodes and gaps are etched into the faces of the cap wafers which are bonded to the MEMS wafer. The gaps are typically 1-5 μm thick, providing sense capacitors which can range from 0.1 to 5 picofarads. Further details concerning fabrication and operation of MEMS transducer devices can be found in U.S. patent application Ser. No. 14/622,619, filed on Feb. 13, 2015 (now U.S. Pat. No. 9,309,106) and U.S. patent application Ser. No. 14/622,548, filed on Feb. 13, 2015, the above referenced patent and applications being incorporated herein by reference in their entirety.
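The tuning relations above (resonant frequency from √(k/M), spring constant k ∝ wt³/L³) can be illustrated numerically, taking f = (1/2π)√(k/M) in Hz; the specific numbers below are illustrative, not design values from the text:

```python
import math

def resonant_freq_hz(k_n_per_m: float, mass_kg: float) -> float:
    """f = (1/2*pi) * sqrt(k/M) for a proof mass M on springs of stiffness k."""
    return math.sqrt(k_n_per_m / mass_kg) / (2.0 * math.pi)

def relative_spring_constant(w: float, t: float, L: float) -> float:
    """k ~ w * t^3 / L^3 in relative units: useful for comparing spring
    geometries, not for computing absolute stiffness."""
    return w * t ** 3 / L ** 3

# A 5 mg proof mass (mid-range of the 0.1-15 mg span) targeting ~1000 Hz,
# the accelerometer regime described above:
m = 5e-6                               # kg
k = (2.0 * math.pi * 1000.0) ** 2 * m  # ~197 N/m recovers 1000 Hz
f = resonant_freq_hz(k, m)
# Doubling the spring thickness t raises k (and hence f^2) by a factor of 8:
ratio = relative_spring_constant(1.0, 2.0, 1.0) / relative_spring_constant(1.0, 1.0, 1.0)
```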
For industrial, tactical and navigation grade applications, which include high resolution motion capture, precise head tracking for virtual reality and augmented reality, and personal navigation, the thick mass and as-fabricated high quality factor (~5000) produce a gyroscope noise density ranging from 0.005 deg/√hr to 0.1 deg/√hr. The resulting gyroscope bias stability ranges between 0.05 deg/hr and 1 deg/hr. This noise is lower than that of many fiber optic and ring laser gyroscopes that cost thousands of dollars more. Because existing consumer-grade MEMS gyroscopes use inexpensive packaging and have small inertial masses and sense capacitors, they have low quality factors and low angular rate sensitivities, leading to large noise densities on the order of 1 deg/√hr and bias stability on the order of 10 deg/hr, inadequate for tactical and navigational use. Similarly, the accelerometer has a noise density ranging from 3 micro-g/√Hz to 30 micro-g/√Hz and bias stability ranging from 0.5 micro-g to 10 micro-g, much lower than consumer-grade accelerometers. The platform also allows the addition of other sensor types, such as pressure sensors and magnetometers (shown here as a 3 axis magnetometer 2176), to improve overall accuracy through sensor data fusion. The sensor data can be processed by data processor circuits integrated with the MEMS chip and IC chips as described herein, or by external processors. For navigation grade applications, which include high performance unmanned vehicle and autonomous navigation, including in GPS restricted and GPS denied environments, two masses can be combined in an antiphase drive mode to not only increase the effective mass by a factor of √2, but to increase the quality factor by reducing mechanical energy losses. This approach can produce a gyroscope noise density ranging from 0.002 deg/√hr to 0.01 deg/√hr and bias stability ranging between 0.01 deg/hr and 0.1 deg/hr.
The MEMS chip 2100 includes first and second insulated conducting pathways 2130, 2150, similar to those described previously. The first insulated conducting pathways 2130 connect the MEMS electrodes 2180, 2182 to a first set 2124 of MEMS-electrical contacts on the first cap layer 2120. The second insulated conducting pathways 2150 extend through the entire thickness of the MEMS chip 2100, allowing the transmission of auxiliary (or additional) signals through the MEMS chip 2100. The second insulated conducting pathways 2150 connect a second set 2126 of MEMS-electrical contacts of the first cap layer 2120 to some of the MEMS-electrical contacts 2144 of the second cap layer 2140. For clarity, only some of the first insulated conducting pathways are indicated in
Referring to
Referring back to
In the embodiment of
Analog data can be communicated between the MEMS sensors 2172, 2176 and the IC chip 2200 at an analog-to-digital converter (ADC) input/output mixed signal stage of the IC chip 2200. The MEMS signals generated by the sensors 2172, 2176 are analog signals, so they are converted to digital by the ADC to be further processed in the digital CMOS portion of the IC chip 2200. The data processing of the MEMS signals by the IC chip 2200 can include sensor calibration and compensation, navigational calculations, data averaging, or sensor data fusion, for example. System control can be provided by an integrated microcontroller which can control data multiplexing, timing, calculations, and other data processing. Auxiliary (or additional) signals are transmitted to the IC chip via additional digital I/O. The IC chip 2200 includes auxiliary signal processing circuitry, such as for example wireless communications or GPS (Global Positioning System) functionality. The GPS data can also be used to augment and combine with MEMS sensor data to increase the accuracy of the MEMS sensor chip 2100. These are examples only, and more or fewer functions may be present in any specific system implementation. As can be appreciated, in addition to providing the analog sensing data via the MEMS signals, the MEMS chip 2100 can also provide an electronic interface, which includes power, analog and digital I/O, between the MEMS system 2000 and the external world, for example, a printed circuit board in a larger system.
As per the embodiment shown in
During the fabrication process of the MEMS stack 110, channels are etched in the first and second layers to define the borders of electrodes, leads, and feedthroughs on the inward-facing surfaces of the first and second silicon wafers. The channels are then lined, or filled, with an insulating material such as thermal oxide or CVD (Chemical Vapor Deposition) silicon dioxide. Both sides of the central MEMS wafer, which is typically an SOI wafer, are patterned with electrodes and MEMS structures, such as membranes and proof masses. Conductive shunts are formed in specific locations in the buried oxide layer, to allow electrical signals to pass from the device to the handle layer, through what will become the insulated conducting pathways. The central and cap MEMS wafers are also patterned with respective frames enclosing the MEMS structures. The various conducting pathways required by the device are constructed by aligning feedthrough structures on each level. The portion of the insulated conducting pathways in the central MEMS wafer can be isolated either by insulator-filled channels or by etched open trenches since the MEMS wafer is completely contained within the stack and the isolation trenches do not have to provide a seal against atmospheric leakage like the cap trenches. The frames are also bonded so as to form hermetically sealed chambers around the MEMS structures. After the wafer stack 110 is assembled, the cap wafers are ground and polished to expose the isolated conducting regions.
The bonded 3DS wafer can now be diced (along the dotted lines in
MEMS signals for the MEMS chip 4104 can also transit through the MEMS chip 4102, up to the IC chip 4200. The first MEMS chip 4102 comprises a third set of first cap MEMS-electrical contacts and third insulated conducting pathways 4170 to connect the first cap MEMS-electrical contacts of the third set to at least some of the second cap MEMS-electrical contacts of the second cap layer of MEMS chip 4102, through the first cap layer, the central MEMS layer and the second cap layer. These third insulated conducting pathways 4170 are electrically connected to the MEMS signal processing circuitry 4240 of the IC chip 4200, and are electrically connected to insulated conducting pathways 4130′ of MEMS chip 4104. The MEMS signal processing circuitry 4240 can thus process the electrical MEMS signals of both the first MEMS chip 4102 and the additional MEMS chip 4104.
By adding GPS/GNSS functions and sensor fusion algorithms, for example Kalman filters, to the IC, the 3DS IMU can be enhanced to become a 3DS INU.
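As one hedged illustration of such sensor fusion, a minimal one-dimensional Kalman filter can blend IMU-propagated state with periodic GPS position fixes. All matrices, noise values, and sample rates below are assumptions made for the sketch, not parameters of the 3DS INU.

```python
import numpy as np

# Minimal 1-D Kalman filter sketch of GPS/IMU fusion. State is
# [position, velocity]; the accelerometer drives the prediction and
# GPS position drives the update. Noise values are illustrative.

dt = 0.1                         # assumed IMU sample period (s)
F = np.array([[1, dt], [0, 1]])  # state transition
B = np.array([0.5 * dt**2, dt])  # acceleration input mapping
H = np.array([[1.0, 0.0]])       # GPS measures position only
Q = np.eye(2) * 1e-4             # process noise (assumed)
R = np.array([[4.0]])            # GPS noise, ~2 m std dev (assumed)

x = np.zeros(2)                  # initial [position, velocity]
P = np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    y = gps_pos - H @ x                     # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Constant 1 m/s^2 acceleration; one GPS fix every 10 IMU samples.
for step in range(1, 101):
    x, P = predict(x, P, accel=1.0)
    if step % 10 == 0:
        true_pos = 0.5 * 1.0 * (step * dt) ** 2
        x, P = update(x, P, np.array([true_pos]))
```

A production INU would use a full strapdown mechanization with attitude states; the structure of the predict/update cycle is the same.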
The sensor system can also be non-localized. That is, rather than being part of a fixed rigid or flexible array, the sensing elements can be distributed, for example across an array or group of semi-autonomous vehicles, such as unmanned air, underwater, and ground vehicles (UAVs, UUVs, UGVs), collectively referred to as UVs, each with at least one 3DS IMU, or across multiple space platforms or satellites. Position and attitude data from each UV's IMUs are communicated to a central processor, located either in one or more of the UVs or in a ground- or air-based control station, via a communications system that can be RF or optical and of any topology, for example point-to-point, star, ring, tree, hybrid, or daisy chain.
Antennas are critical elements of many electronic systems. An antenna is a specialized transducer that converts radio-frequency (RF) fields into alternating current (AC) or vice versa. There are two basic types: the receiving antenna, which intercepts RF energy and delivers AC to electronic equipment, and the transmitting antenna, which is fed with AC from electronic equipment and generates an RF field. An antenna reflector is a device that reflects electromagnetic waves. Examples of antenna reflectors are illustrated in
An antenna array is a set of individual antennas used for transmitting and/or receiving radio waves, connected together in such a way that their individual currents are in a specified amplitude and phase relationship.
A phased array antenna is composed of multiple radiating elements, each with a phase shifter. Beams are formed by shifting the phase of the signal emitted from each radiating element to produce constructive/destructive interference that steers the beams in the desired direction. Phased array antennas can be arranged in linear arrays, which form beams in one dimension and are typically moved mechanically, or in planar arrays, which can generate 2D images. Phased arrays use computer-controlled phase shifters to create beams. Even nearly undetectable, very low-level motion affects the phase relationships between the elements.
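The phase shifts applied to steer a beam follow directly from the element geometry. The sketch below assumes a uniform linear array with element spacing d and a steering angle theta measured from broadside; the function name and example values are illustrative.

```python
import math

# Per-element phase shifts for steering a uniform linear phased array.
# Assumes element n sits at position n*d along the array axis.

def steering_phases(n_elements, spacing_m, wavelength_m, theta_deg):
    """Phase shift (radians) per element for constructive interference
    in the direction theta_deg from broadside."""
    theta = math.radians(theta_deg)
    return [2 * math.pi * spacing_m * n * math.sin(theta) / wavelength_m
            for n in range(n_elements)]

# 8-element array, half-wavelength spacing, ~10 GHz (30 mm wavelength),
# steered 30 degrees off broadside.
wl = 0.03
phases = steering_phases(8, wl / 2, wl, 30.0)
```

With half-wavelength spacing and a 30 degree steering angle, adjacent elements differ by a quarter cycle (pi/2 radians), which shows why sub-millimeter element motion matters at these wavelengths.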
At higher frequencies, the movement of those individual elements has a deleterious effect on the operation of the phased array antenna. For best performance, motion of the elements and of the reflective surface should be detected and compensated.
Sonars also typically use arrays, either planar or linear. Accurate motion data is crucial to the use of Time Difference of Arrival (TDOA) for Angle of Approach (AOA) analysis methods. This is particularly important in applications such as towed sonar arrays (
Detection of such low-level relative motion between elements can improve the deployment of a variety of fixed, mobile and deployable antennas; solid or mesh reflectors; active or passive arrays; active, passive, acoustic, electromagnetic, and other phenomenon-sensing systems; terrestrial, underwater, and spaceborne apertures; monostatic radars; bistatic radars; multistatic radars; SIMO (Single Input/Multiple Output) radar; MIMO (Multiple Input/Multiple Output) radar; and sonars, including linear and planar arrays.
“Antenna” can be a generic term describing a number of different apertures that receive or transmit energy across a broad electromagnetic spectrum, from ultraviolet, visible, and infrared light to radio frequencies ranging from ultra-low frequency to the highest frequencies. Antenna reflectors are critical to amplifying small signals and directing electromagnetic energy, whether transmitting or receiving. Antenna reflectors can be designed in many different styles and shapes, including but not limited to isotropic (regular shapes such as circles or squares); anisotropic (irregularly shaped); round; rectangular; oblong; or shaped as a three-dimensional object such as a sphere or hemisphere.
Antenna reflector surfaces can be smooth or rough in various embodiments. The antenna reflector surface can include a mesh or open structure with regularly or irregularly dispersed elements to focus or reflect energy in a precisely planned manner. These surfaces can be fixed permanently or can be manipulated to alter, and thus change, their reflective/receptive characteristics. Antennas and antenna reflectors can comprise a plurality of regions, each of which can have a MEMS IMU coupled thereto. The antenna region MEMS IMUs generate position and attitude data that can be processed by the system processor or controller to precisely control the phase sensitivity of the antenna by altering the phase of sensed data or beam transmission signals as previously described herein. These antenna IMUs can be used alone or in combination with sensor array IMUs.
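The phase alteration applied per region can be sketched as follows: a region IMU's measured displacement along the signal path maps to phase through the factor 2*pi divided by wavelength. This is a minimal illustration with assumed values; a reflector in a monostatic system would see roughly twice the path change (round trip), which is not assumed here.

```python
import math

# Phase error induced in one antenna region by a path-length change,
# as could be derived from that region's IMU data. Values illustrative.

def phase_correction(displacement_m, wavelength_m):
    """Phase (radians) corresponding to a one-way path-length change."""
    return 2 * math.pi * displacement_m / wavelength_m

# A 0.1 mm vibration at a 4 mm wavelength (roughly 75 GHz):
err = phase_correction(0.1e-3, 4e-3)
```

The result is a substantial fraction of a radian, consistent with the observation that sub-millimeter vibrations degrade resolution at millimeter wavelengths.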
Antenna reflectors can be rigid, semi-rigid, or flexible, and fixed or movable. Any undetected and uncompensated movement can induce phase shifts in received and transmitted energy.
Antenna reflectors can be assembled once and never disassembled, or can be designed to deploy or open from a stowed, inoperable position into an open, operational position. The deployment action can be performed via actuators powered by methods including, but not limited to, hydraulic, pneumatic, pyrotechnic, gas generator, or chemical processes, or by electromechanical or mechanical (e.g., spring, torsion bar, elastic contraction, etc.) internal power. The deployment mechanism can also include passive methods such as aerodynamic methods (e.g., using the forward motion of an aircraft to extract or open an antenna); hydrodynamic methods (e.g., using the forward motion of a ship or submarine to extract or open an antenna); or mechanical properties such as spring tension or material memory.
A closed antenna, whether stowed, furled, rolled, folded, or coiled, or an antenna otherwise stowed in a non-operational state, can be deployed by methods including but not limited to extending ribs like an umbrella, unfurling compressed flexible ribs, or extruding the coiled and compressed antenna via screws or other extension mechanisms.
As an example, one of the largest and most complex deployable antenna types is described in U.S. Pat. No. 5,990,851 entitled “Space Deployable Antenna Structure Tensioned by Hinged Spreader-Standoff Elements Distributed Around Inflatable Hoop”, the entire contents of which are incorporated herein by reference. The deployable antenna described therein is an example of using a complex mechanism to achieve several objectives, including fitting a large-area structure in a small volume, reliable and precise deployment, a high degree of precision in “flatness” (usually measured as roughness), high stiffness with light weight, and very light non-payload deployment mechanism elements (i.e., the aspects of the antenna that are non-operational once deployed). In some embodiments of the mesh-style antenna described therein, the elements can be built of highly thermally stable materials.
The use of precise, small, low-power 6 DOF (or higher DOF) MEMS IMUs on these antennas is important because of their ability to precisely measure angular motion and linear acceleration. Such measurements are important in characterizing the performance of the antenna in research, development, manufacturing, deployment, operation, stability, movement, and deterioration.
Motion detection methods described herein are pertinent to fixed, mobile and deployable antennas, solid or mesh reflectors, active or passive arrays, active, passive, acoustic, electromagnetic, and other phenomenon-sensing systems, terrestrial, underwater, and spaceborne apertures, monostatic radars, bistatic radars, multistatic radars, MIMO radar, and SIMO radar.
One very important area for the application of 3DS MEMS to antenna surfaces arises when the aperture, i.e., the antenna area, goes from a rigid unibody reflective surface to a collection of reflective elements of a single antenna aperture integrated over time, often with techniques termed Synthetic Aperture, while still remaining a monostatic system or a pseudo-monostatic system (i.e., one in which the actual transmit and receive apertures are separate, but at a distance trivial enough that they can be considered a single system for signal processing purposes).
The next embodiment relates to bistatic radars, in which the transmit and receive apertures are separated by a non-trivial distance. Here again, motion detection of both the gross motion and the fine element motion of the receive aperture allows it to be treated as a single system.
Time Difference of Arrival (TDOA) is a method of determining the Angle of Approach (AOA) of an incoming wave, which can be acoustic, radio frequency, or light. As shown in
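The underlying geometry is standard: for two sensors separated by a baseline d, a plane wave arriving at angle theta from broadside reaches the far sensor later by dt = d*sin(theta)/c, so theta = arcsin(c*dt/d). The sketch below uses illustrative values with an assumed sound speed for a sonar hydrophone pair.

```python
import math

# TDOA-to-angle conversion for a two-sensor baseline. Works for
# acoustic, RF, or optical waves given the appropriate wave speed.

def angle_of_approach(delta_t_s, baseline_m, wave_speed_m_s):
    """Angle from broadside (degrees) implied by a time difference."""
    return math.degrees(math.asin(wave_speed_m_s * delta_t_s / baseline_m))

# Two hydrophones 1.5 m apart, sound speed ~1500 m/s in water,
# measured delay of 0.5 ms:
theta = angle_of_approach(0.5e-3, 1.5, 1500.0)
```

Since the angle depends on the baseline, any unmeasured motion of either sensor directly corrupts the computed AOA, which is why per-element motion data matters.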
The MEMS die is inherently highly radiation tolerant and is thus well suited for spacecraft applications such as satellites. The ASIC can be replicated in radiation-hard material with radiation-hard design practices to make a space-qualified 3DS MEMS. Motion data from the 3DS MEMS can be processed at the full data rate or at a reduced sampling rate, allowing edge processing and reporting at low data rates. Both approaches provide useful data.
Transmit and receive modules are critical to many types of advanced Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR) systems. SAR and ISAR systems are highly dependent on absolute motion, i.e., motion of the entire system, and relative motion, i.e., motion of the elements of the antenna in relation to each other, both of which can be measured by sensing rotational motion (measured by gyroscopes) and linear acceleration (measured by accelerometers) as described herein. A single 6 Degrees of Freedom (6DOF) MEMS IMU includes 3 gyroscopes and 3 accelerometers, for example.
An important aspect for large sensor arrays relates to the “lever arm”: the distance between any element in motion and the center of the IMU. Placing the MEMS at the T/R module, for example, makes the lever arm negligible.
The 3DS MEMS is inherently resistant to high-power radiation and high temperature, which can characterize the environment of a T/R module. This design provides a high degree of accuracy in the small space dictated by the design of high-power, high-frequency T/R modules.
Placing 3DS MEMS IMUs in each T/R module provides gross and fine position and movement data. For example, the Canadian RadarSat (
The distances between the motion detection elements (e.g., GPS, IMU), the theoretical center of the antenna, and the actual discrete areas of the antenna are important. In this invention, any precise calculation of positioning and navigation data using exterior input, such as satellite data from a Global Navigation Satellite System (GNSS) such as GPS, along with IMUs, requires that the lever-arm be precisely measured.
Current practice is to use a single solution wherein a typical satellite, aircraft or ship uses both GPS and IMU information. The lever-arm between those units, and between them and the antenna aperture, must be carefully calculated. In state-of-the-art practice today, the center of the antenna aperture is used to approximate the motion of the entire aperture. The lever-arm is defined as the perpendicular distance from the fulcrum of a lever to the line of action of the effort or to the line of action of the weight.
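The classic lever-arm correction can be sketched as follows: the velocity of an antenna element offset from the IMU by a lever-arm vector r is the IMU velocity plus the cross product of the angular rate and r. Placing an IMU at each T/R module drives r toward zero. All numeric values below are illustrative assumptions.

```python
import numpy as np

# Lever-arm velocity correction: v_element = v_imu + omega x r.
# A 20 m lever arm with a modest yaw rate produces a 1 m/s error
# if left uncompensated.

def element_velocity(v_imu, omega, lever_arm):
    """Velocity of a point offset from the IMU by lever_arm (3-vectors)."""
    return v_imu + np.cross(omega, lever_arm)

v_imu = np.array([10.0, 0.0, 0.0])   # platform velocity, m/s (assumed)
omega = np.array([0.0, 0.0, 0.05])   # yaw rate, rad/s (assumed)
r = np.array([0.0, 20.0, 0.0])       # element 20 m from the IMU

v_elem = element_velocity(v_imu, omega, r)
```

The same cross-product term appears in GNSS/IMU integration when translating the IMU solution to the antenna phase center.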
All radar techniques require detailed knowledge of motion and its compensation. Additional techniques beyond SAR and ISAR include Interferometric SAR (InSAR), in which two separate SAR images are taken from two different tracks. This and other types of advanced processing place a premium on precise motion data for compensation.
Another type of imaging radar for spacecraft or aircraft is bistatic, e.g., using one platform to transmit, or “forescatter,” RF energy, and a second one to receive the backscattered, or reflected, energy. In the extreme example of the RadarSat in
The figures illustrate only an exemplary embodiment of the invention and are, therefore, not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments and equivalents thereof. The scope of the claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
The present application is a continuation-in-part of International Application No. PCT/US2017/015393, filed on Jan. 27, 2017, which is a continuation-in-part of U.S. patent application Ser. No. 15/206,935, filed Jul. 11, 2016, and claims priority to U.S. Provisional Patent Application No. 62/288,878, filed Jan. 29, 2016, the contents of all of these applications being incorporated herein by reference in their entirety.
| Number | Date | Country |
|---|---|---|
| 62288878 | Jan 2016 | US |

| Relation | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/US2017/015393 | Jan 2017 | US |
| Child | 16046764 | | US |
| Parent | 15206935 | Jul 2016 | US |
| Child | PCT/US2017/015393 | | US |