There are many potential applications for rapid non-contact quantitative distance measurement/mapping in autonomous vehicles (such as self-driving cars), robotic vision, virtual/augmented reality displays, 3D scanning and printing, and biomedical applications, among many others. Previous and current generation light detection and ranging (LiDAR) systems developed for automotive sensing seek centimeter- to decimeter-scale resolution over tens to hundreds of meters of range, for which pulse-echo time-of-flight imaging using direct amplified detection is sufficient and which currently dominates the landscape. Although such systems achieve near video resolution and update rate, the direct detection approach limits the detection sensitivity and dynamic range, and thus makes these systems perform poorly with low-reflectance objects and under bright ambient light.
LiDAR systems are examples of a broader class of modern 3D cameras, often denoted RGB-D (red-green-blue + depth) cameras, which typically provide video-rate room-scale imaging with centimeter-scale resolution. Multiple RGB-D camera technologies exist, including those based on triangulation from stereo imaging, structured light imaging, various forms of photogrammetry, and LiDAR (both with and without RGB information). While RGB-D cameras have become the de facto standard in 3D perception for room-scale applications such as robotic navigation, they exhibit resolution and reliability limitations that have undermined their relevance in more demanding applications, such as recognition of facial features for security or operation of robots in close proximity to humans. Whereas centimeter-scale resolution suffices for a robot navigating through an open environment (e.g., a house or factory), the smaller task space and delicacy of medical or personal care applications of robots demand higher-precision solutions with millimeter-scale resolution. Thus, current RGB-D cameras are restricted to localization of large objects (such as routes to be navigated or boxes to be loaded) or gross anatomic features (such as the location of faces or limbs), which effectively excludes their use in procedures that necessarily work at much smaller scales. Furthermore, operation of current RGB-D cameras in non-ideal imaging conditions frequently yields 3D data with voids (due to occlusion) or spurious depth measurements (due to sensing limitations).
Thus, there exists a need for more effective distance measurement offering higher resolution, higher sensitivity, better immunity from optical interference, and fewer artifacts due to voids, occlusion, and spurious measurements that lead to image noise.
Systems and methods for rapid coherent synthetic wavelength interferometric absolute distance measurement are described herein.
A method of rapid coherent synthetic wavelength interferometric absolute distance measurement includes receiving, from an optical system, an image from an object scene of at least two distinct wavelengths of light, each wavelength's light source having a coherence length greater than a desired ambiguity length of the absolute distance measurement, and whose synthetic wavelength in combination provides the desired ambiguity length of the absolute distance measurement; determining either a phase or a magnitude of an interference signal for each wavelength of light measured individually, or an envelope of a magnitude of a sum of the interference signal for both wavelengths measured together, for interference between light returning from the object scene and light traversing a separate reference arm (or local oscillator) path of the optical system; and calculating an optical distance to an object in the object scene from the phase or the magnitude of the interference signal for each wavelength of light measured individually or the envelope of the magnitude of the sum of the interference signal for both wavelengths measured together, the absolute distance measurement comprising the calculated optical distance to the object.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Systems and methods for rapid coherent synthetic wavelength interferometric absolute distance measurement are described herein.
Current generation LiDAR systems developed for robotic guidance and automotive sensing primarily feature centimeter-scale resolution over tens to hundreds of meters of range, for which pulse-echo time-of-flight (ToF) imaging using direct amplified detection is sufficient. In its simplest form, ToF LiDAR sends out picosecond-scale light pulses and collects the reflected pulse from the object, although amplitude-modulated continuous-wave (AMCW) versions are increasingly popular in highly parallel (emitter-detector array) implementations such as are employed in some smart phones and tablets. In either case, the distance of the object is obtained by multiplying half of the measured round-trip time (or amplitude modulation phase delay in AMCW) by the speed of light in the medium. The primary limitations of ToF LiDAR are the ~centimeter-scale floor on resolution (finer resolution would require laser modulation approaching ~100 GHz), and extreme sensitivity to ambient light (such as sunlight or other ToF LiDARs nearby) due to the direct detection approach. The key to increasing the resolution below ~1 centimeter is to take advantage of the much larger optical bandwidth available from tunable laser sources, coupled with coherent detection via low-coherence interferometry, which also eliminates sensitivity to any other light source. The resulting technology, termed frequency-modulated continuous-wave (FMCW) LiDAR, extends the axial resolution of LiDAR down to the sub-mm scale. In fact, FMCW LiDAR applies exactly the same principle as swept-source optical coherence tomography (SSOCT), a technology developed for biomedical imaging applications, the only difference being that in LiDAR only the peak of the Fourier transform of the laser chirp is saved, because in LiDAR the dominant reflection is assumed to arise from the target surface. Thus, LiDAR is a 3D surface imaging technology, as opposed to the full volumetric tomography (such as SSOCT) more common in medical imaging. One important feature of the present invention is that it removes this inefficiency in current FMCW systems, wherein full depth information for every image pixel is acquired but then mostly discarded. The disclosed technology for rapid coherent synthetic wavelength interferometric absolute distance measurement obtains absolute distance measurements from the minimal number of interferometric measurements necessary to obtain the desired depth range and resolution.
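For concreteness, the pulse-echo ToF relation just described can be evaluated with a minimal sketch (illustrative only, not part of the disclosed systems):

```python
C = 299_792_458.0  # speed of light in vacuum (m/s)

def tof_distance(round_trip_time_s: float, n_medium: float = 1.0) -> float:
    """Pulse-echo ToF ranging: distance = (c / n) * (round-trip time) / 2."""
    return (C / n_medium) * round_trip_time_s / 2.0

# A 100 ns echo corresponds to ~15 m of range; resolving 1 cm requires timing
# resolution of ~67 ps, illustrating the demands on direct-detection electronics.
print(tof_distance(100e-9))  # ≈ 14.99 m
```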
Furthermore, because current-generation FMCW LiDAR systems require a full-range frequency sweep to obtain the target range for each lateral pixel, the throughput (defined as depth voxel acquisition rate) of such systems is equivalent to the laser sweep rate (or A-scan rate in OCT terminology). Due to the limited bandwidth of digitizers and the speed limitations of beam steering using mechanical scanners, previous densely sampled FMCW LiDAR systems have typically suffered from a low 3D frame rate (<1 Hz), which greatly restricts their applications in real-time imaging of dynamic scenes.
Recent work using broadband interferometry, such as frequency-modulated continuous-wave (FMCW) LiDAR, has extended the axial resolution of LiDAR down to the sub-mm scale; however, full 3D surface imaging at full video resolution still requires several seconds of acquisition time. A very attractive capability for widely deployed room-scale LiDAR would be rapid meter-scale distance mapping with millimeter-scale resolution, with near photographic lateral resolution and number of resolvable points (i.e., ~1K horizontal × ~1K lateral × ~1K depth pixels = 1 Gvoxel) acquired and displayed at or near video rate. The resulting required ~30 Gvoxel/sec acquisition rate for full 3D voxel acquisition pushes the boundaries of current digitization technology and thus remains cost prohibitive for mass production.
However, for most envisioned applications, 3D surface imaging (rather than full tomographic 3D voxel acquisition) can be used; thus, a compressive acquisition technique capable of acquiring ~1K horizontal × ~1K lateral points, each with 8- to 10-bit range resolution (thus maintaining ~1K resolvable depth points) at video rate, would greatly reduce the required pixel throughput rate, potentially to <<1 Gvoxel/sec depending upon the achievable compression ratio.
Embodiments are disclosed for measurement of distance to the first or predominant reflection of an object at a specific lateral position in a scene (also known as ranging or 1D surface imaging), along a single lateral direction comprising a linear profile of object surfaces in a scene (2D surface imaging), or along two lateral dimensions for mapping the distance to all objects in a scene within a defined field-of-view (3D surface imaging). In this regard, the first dimension is defined as the distance or range to the first or predominant reflection of an object, the second dimension is defined as one of the two dimensions orthogonal to the first dimension, and the third dimension is defined as the other orthogonal lateral dimension.
Using systems and methods described herein, imaging can be achieved on the scale of meters (with resolution on the scale of millimeters) for room-scale applications in robotic vision, augmented reality, object identification, and security; up to tens of meters (with resolution on the scale of centimeters) for applications in construction, object localization and tracking, asset management and defense; and up to hundreds of meters (with resolution on the scale of decimeters) for applications in autonomous vehicles, among many others. The light wavelengths used for the measurement can be any for which optical components (such as lasers, beamsplitters, fiber optics, and receivers) are available, although the near-infrared regions near 1.3 or 1.5 micrometers wavelength are preferable for eye safety, and the 900-1000 nanometer region is preferable for the availability of inexpensive array detectors. In addition, the principle of operation described herein is broadly applicable to other electromagnetic radiation, even for wavelengths for which optical components are not available and other types of sources and detectors are used (e.g., radar and microwaves).
Synthetic wavelength interferometry involves the extension of the ambiguity length through the use of two separate narrowband light fields with different center wavelengths, where the wavenumber separation corresponds to the synthetic wavelength (e.g., Λ12). This enables imaging on the scale of meters or even longer, while making possible resolution on the scale of millimeters or even less.
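A small numeric sketch (with assumed wavelengths, not values from the disclosure) shows how closely spaced wavelengths yield a meter-scale synthetic wavelength:

```python
def synthetic_wavelength(lam1: float, lam2: float) -> float:
    """Synthetic wavelength Λ12 = λ1·λ2 / |λ2 − λ1| of two narrowband sources."""
    return lam1 * lam2 / abs(lam2 - lam1)

# Two sources near 1.55 µm separated by 1 pm give a ~2.4 m synthetic wavelength:
print(synthetic_wavelength(1.550e-6, 1.550001e-6))  # ≈ 2.40 m
```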
The controller 120, which may be embodied as described with respect to controller 1000 of
The computing device 130, which may be embodied as described with respect to computing device 1050 of
In some cases, controller 120 and computing device 130 are embodied as part of a same device for both control of the optical system 110 and evaluation of the output of the optical system 110. Therefore, while two components are shown in the Figure, a single device may be used.
System 100 can be implemented in vehicles (e.g., for autonomous vehicles) as well as for room-scale applications in robotic vision, augmented reality, object identification, and security.
Method 200 further includes determining (220) either the phase or the magnitude of the interference signal for each wavelength measured individually (“phase-based approach” or “magnitude-based approach”) or the envelope of the magnitude of the sum of the interference signal for both wavelengths measured together (“envelope of magnitude-based approach”), for interference between light returning from the object scene and light traversing a separate reference arm (or local oscillator) path of known optical pathlength of an optical system (e.g., optical system 110 of
The optical system can have an interferometer configuration (including bulk-optic and fiber-optic interferometers) and sample and reference arm configurations (including light collimation/focusing/polarization optics) such as described herein with respect to
Method 200 further includes calculating (230) an optical distance to an object in the object scene from the phase or the magnitude of the interference signal for each wavelength measured individually or the envelope of the magnitude of the sum of the interference signal for both wavelengths measured together. The formulas for optical distance Δz that can be used (described in more detail in the theoretical evaluation section) include, for a phase-based approach:
The formulas for optical distance Δz that can be used (described in more detail in the theoretical evaluation section) include, for a magnitude-based (and, in the third formula, for the envelope of magnitude-based) approach:
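As an illustrative sketch of these calculations, consistent with Eqs. 3-5 and 16-18 of the theoretical evaluation section (hypothetical helper functions, not the claimed implementation):

```python
import numpy as np

def phase_based_dz(phi1: float, phi2: float, dk12: float) -> float:
    """Phase-based: Δz = (ϕ1 − ϕ2) / (2·Δk12) with the Eq. 4 wrap correction;
    unambiguous over Λ12/2."""
    dphi = (phi1 - phi2) % (2 * np.pi)  # adds 2π whenever (ϕ1 − ϕ2) < 0
    return dphi / (2 * dk12)

def magnitude_based_dz(c12_mag: float, dk12: float) -> float:
    """Magnitude-based: Δz = arccos(|C12|) / Δk12, from |C12| = |cos(Δk12·Δz)|;
    unambiguous over Λ12/4."""
    return float(np.arccos(np.clip(c12_mag, 0.0, 1.0))) / dk12
```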
In some cases, method 200 further includes repeating (240) the receiving (210), determining (220), and calculating (230) with two or more additional wavelengths whose synthetic wavelength in combination provides a higher-resolution distance measurement than the previous measurement.
In some cases, method 200 further includes combining (250) the distance measurements from several combinations of synthetic wavelengths to provide a high-resolution distance measurement over a long measurement range. The combining 250 can be performed using, for example, a binary successive approximation (see e.g., theoretical evaluation section).
As mentioned above, phase-based, magnitude-based, and envelope of magnitude-based approaches, optionally in combination with a binary successive approximation approach for resolution improvement and range extension, can be used to determine a distance measurement from detected signals of an optical system. Various optical systems may be used to achieve rapid coherent synthetic wavelength interferometric absolute distance measurement through these approaches, including the optical circulator-based non-quadrature detection optical systems described with respect to
In a broad application to other forms of electromagnetic radiation, for example, using other types of sources and detectors, an example method can include receiving, from a sensing system, data from an object scene of at least two distinct wavelengths of radiation; determining either a phase or a magnitude of an interference signal for each wavelength measured individually or an envelope of a magnitude of a sum of the interference signal for both wavelengths measured together, for interference between radiation returning from the object scene and radiation traversing a separate reference arm/local oscillator path of the sensing system; and calculating a distance to an object in the object scene from the phase or the magnitude of the interference signal for each wavelength measured individually or the envelope of the magnitude of the sum of the interference signal for both wavelengths measured together, the absolute distance measurement comprising the calculated distance to the object. The formulas presented herein can be applied in this scenario and used in the calculating steps.
Traditional methods for coherent interferometric quadrature detection for optical metrology and profilometry typically involve either stepping or sweeping the pathlength difference between reference and sample paths of the measurement interferometer, often by multiple sequential steps of π/2 phase delay, using a piezoelectric-mounted reflector, a fiber stretcher, or other means for differential modulation of the two pathlengths. These historic methods are also applicable for quadrature detection in the present invention; however, the sequential nature of these approaches increases the measurement time and optical system stability requirements, so they are likely not optimal for high-speed coherent synthetic wavelength interferometric absolute distance measurement.
Alternatively, optical hybrid and polarization encoding-based optical systems are suitable for simultaneous phase-separated quadrature detection, providing for very high measurement speed. Optical hybrids based on integrated- or micro-optics technology (such as the Six-Port 90-Degree Optical Hybrid product available from OptoPlex Corporation (Fremont, California)) are available with intrinsic π/2 phase separation for instantaneous optical quadrature detection, for example, as illustrated in
Any of these quadrature detection technologies can be operated either simultaneously or sequentially at multiple wavelengths, particularly for very closely spaced wavelengths (i.e., within a few nm) such as are necessary for synthetic wavelength-based imaging with millimeter-scale resolution and meter-scale ambiguity range. As such, they can be employed for either phase- or magnitude-based detection at n wavelengths λ1 . . . λn, where each sequential wavelength separation increases by a factor of 2 for n-bit depth (Δz) measurement. This can be done either sequentially, or at pairs of wavelengths λ1+λ2, λ1+λ3, λ1+λ4 . . . simultaneously for the envelope of magnitude-based approach. The sequential approach can be implemented with a single laser capable of rapid tuning to n wavelengths with such power-of-2 wavelength separation (e.g., a random access laser), followed by phase-based or magnitude-based analysis using the methods described herein. Detection at the same pairs of wavelengths simultaneously, however, requires either two lasers (with the first fixed at λ1, the second rapidly tuned to the other (n−1) power-of-2 wavelengths, and their outputs summed together), or a single laser which can output the rapidly tuned required pairs of wavelengths.
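A sketch of such a power-of-2 wavelength schedule (with assumed, illustrative numbers):

```python
def wavelength_schedule(lam1: float, dlam: float, n: int) -> list:
    """λ2 . . . λn chosen so each separation from λ1 doubles, halving the
    synthetic wavelength (and thus the ambiguity length) with each added bit."""
    return [lam1 + dlam * 2 ** i for i in range(n - 1)]

# Hypothetical: λ1 = 1550 nm, 1 pm initial separation, n = 11 wavelengths,
# supporting a 10-bit successive-approximation depth measurement.
lams = wavelength_schedule(1.550e-6, 1e-12, 11)
```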
Light from the dual laser sources 302-A, 302-B is combined by the first fiber coupler 304-A where it is also split into sample and reference paths. Light in the sample arm is directed into the port I of the fiber optic optical circulator 306, whereupon it is sent out port II towards the object 316. Upon exiting the port II fiber of the circulator 306, the sample light is collimated or focused (by collimating lens 308) in the neighborhood of the sample surface of the object scene and laterally scanned in 1 or 2 lateral directions by the optical scanner 310, which can be any suitable scanning mechanism such as some combination of galvanometer scanners, resonant scanners, MEMS scanners, rotating polygonal mirrors, acousto-optic scanners, or any other lateral light scanning technology.
Light reflected from the sample of the object scene 316 is focused back into the fiber leading to port II of the optical circulator 306, which directs it to port III, whereupon it is combined with light transmitted through the transmissive reference arm in the second fiber coupler. Light in the reference arm is optionally modulated by a phase modulator (PM) 312 as needed for direct detection of the envelope of the magnitude of the dual-wavelength interferogram using low-pass filtering of a rectified, modulated interferometric signal, as detailed in the Theoretical Evaluation section. The phase modulator 312 may be placed in either the sample or reference arm (as illustrated); in fact, sufficient signal modulation for envelope detection may be derived from other sources, such as unavoidable motion of the interferometer arm fibers or of the sample object itself, or slight modulation of the wavelength of either laser source by, for example, rapid current or temperature modulation of the source laser diode. The second fiber coupler is designed for 50/50 power splitting so that equal powers of combined light returning from the sample and reference arms are incident on the high-speed balanced differential optical receivers. With this configuration, interfering light impinging on each detector (of receiver 314) will have a phase shift of 0° (D1) and 180° (D2) between sample and reference arm optical paths, as needed for differential detection of the dual-wavelength summed interferogram Ĩ12 and direct detection of its envelope.
The directional polarization gate 356 is able to perform the same optical function as the fiber-optic circulator, but with better isolation performance (isolation referring to suppression of any light which leaks from port I to port III directly within the circulator). Even if the circulator itself has excellent isolation performance, a finite reflection from the angled tip of the fiber emerging from port II of a circulator can have as much as ~−50 dB reflectivity, which creates the same effect as poor isolation and can be enough to overwhelm the light power reflected from the sample object, thus rendering homodyne detection impossible. To correct this issue, in
The quarter-wave plate 358 will generate circular polarization of the light incident on the object scene 370, and upon passing again through the quarter-wave plate 358 on the return path, the reflected light will be linearly horizontally polarized and will thus pass through the PBS 356, be re-focused into the remaining sample arm fiber, and be differentially detected (see also description of
A difference between the configuration illustrated in
The laser source 602 is assumed unpolarized, and the source light is split into sample (e.g., of scene 612) and reference (e.g., reference 614) paths using NPBS 604-B. Light propagating (e.g., through an eighth-wave birefringent plate 616) in the reference arm 614 with vertical polarization axis is delayed by 90° (round trip) relative to horizontally polarized reference arm light. Light in the sample arm 612 is collimated or focused (for increased light collection efficiency, not shown) in the neighborhood of the sample surface of scene 612 and laterally scanned in 1 or 2 lateral directions by scanner 606, which can be any suitable scanning mechanism such as some combination of galvanometer scanners, resonant scanners, MEMS scanners, rotating polygonal mirrors, acousto-optic scanners, or any other lateral light scanning technology. Light reflected from the sample of scene 612 is combined with light returning from the reference arm 614 in the NPBS 604-B and directed (e.g., via NPBS 604-A and mirrors) to a pair of polarization-separated (e.g., via PBS 608-A, 608-B) high-speed balanced differential optical receivers, D1-D3 (610-A) and D2-D4 (610-B). With this configuration, interfering light in matching polarization states impinging on each detector will have a phase shift of 0° (D1), 90° (D2), 180° (D3), and 270° (D4) between sample 612 and reference arm 614 optical paths, as needed for the four phase-stepping quadrature measurements. Thus, the output of differential receiver 610-A provides I1^0 − I1^180, and the output of differential receiver 610-B provides I1^90 − I1^270, as required by the provided formulas for phase-stepped phase-based and magnitude-based calculation of the interferometer pathlength difference at each wavelength. In some cases, optical circulators or a transmissive reference arm can be used to improve the topological efficiency of the illustrated configuration.
In the illustrated implementation, fibers prior to the fiber PBSs 708-A, 708-B are non-polarization-maintaining (non-PM) fibers, and the fibers emerging from the fiber PBSs 708-A, 708-B are PM fibers. Since non-PM fibers can have birefringence depending upon their specific length and bending-induced stress, fiber polarization controllers 718-A, 718-B, 718-C are included (shown as sets of three sequential fiber loops in
Referring to
Here, the laser source 802 is unpolarized, and the source light is split into a sample (e.g., of scene 812) and a transmissive reference arm path using NPFS 804-A. Light in the sample arm 812 is collimated or focused (e.g., using lens 816, for increased light collection efficiency) in the neighborhood of the sample surface of scene 812 and laterally scanned in 1 or 2 lateral directions by optical scanner 806, which can be any suitable scanning mechanism such as some combination of galvanometer scanners, resonant scanners, MEMS scanners, rotating polygonal mirrors, acousto-optic scanners, or any other lateral light scanning technology. Light reflected from the sample of scene 812 is combined with light in the transmissive reference arm path in the NPFS 804-B and directed to a pair of polarization-separated (e.g., via fiber PBS 808-A, 808-B) high-speed balanced differential optical receivers, D1-D3 (810-A) and D2-D4 (810-B). With this configuration, interfering light in matching polarization states impinging on each detector will have a phase shift of 0° (D1), 90° (D2), 180° (D3), and 270° (D4) between the paths, as needed for the four phase-stepping quadrature measurements. Thus, the output of differential receiver 810-A provides I1^0 − I1^180, and the output of differential receiver 810-B provides I1^90 − I1^270, as required by the provided formulas for phase-stepped phase-based and magnitude-based calculation of the interferometer pathlength difference at each wavelength. Fiber polarization controllers 818-A, 818-B, 818-C are also included (shown as sets of three sequential fiber loops in
Although the implementations in
In this implementation, light from the laser source 902 illuminates both the reference reflector 910 and the sample scene 912, both of which are imaged (with magnification −fcamera/freference and −fcamera/fsample, respectively) onto a line-scan or area-scan camera (e.g., camera 908). The reference arm reflector 910 is tilted at a small angle such that, when imaged onto the detector 906, given the detector pixel spacing, the differential phase delay between reference and sample beams as described in
For off-axis holography using an area detector, all lenses depicted in
As explained above, it is desirable to have an acquisition technique capable of acquiring ~1K horizontal × ~1K lateral points, each with 8- to 10-bit range resolution (thus maintaining ~1K resolvable depth points) at around video rate, or around 1K × 1K × 1 byte × 30 ≈ 30 MB/sec data throughput rate. Using modern optical communications-grade equipment/components in combination with the binary successive approximation approaches described herein, this should be readily achievable.
Components of computing device 1050 may represent an imaging system, a personal computer, a mobile device, a wearable computer, a smart phone, a tablet, a laptop computer, or other computing device. Accordingly, more or fewer elements described with respect to computing device 1050 may be incorporated to implement a particular computing device.
Computing device 1050 can include at least one processor 1055 connected via a system bus 1060 to components including a system memory 1065 and a mass storage device 1070. Examples of processor 1055 include general purpose central processing units (CPUs), graphics processing units (GPUs), field programmable gate arrays (FPGAs), application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof.
It can be understood that the mass storage device 1070 may involve one or more memory components including integrated and removable memory components and that one or more of the memory components can store instructions 1075 for performing a method of rapid coherent synthetic wavelength interferometric absolute distance measurement such as described with respect to method 200 of
Examples of mass storage device 1070 include removable and non-removable storage media including random access memory, read only memory, magnetic disks, optical disks, CDs, DVDs, flash memory, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other suitable storage media. Mass storage device 1070 does not consist of propagating signals or carrier waves.
The system memory 1065 may include a random-access memory (“RAM”) and/or a read-only memory (“ROM”). The RAM generally provides a local storage and/or cache during processor operations and the ROM generally stores the basic routines that help to transfer information between elements within the computer architecture such as during startup.
The system can further include user interface system 1080, which may include input/output (I/O) devices and components that enable communication between a user and the computing device 1050. User interface system 1080 can include one or more input devices such as, but not limited to, a mouse, track pad, keyboard, a touch device for receiving a touch gesture from a user, a motion input device for detecting non-touch gestures and other motions by a user, a microphone for detecting speech, and other types of input devices and their associated processing elements capable of receiving user input. The user interface system 1080 may also include one or more output devices such as, but not limited to, display screen(s), speakers, haptic devices for tactile feedback, and other types of output devices. In certain cases, the input and output devices may be combined in a single device, such as a touchscreen display which both depicts images and receives touch gesture input from the user.
The network interface 1085 allows the system to communicate with other computing devices, including server computing devices and other client devices, over a network.
Certain techniques set forth herein may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computing devices. Generally, program modules include routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types.
Embodiments may be implemented as a computer process, a computing system, or as an article of manufacture, such as a computer program product or computer-readable medium. Certain methods and processes described herein can be embodied as code and/or data, which may be stored on one or more computer-readable media. Certain embodiments of the invention contemplate the use of a machine in the form of a computer system within which a set of instructions, when executed, can cause the system to perform any one or more of the methodologies discussed above. Certain computer program products may be one or more computer-readable storage media readable by a computer system and encoding a computer program of instructions for executing a computer process.
It should be understood that as used herein, in no case do the terms “storage media,” “computer-readable storage media” or “computer-readable storage medium” consist of transitory carrier waves or propagating signals. Instead, “storage” media refers to non-transitory media.
The detected signal intensity from an optical interferometer, or optical system, such as illustrated in
Various methods for coherent detection are available which consist of obtaining multiple phase-separated or phase-swept measurements of I1 such that ϕ1 may be solved (independent of A and B, but modulo ±π). These methods form the basis for the industry of nanometer-scale metrology of surfaces; however, such measurements suffer from phase wrapping if object surface variations exceed λ0/2 (the so-called measurement “ambiguity length”) due to the periodicity of the inverse cosine. However, if another interference measurement I2 at another wavelength λ2 is also obtained, the two measurements taken together are characterized by the so-called “synthetic wavelength” Λ12 = 2π/Δk12 = λ1λ2/(λ2−λ1), where Δk12 = k1 − k2, where kn = 2π/λn, λn is a particular wavelength of light, and n is an index representing distinct wavelengths of light. Importantly, the ambiguity length associated with this measurement is also extended to Λ12/2, so a proper choice of (λ1, λ2) can lead to arbitrary unambiguous target distance measurement, so long as each of λ1 and λ2 has a sufficiently narrow linewidth that their corresponding coherence lengths are also at least Λ12. For example, for Λ12 = 1 m, λ1 − λ2 ≈ 1 pm, and both (λ1, λ2) must have ≤1 pm or ≤300 MHz linewidth (assuming operation at the 1.5 μm optical communications wavelength).
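The scaling in this example can be checked with a short sketch, using Δλ ≈ λ²/Λ12 and the coherence-length condition δν ≲ c/Λ12 (illustrative only; the exact separation depends on the rounding convention adopted above):

```python
C = 299_792_458.0  # speed of light (m/s)

def pair_design(lam: float, big_lambda: float):
    """Wavelength separation and maximum linewidth for a target synthetic
    wavelength Λ12, requiring a coherence length of at least Λ12."""
    dlam = lam ** 2 / big_lambda  # since λ1·λ2 ≈ λ²
    dnu_max = C / big_lambda      # linewidth bound (Hz)
    return dlam, dnu_max

# For Λ12 = 1 m at λ = 1.5 µm: a few-pm separation and ≤300 MHz linewidth.
print(pair_design(1.5e-6, 1.0))  # ≈ (2.25e-12 m, 3.0e8 Hz)
```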
The illustrated approaches for rapid coherent synthetic wavelength interferometric absolute distance measurement can utilize coherent quadrature detection, for example, via phase stepping coherent quadrature detection and phase sweeping coherent quadrature detection. In general, coherent quadrature detection consists of obtaining measurements proportional to the detected interferometric signal in “quadrature”, i.e., separated by 90 degrees, often referred to as the real and imaginary components of the interferometric signal. Once the real and imaginary components are obtained, Eq. 1 may be solved for parameters useful for absolute distance measurement such as B and ϕ1, assuming A is consistent between the measurements. However, since both A and B include detector-specific factors, and since high-speed measurements are sensitive to optical noise sources such as excess noise arising from non-ideal light sources, a more common approach for high-speed and high-sensitivity coherent quadrature detection is to utilize an interferometer topology which allows for differential measurement of each of the real and imaginary components of the interferometric signal, obtained from four fixed π/2 radian separated measurements of I1 obtained from two balanced differential detectors (see e.g., various optical systems shown herein). However, it should be understood that the principles disclosed here for rapid coherent synthetic wavelength interferometric absolute distance measurement may equally take advantage of many other approaches which are well known in the art of interferometry for detection of the real and imaginary components of the interferometric signal.
Phase stepping coherent quadrature detection consists of methods for acquiring interferometric signal measurements at each of four fixed π/2 radian separated phase steps, either sequentially such as using a stepped piezoelectric actuator, or simultaneously such as with a 90 degree optical hybrid or with polarization phase encoding. For the case of phase stepping coherent quadrature detection, the interferometric measurements obtained are:
These expressions may be solved for the parameters A, |B|, and 2k1Δz as follows:
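As a sketch of the standard four-step quadrature solution consistent with this description (the sign of the sine term follows the phase-step direction, an assumption here):

```python
import numpy as np

def quad_solve_4step(i0, i90, i180, i270):
    """Four-step quadrature solution, assuming I_θ = A + |B|·cos(ϕ − θ):
       A         = (I0 + I90 + I180 + I270) / 4
       |B|       = sqrt((I0 − I180)² + (I90 − I270)²) / 2
       ϕ = 2k1Δz = atan2(I90 − I270, I0 − I180), modulo 2π."""
    a = (i0 + i90 + i180 + i270) / 4.0
    b_mag = 0.5 * np.hypot(i0 - i180, i90 - i270)
    phi = np.arctan2(i90 - i270, i0 - i180) % (2 * np.pi)
    return a, b_mag, phi
```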
For the case of an optical system based on a 3×3 fiber coupler, phase stepping coherent quadrature detection requires modified formulae, since the three outputs of the coupler provide phase-separated measurements of 2π/3 radians (120 degrees) rather than π/2 radians (90 degrees). In this case, assuming three 120-degree-separated interferometric measurements are obtained as I1^−120, I1^0, and I1^+120, then the relevant formulae for |B| and 2k1Δz are as follows:
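Likewise, a sketch of the 120-degree solution using standard trigonometric identities (assuming measurements I_θ = A + |B|·cos(ϕ + θ) at θ = −120°, 0°, +120°):

```python
import numpy as np

def quad_solve_3x3(i_m120, i_0, i_p120):
    """Three-output (2π/3-separated) solution:
       |B|·cosϕ = (2·I0 − I−120 − I+120) / 3
       |B|·sinϕ = (I−120 − I+120) / √3   (sign follows the step direction)"""
    re = (2.0 * i_0 - i_m120 - i_p120) / 3.0
    im = (i_m120 - i_p120) / np.sqrt(3.0)
    b_mag = np.hypot(re, im)
    phi = np.arctan2(im, re) % (2 * np.pi)  # ϕ = 2·k1·Δz modulo 2π
    return b_mag, phi
```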
Phase sweeping coherent quadrature detection (sometimes referred to as “integrating buckets”) consists of methods for acquiring interferometric signal measurements while sweeping the pathlength difference over π/2 during each acquisition. This can also be accomplished either sequentially, such as using a continuously moving piezoelectric actuator, or simultaneously, such as with off-axis holography, in which case the pathlength difference between reference and sample arms increases continuously across each detector pixel element. For the case of phase sweeping coherent quadrature detection, the interferometric measurements obtained are
These expressions may be solved for the parameters A, |B|, and 2k1Δz as follows:
Either the phase stepping or the phase sweeping approaches may be used with the corresponding described optical configurations. It should be understood that these expressions are examples valid only for the simplest phase stepping or phase sweeping approaches using 4 equally spaced steps or sweeps; other approaches using more or fewer phase steps or sweeps to solve for the parameters A, |B|, and 2k1Δz may also be derived, as have been used for alternative optical metrology designs.
As described above, one method of obtaining an absolute distance measurement (e.g., the distance from a known position in the sample arm to the surface or principal reflection from an object in a scene) is a phase-based approach. The phase-based approach uses the phases of the interferometric signal intensity at two separate wavelengths measured separately, using any of multiple available techniques for phase-sensitive interferometry. This can be done with both wavelengths illuminated simultaneously and some means implemented for detecting them separately (such as wavelength selective filtering or differential modulation), or more simply by just illuminating each wavelength and recording the phase values sequentially. In either case, one obtains a phase measurement at each of the separated wavelengths (Eq. 2):
Intuitively, the phase of the interferogram at each wavelength will accumulate 2π for each fringe at that wavelength, such that at the round-trip distance corresponding to the synthetic wavelength Λ12 the interferogram for the shorter wavelength will have accumulated one additional fringe and thus 2π additional (round trip) phase compared to the interferogram at the longer wavelength. Thus, the distance to the reflector can be obtained from the difference between the phases accumulated at each wavelength, scaled by the wavenumber difference (Eq. 3):
However, this results in a phase wrapping artifact whenever (ϕ1−ϕ2)<0, which can be taken care of by asserting the condition (Eq. 4):
Since the phase difference accumulates to 2π over the length of the synthetic wavelength Λ12, the (single pass) pathlength difference “ambiguity length” for the phase-based approach corresponds to (Eq. 5):
Many approaches are possible to obtain the required phase measurements for the phase-based approach at λ1 and λ2. In particular, since each of the component interferograms is at an optical wavelength, avoidance of phase washout due to motion of the sample and/or reference requires that both phase measurements be acquired quickly compared to any motion of the sample or reference on the scale of the optical wavelength. Coherent quadrature detection can be used to achieve this, using modern techniques and available optical components which permit either rapid sequential single-channel phase detection on the ≤nanosecond scale (using optical hybrids or polarization encoding, for example), or rapid parallel multi-channel detection on the ms-to-μs scale (using off-axis holography with line-scan or rapid area-scan cameras, for example).
Using the coherent quadrature detection approach detailed above, the phase measurements at each of the wavelengths can be obtained using (Eq. 6):
The pathlength difference in the interferometer is expressed explicitly as (Eq. 7):
The phase wrapping artifact solution remains (Eq. 8):
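An end-to-end numeric check of Eqs. 6-8 with assumed wavelengths (illustrative values, not disclosed parameters):

```python
import numpy as np

lam1, lam2 = 1.550e-6, 1.550002e-6       # assumed pair; Λ12 ≈ 1.2 m
k1, k2 = 2 * np.pi / lam1, 2 * np.pi / lam2
dk12 = k1 - k2
dz_true = 0.137                          # m, within the ambiguity length Λ12/2

phi1 = (2 * k1 * dz_true) % (2 * np.pi)  # quadrature-detected phases (Eq. 6)
phi2 = (2 * k2 * dz_true) % (2 * np.pi)

dphi = (phi1 - phi2) % (2 * np.pi)       # Eq. 8 phase-wrap correction
print(dphi / (2 * dk12))                 # Eq. 7 → ≈ 0.137 m
```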
One practical question, particularly for any approach based on measuring optical phase, is the robustness of the distance measurement technique to the presence of noise in the detected signals.
As described above, another method of obtaining an absolute distance measurement (e.g., the distance from a known position in the sample arm to the surface or principal reflection from an object in a scene) is a magnitude-based approach, which detects the magnitude or visibility of the interferometric signal at each of two wavelengths. This approach starts with incoherent summation of the intensities detected at the two wavelengths λ1 and λ2 (Eq. 9):
The latter expression is written in terms of the mean wavenumber
I12 also represents an interferometric signal with rapid oscillation according to the mean wavenumber
where the “synthetic wavelength envelope amplitude” C12 is given by (Eq. 11):
The synthetic wavelength envelope amplitude becomes the measurement of interest because it has an extended ambiguity length satisfying Δk12Δzamb,env=2π, or Δzamb,env=Λ12 (Eq. 12).
If the synthetic wavelength envelope amplitude C12 can be measured experimentally, then the pathlength difference between the reference and a given position on the sample Δz can be extracted directly from inversion of Eq. 11, since Δk12 is known. It is highly desirable that this measurement be independent of A and B, since those variables depend upon details of reference and sample reflectivity, both of which may be noisy, and the latter of which is likely to vary rapidly as a function of position on the sample. Eq. 9 also assumes nearly monochromatic light sources and matching polarization states between the interferometer arms; if these assumptions are relaxed (i.e., for reduced temporal coherence), the form of Eq. 10 still holds, with the variables A and B expected to be nearly identical for closely spaced wavelengths λ1 and λ2.
Quantitative ranging with extended ambiguity length on the order of the synthetic wavelength can be accomplished through a combination of quadrature detection of the dual-wavelength interferogram values, and ratioing the resulting measured amplitude to that of a separate measurement with known (e.g., unity) envelope amplitude. The required quadrature measurements are (Eq. 13):
Note that even though the synthetic wavelength envelope amplitude C12(Δk12,Δz) is also a function of Δz, it is not necessary to include phase steps in Δz in the above equations on the scale of π/2 at the mean wavenumber
Combining these multiphase interferometric expressions, an expression is obtained which contains the magnitude of the synthetic wavelength envelope (Eq. 14):
To eliminate the remaining factor of B, note that for any one of the constituent measurements at a single wavelength I1 or I2, the visibility for a single-wavelength interference measurement is unity, since for Δk11 = k1 − k1 = 0, C11(Δk11, Δz) = cos(0·Δz) = 1. Thus, at either single wavelength, it is possible to make the same multi-phase measurement to obtain B (Eq. 15):
Finally, it is possible to solve for the unknown path length difference Δz by combining Eqs. 11, 14, and 15 as (Eqs. 16 and 17):
Since with this measurement only the magnitude (rather than the amplitude) of the synthetic wavelength envelope is obtained, the ambiguity length of the unknown pathlength difference is reduced by an additional factor of 4, corresponding to the first quarter of the synthetic wavelength envelope cosine (the principal value of Eq. 17), and is given as (Eq. 18):
The preceding describes measurement of the synthetic wavelength magnitude |C12| over its extended ambiguity length Λ12/4. The sensitivity of the measurement as a function of range will be proportional to the z derivative of |C12|. Assuming |C12| is of the form |C12(Δk12,Δz)|=| cos(Δk12Δz)|, the sensitivity will be of the form (Eq. 19):
which indicates that the sensitivity will be least near z=0, and greatest near z=Λ12/4.
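A sketch combining the ratioing normalization and inversion just described (Eqs. 14-19), where q12 and q1 are hypothetical names for the quadrature-derived magnitudes B·|C12| and B, respectively:

```python
import numpy as np

def c12_from_ratio(q12: float, q1: float) -> float:
    """|C12| = (B·|C12|) / B: the dual-wavelength quadrature magnitude (Eq. 14)
    ratioed to the unity-visibility single-wavelength magnitude (Eq. 15)."""
    return float(np.clip(q12 / q1, 0.0, 1.0))

def dz_from_c12(c12_mag: float, dk12: float) -> float:
    """Invert |C12| = |cos(Δk12·Δz)| over its principal quarter (Eqs. 16-18);
    per Eq. 19, sensitivity vanishes near Δz = 0 and peaks near Δz = Λ12/4."""
    return float(np.arccos(c12_mag)) / dk12
```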
Using quadrature detection for the magnitude-based approach has the advantage of very straightforward formulation as shown above, as well as supporting a straightforward hardware implementation. However, quadrature detection still requires phase stability for the required quadrature measurements at both wavelengths. To avoid this drawback, it would be desirable to detect the synthetic wavelength envelope shape C12(Δk12,Δz) in Eq. 10 directly, without requiring phase stability at the mean wavenumber
Direct detection of the envelope of the magnitude of the dual-wavelength interferogram again starts with incoherent summation of the intensities detected at the two wavelengths λ1 and λ2 (Eq. 9); however, we assume the factor A (the DC component) has already been removed through balanced differential detection such as illustrated in
As is well known from electronics (for example, a simple amplitude-modulated radio receiver), a common means for direct envelope detection consists of low-pass filtering of a rectified, modulated signal. Thus, in
Here, the modulation is not included in the second cosine (corresponding to the synthetic wavelength envelope amplitude C12) because the modulation distance is much smaller than the synthetic wavelength Λ12. Envelope detection of the signal in Eq. 20 then proceeds straightforwardly: the signal is digitized over a few cycles of f, then rectified and low-pass filtered (LPF) at a cutoff frequency less than f (Eq. 21):
Similar to the wavelength ratioing approach described above for the quadrature-detected magnitude-based approach, the value of B may be obtained by simply repeating the same envelope detection procedure with only one of the two wavelengths λ1 or λ2 illuminated, since then (Eq. 22):
Finally, the unknown path length difference Δz is obtained by combining Eqs. 21 and 22 as (Eq. 23):
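A simulation sketch of this rectify-and-low-pass procedure (Eqs. 20-23), with hypothetical modulation parameters; note that B and the rectification factor cancel in the final ratio:

```python
import numpy as np

fs, f_mod = 5e6, 10e3                        # sample rate, modulation frequency (Hz)
t = np.arange(0, 20 / f_mod, 1 / fs)         # 20 modulation cycles
m = 10 * np.pi                               # modulation depth >> 2π at the mean wavenumber
B, C12_true = 1.0, 0.42                      # assumed amplitude and envelope value
phi0 = 1.234                                 # arbitrary, uncontrolled carrier phase 2·k̄·Δz

carrier = np.cos(phi0 + m * np.sin(2 * np.pi * f_mod * t))
lpf = np.mean                                # crude low-pass filter: average over many cycles

env12 = lpf(np.abs(B * C12_true * carrier))  # rectified dual-wavelength signal (Eq. 21)
env1 = lpf(np.abs(B * 1.0 * carrier))        # single-wavelength reference, |C11| = 1 (Eq. 22)
print(env12 / env1)                          # Eq. 23 → ≈ 0.42, independent of phi0 and B
```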
The described envelope of magnitude-based approach has a substantial advantage over both the quadrature-detected phase- and magnitude-based approaches in its significantly relaxed temporal stability requirements, since small variations in the interferometric phase (due to object motion or wavelength instability, for example) will not substantially affect the described envelope measurement. In fact, the required phase modulation included in Eq. 20 need not be exactly cosinusoidal as described, or even purposefully executed using optical devices. In some cases, such as if there is already expected to be motion of the sample within some known frequency range (such as natural motion of the sample or unavoidable oscillations of the interferometer arms), that modulation alone may suffice for envelope detection. Additionally, the described method for low-pass filtering of the rectified, digitized signal is only one of many potential approaches which could be employed for direct detection of the dual-wavelength interferogram envelope C12(Δk12,Δz). Other potential approaches could include modulation of both the pathlength difference Δz and the detector gain in order to implement lock-in detection, or various means which have been described for envelope detection in the optical (rather than electronic, post-digitization) domain.
The preceding section assumed that the visibility of the individual interferograms at wavelengths λ1 and λ2, or more specifically the interferometric coefficients A and B for each of the wavelengths, are identical. For most situations this should be true for λ1 and λ2 separated by a small amount (e.g., a few nm or less), since the reflectivity of sample objects and transmissivity of interferometer components (fiber optics, beamsplitters, etc.) typically vary much more slowly with wavelength than that. However, some situations may arise for which this assumption is not true. In particular, for sample objects which contain multiple axially spaced reflectors, self-interference of sample arm light within the sample may fairly strongly modulate the overall sample reflectivity as a function of wavelength. In this situation, the assumption of identical interferometric coefficients may not hold, and the intensities I1, I2 and the incoherent summation I12 can be given as (Eq. 24):
The effect of allowing different visibility at different wavelengths is the reduction of the modulation of the synthetic wavelength interferogram.
The previous sections describe methods for measurement of an unknown pathlength difference Δz over an extended ambiguity length Δzamb characterized by the synthetic wavelength of dual-wavelength interferometer measurements (Λ12/2 for the phase-based approach, and Λ12/4 for the magnitude-based approaches). As illustrated in
Another method to obtain a range measurement with increased range resolution is to multiplex multiple measurements of the pathlength difference obtained using different synthetic wavelengths. There are multiple ways to approach this, more generally formulated as an optimization problem. A very simple successive approximation approach is provided below as an illustration. This approach, while simple, achieves the remarkable goal of providing a measurement with n bits of depth resolution using only n+1 synthetic wavelength measurements.
A particularly simple method for multiplexing multiple synthetic wavelength measurements for increased range resolution which has similarities to the successive approximation approach used in analog/digital converters is illustrated in
An issue can arise if any of the just described measurements Δz12, Δz13, . . . Δz1n are equal to Λ12/2 to within the measurement sensitivity. The simplest solution to this is simply to disregard that particular range measurement as unreliable, and move on to the next. However, more sophisticated solutions which incorporate principles of error coding from digital communications systems are also applicable.
The above method for multiplexing multiple synthetic wavelength measurements to increase range resolution is particularly straightforward since, according to Eqs. 5 and 7, the detected pathlength difference Δz12 is linear over the full range of the ambiguity length Δzamb,12 = Λ12/2. However, for the magnitude-based approach, as evident from Eqs. 16-18, since only the magnitude of the synthetic wavelength envelope |C12| is obtained, the ambiguity length Δzamb,mag = Λ12/4 is reduced by a factor of 4, corresponding to the first quarter of |C12(Δk12, Δz)| = |cos(Δk12·Δz)|. In this case, the successive approximation resolution improvement of the pathlength difference estimate Δz can be described based on analysis of the synthetic wavelength envelope |C12| rather than the pathlength difference estimate Δz directly, just to illustrate a different way to attack the problem.
Similar to that described above for the phase-based approach, a goal is to derive an n-bit binary-encoded digital word b = b12b13b14 . . . b1n which describes the distance to the reflector with n bits of resolution. In the current case, a first measurement is made to obtain the synthetic wavelength magnitude |C12| at a pair of wavelengths λ1, λ2 with ambiguity length z = Λ12/4 as described above. The first (most significant) bit b12 of b = b12b13b14 . . . b1n is set to 0 if |C12| > cos(π/4) = 0.707, in which case the reflector position is less than z = Λ12/8, and set to 1 if |C12| < cos(π/4) = 0.707. This encodes whether the sample is within the first half or second half of the first measurement's ambiguity length. Also, a temporary variable B12 is set to −1 if |C12| > 0.707, and +1 if |C12| < 0.707. Then a second measurement is made to obtain the synthetic wavelength magnitude |C13| at a pair of wavelengths λ1, λ3 which have twice the original wavelength separation, λ3 − λ1 = 2(λ2 − λ1), so that this new measurement has ambiguity length z = Λ13/4 = Λ12/8. Again, this gives a second measurement with twice the range sensitivity, but half the ambiguity length. This new range information can be encoded as the second bit b13 of b = b12b13b14 . . . b1n by setting this bit to 0 if B12·|C13| > 0.707 and to +1 if B12·|C13| < 0.707. Incorporation of the temporary variable B12 accounts for the fact that |C13| reverses polarity beyond its ambiguity length. Also, another temporary variable B13 is set to −1 if |C13| > 0.707, and +1 if |C13| < 0.707. This process is then repeated n times, where n is the desired bit depth of the range measurement. For each successive bit b1n, that bit is set to 0 if
As mentioned above, an issue arises if any of the just-described measurements |C12|, |C13|, |C14|, . . . , |C1n| are equal to 0.707 to within the measurement sensitivity. A simple solution to this issue is to disregard that particular range measurement as unreliable, and move on to the next. However, more sophisticated solutions which incorporate principles of error coding from digital communications systems are also possible.
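One consistent reading of the bit and polarity (B-variable) bookkeeping above is a Gray-code decode, sketched below with hypothetical inputs; a practical implementation would add the unreliable-measurement handling just discussed:

```python
import numpy as np

THRESH = np.cos(np.pi / 4)  # ≈ 0.707

def decode_range_bits(c_mags):
    """Map magnitudes |C12|, |C13|, . . . |C1n| (each ambiguity length half the
    previous) to an n-bit range word: the threshold comparisons form a Gray
    code, converted to binary by cumulative XOR."""
    bits, acc = [], 0
    for c in c_mags:
        acc ^= 1 if c < THRESH else 0  # running binary bit XOR current Gray bit
        bits.append(acc)
    return bits

# Check against a reflector at fraction f of the base ambiguity length Λ12/4:
f = 0.6
c_mags = [abs(np.cos((np.pi / 2) * f * 2 ** i)) for i in range(4)]
print(decode_range_bits(c_mags))  # [1, 0, 0, 1] = 9, matching int(f * 16)
```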
For a laser-based LiDAR system to be widely utilized, it must not pose an eye safety hazard to users or bystanders. The most popular wavelengths for LiDAR systems are currently λ~900 nm, and the principal optical communications wavelength region at λ~1.5 μm. Of these choices, λ~1.5 μm is much more eye safe, since the light is absorbed over much of the depth of the eye. Thus, the ANSI standard for maximum permissible exposure (MPE) for cw light at λ = 1.5 μm over long exposure times (i.e., 10 to 3×10³ seconds) is 9.6 mW, which is also the Class I laser classification limit (below which no special safety precautions need to be taken) for a λ = 1.5 μm cw point laser source. The ANSI MPE for the same long exposure time at λ = 900 nm is 0.61 mW, more than an order of magnitude less. Other approaches may be taken to mitigate the increased safety hazard at λ~900 nm, such as taking special precautions to regulate the beam divergence, exposure time, or viewing distance from such sources; however, these are in general more difficult and costly than simply using a Class I λ~1.5 μm source. One current drawback of using the λ~1.5 μm wavelength for LiDAR is that currently available detectors are InGaAs detectors, rather than the much more plentiful and inexpensive Si detectors usable below λ~1000 nm.
The signal-to-noise ratio expression for coherent LIDAR can be given as (Eq. 25)
Here ρ is the receiver responsivity, PS is the power incident on the sample, RS is the power reflectivity of the sample (including any non-idealities in the sample optical system light collection efficiency), e is the electronic charge, and B is the detection bandwidth (here approximated by the inverse of the acquisition time Δt).
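Evaluating Eq. 25 numerically, assuming the shot-noise-limited form SNR = ρ·PS·RS·Δt/e implied by the variables just defined (illustrative parameter values, not disclosed ones):

```python
import math

rho = 1.0      # receiver responsivity (A/W), typical of InGaAs near 1.5 µm
P_s = 9.6e-3   # power incident on the sample (W): the Class I limit quoted above
R_s = 1e-6     # sample power reflectivity, including collection losses
e = 1.602e-19  # electronic charge (C)
dt = 10e-9     # acquisition time (s), i.e., detection bandwidth B = 1/Δt = 100 MHz

snr = rho * P_s * R_s * dt / e
print(10 * math.log10(snr))  # ≈ 27.8 dB, ample for a single-bit range decision
```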
The simulations provided
This result indicates that, at least for measurement at λ~1.5 μm and assuming shot-noise-limited detection is achieved, a sample reflectivity even as low as 10⁻⁶ still allows sufficient SNR for reliable single-bit depth measurement with ns-scale acquisition time, or 10-bit multiplexed depth measurements at >100 MHz. It should be noted, however, that the limited numerical aperture (NA) available for LiDAR light collection means that detection inefficiencies can restrict certain operations.
Performance of the phase-resolved measurements required for coherent quadrature detection using either the phase-based or magnitude-based approaches described above also places limits on the required acquisition time due to potential sample (or reference) motion during acquisition. The phenomenon of phase washout requires that 2k0Δz remain small compared to 2π for the duration of the phase measurement (Eq. 27):
The duration of the acquisition time is in turn limited by the allowable velocity of the sample, such that (Eq. 28):
As can be seen in the last expression of Eq. 28, even for highway automobile velocities (100 miles/hour ≈ 40 m/s) and a Class I laser at λ0 = 1 μm, the required acquisition rate is 1/Δt = 80 MHz, which is well within the shot-noise acquisition time limits imposed by laser safety considerations as explained in the previous section. Currently available single-channel detectors can operate at sufficient speeds for capturing this information. For room-scale LiDAR with much more stable subjects (e.g., velocity ~1 mm/s), the requirement relaxes to 1/Δt = 20 kHz, which is definitely achievable using line-scan cameras (which can readily go up to 10× faster), and even fast area-scan cameras.
It should be understood that recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. For example, if a range is stated as 1% to 50%, it is intended that values such as 2% to 40%, 10% to 30%, or 1% to 3%, etc., are expressly enumerated in this specification. These are only examples of what is specifically intended, and all possible combinations of numerical values between and including the lowest value and the highest value enumerated are to be considered to be expressly stated in this disclosure.
The functional block diagrams, operational scenarios and sequences, and flow diagrams provided in the Figures are representative of exemplary systems, environments, and methodologies for performing novel aspects of the disclosure. While, for purposes of simplicity of explanation, methods included herein may be in the form of a functional diagram, operational scenario or sequence, or flow diagram, and may be described as a series of acts, it is to be understood and appreciated that the methods are not limited by the order of acts, as some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a method could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.
Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
This application claims the benefit of U.S. Provisional Application Ser. No. 63/213,413, filed Jun. 22, 2021.
Filing Document | Filing Date | Country | Kind
--- | --- | --- | ---
PCT/US2022/034597 | 6/22/2022 | WO |

Number | Date | Country
--- | --- | ---
63213413 | Jun 2021 | US