COMBINING DATA FROM DIFFERENT SAMPLE REGIONS IN AN IMAGING SYSTEM FIELD OF VIEW

Information

  • Patent Application
  • Publication Number
    20240085559
  • Date Filed
    September 14, 2022
  • Date Published
    March 14, 2024
Abstract
The imaging system includes one or more cores. Each of the cores outputs a system output signal that illuminates multiple sample regions in a field of view. A subject one of the cores includes a light combiner that generates a composite signal beating at a beat frequency. Electronics use a value of the beat frequency to calculate multiple different possible LIDAR data solutions for a subject one of the sample regions illuminated by the system output signal output from the subject core. Each of the possible LIDAR data solutions includes a comparative component that indicates a value of a radial velocity between the LIDAR system and an object in the subject sample region. The electronics identify a correct one of the LIDAR data solutions by comparing the LIDAR data solutions to data calculated for one or more reference sample regions selected from among the sample regions. The one or more reference sample regions are different from the subject sample region.
Description
FIELD

The invention relates to imaging systems. In particular, the invention relates to LIDAR systems.


BACKGROUND

There is an increasing commercial demand for 3D imaging systems that can be economically deployed in applications such as ADAS (Advanced Driver Assistance Systems) and AR (Augmented Reality). Imaging systems such as LIDAR (Light Detection and Ranging) are used to construct a 3D image of a target scene by illuminating the scene with laser light and measuring the returned signal.


Many LIDAR approaches transmit a system output signal. The system output signal is reflected by an object and a portion of the reflected light returns to the LIDAR chip as a LIDAR input signal. The LIDAR input signal is processed by electronics so as to determine the distance and radial velocity between the LIDAR system and the object. Because the LIDAR input signal is generally an analog signal but the processing is performed on digital signals, the LIDAR system often includes one or more Analog-to-Digital Converters (ADCs) for converting a light signal that includes light from the LIDAR input signal to a digital signal. The Analog-to-Digital Converters (ADCs) become a significant expense in the commercialization of LIDAR systems.


For the above reasons, there is a need for a LIDAR system having reduced costs and complexity.


SUMMARY

An imaging system includes one or more cores that each outputs a system output signal that illuminates multiple sample regions in a field of view. A reference one of the cores includes a light combiner configured to generate a composite signal beating at a beat frequency. The system also includes electronics configured to use a value of the beat frequency of the composite signal to calculate a magnitude of a radial velocity indicator for a reference one of the sample regions illuminated by the system output signal output from the reference core. The radial velocity indicator indicates a radial velocity between the LIDAR system and an object in the reference sample region. The electronics are configured to identify a direction of the radial velocity indicator by comparing the magnitude of the radial velocity indicator to data calculated for a subject one of the sample regions. The reference sample region is different from the subject sample region.


Another embodiment of an imaging system includes one or more cores. Each of the cores outputs a system output signal that illuminates multiple sample regions in a field of view. A subject one of the cores includes a light combiner that generates a composite signal beating at a beat frequency. Electronics use a value of the beat frequency to calculate multiple different possible LIDAR data solutions for a subject one of the sample regions illuminated by the system output signal output from the subject core. Each of the possible LIDAR data solutions includes a comparative component that indicates a value of a radial velocity between the LIDAR system and an object in the subject sample region. The electronics identify a correct one of the LIDAR data solutions by comparing the LIDAR data solutions to data calculated for one or more reference sample regions selected from among the sample regions. The one or more reference sample regions are different from the subject sample region.


A method of operating an imaging system includes illuminating multiple sample regions in a field of view with system output signals output from different cores. The method also includes combining light signals so as to generate a composite signal beating at a beat frequency. The method also includes using the value of the beat frequency of the composite signal to calculate a magnitude of a radial velocity indicator for a reference one of the sample regions illuminated by the system output signal output from the reference core. The radial velocity indicator indicates a radial velocity between the LIDAR system and an object in the reference sample region. The method also includes identifying a direction of the radial velocity indicator by comparing the magnitude of the radial velocity indicator to data calculated for a subject one of the sample regions. The reference sample region is different from the subject sample region.


Another method of operating an imaging system includes illuminating multiple sample regions in a field of view with system output signals output from different cores. The method also includes combining light signals so as to generate a composite signal beating at a beat frequency. The method also includes using the value of the beat frequency to calculate multiple different possible LIDAR data solutions for a subject one of the sample regions. Each of the possible LIDAR data solutions includes a comparative component that indicates a value of a radial velocity between the LIDAR system and an object in the subject sample region. The method further includes identifying a correct one of the LIDAR data solutions by comparing the LIDAR data solutions to data calculated for one or more reference sample regions selected from among the sample regions. The one or more reference sample regions are different from the subject sample region.
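The selection step described above can be sketched in a few lines. This is an illustrative sketch only: the function name, the two-element (range, velocity) tuples, and the use of a mean over the reference regions are assumptions for illustration, not details taken from the application.

```python
# Hypothetical sketch: identify the correct LIDAR data solution by
# comparing each candidate's radial-velocity (comparative) component
# against radial velocities already calculated for reference sample
# regions elsewhere in the field of view.
def pick_solution(candidates, reference_velocities):
    """Return the (range_m, velocity_mps) candidate whose velocity is
    closest to the mean velocity of the reference sample regions."""
    ref_mean = sum(reference_velocities) / len(reference_velocities)
    return min(candidates, key=lambda sol: abs(sol[1] - ref_mean))

# Two possible solutions from a real transform: same beat-frequency
# magnitude, opposite velocity signs (and different ranges).
candidates = [(12.0, +3.1), (9.5, -3.1)]
# Neighboring reference regions on the same object report about -3 m/s.
refs = [-2.9, -3.2, -3.0]
print(pick_solution(candidates, refs))  # -> (9.5, -3.1)
```

The comparison could equally use range components or more than one statistic; the point is only that data from other sample regions resolves the ambiguity.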





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1A is a topview of a schematic of a LIDAR system that includes or consists of a LIDAR chip that outputs a LIDAR output signal and receives a LIDAR input signal on a common waveguide.



FIG. 1B is a topview of a schematic of a LIDAR system that includes or consists of a LIDAR chip that outputs a LIDAR output signal and receives a LIDAR input signal on different waveguides.



FIG. 1C is a topview of a schematic of another embodiment of a LIDAR system that includes or consists of a LIDAR chip that outputs a LIDAR output signal and receives multiple LIDAR input signals on different waveguides.



FIG. 2 is a topview of an example of a LIDAR adapter that is suitable for use with the LIDAR chip of FIG. 1B.



FIG. 3 is a topview of an example of a LIDAR adapter that is suitable for use with the LIDAR chip of FIG. 1C.



FIG. 4 is a topview of an example of a LIDAR system that includes the LIDAR chip of FIG. 1A and the LIDAR adapter of FIG. 2 on a common support.



FIG. 5A illustrates an example of a signal processor suitable for use with the LIDAR systems.



FIG. 5B provides a schematic of electronics that are suitable for use with a signal processor constructed according to FIG. 5A.



FIG. 5C is a graph of frequency versus time for a system output signal.



FIG. 6A illustrates a LIDAR system that includes multiple different cores on a common support.



FIG. 6B illustrates a relationship between data periods disclosed in FIG. 5C and the field of view for the LIDAR system.



FIG. 6C illustrates objects in the field of view disclosed in the context of FIG. 6B.



FIG. 6D illustrates multiple different field locations in the sample regions illuminated by a range and velocity core.



FIG. 7 is a two-dimensional illustration of a field of view for a LIDAR system.



FIG. 8 illustrates common electronics configured to receive preliminary LIDAR data generated by different cores.



FIG. 9 shows a subject sample region and potential sample regions in the field of view from FIG. 7.



FIG. 10 is a process flow for generating range data for sample regions illuminated by a velocity core in a LIDAR system that includes range and velocity cores.



FIG. 11A is a topview of a schematic of a LIDAR core suitable for use as a velocity core.



FIG. 11B is an example of a signal processor suitable for use with the LIDAR core of FIG. 11A.



FIG. 11C provides a schematic of electronics that are suitable for use with a signal processor constructed according to FIG. 11B.



FIG. 12A illustrates an example of a process flow that the electronics can use to identify a correct LIDAR data solution from among multiple possible LIDAR data solutions.



FIG. 12B illustrates a LIDAR system that includes multiple different real data signal cores on a common support.



FIG. 12C is a two-dimensional illustration of the field of view for the LIDAR system of FIG. 12B.



FIG. 12D illustrates a subject RV sample region and a reference V sample region on the field of view from FIG. 12C.



FIG. 12E illustrates a process flow for generating LIDAR data for sample regions illuminated by a LIDAR system having real data signal cores.



FIG. 13A illustrates an example of a process flow that the electronics can use to identify a radial velocity for sample regions SRk . . .



FIG. 13B illustrates a LIDAR system that includes multiple different real data signal cores on a common support.



FIG. 13C is a two-dimensional illustration of the field of view for the LIDAR system of FIG. 13B.



FIG. 13D illustrates a subject RV sample region and a reference V sample region on the field of view from FIG. 13C.



FIG. 13E illustrates an example of a process flow that the electronics can use to identify a correct LIDAR data solution for a subject RV sample region and to determine the correct radial velocity for a reference V sample region.



FIG. 13F illustrates a process flow for generating LIDAR data for sample regions illuminated by a LIDAR system having real data signal cores.



FIG. 13G illustrates a portion of the field of view from FIG. 12C or FIG. 13C where the sample regions illuminated by different cores partially overlap.



FIG. 14 is a cross section of a portion of a silicon-on-insulator wafer that includes a waveguide.





DESCRIPTION

LIDAR systems typically illuminate multiple different sample regions in a field of view. The systems generate LIDAR data for each of the sample regions. The LIDAR data can indicate the radial velocity and/or distance between the LIDAR system and any object(s) located in each of the sample regions. These systems frequently apply a mathematical transform such as a Fourier transform to a beating signal so as to identify the beat frequency of the beating signal. When a real transform is used, the transform can output multiple different frequencies. In many circumstances, it is not clear which of these frequencies is the correct frequency. As a result, multiple solutions for a sample region's LIDAR data are often possible. In order to identify which of the solutions is correct for a sample region, the LIDAR system compares the possible LIDAR data solutions for the sample region with data from other sample regions. As a result, the LIDAR system can use real Fourier transforms rather than complex Fourier transforms. Real Fourier transforms typically require fewer Analog-to-Digital Converters (ADCs) than are required by complex Fourier transforms. As a result, the ability to use real Fourier transforms reduces the costs and complexity of the LIDAR system.
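The ambiguity of a real transform can be seen directly: real-valued beat signals at +f and at −f are sample-for-sample identical, so a real Fourier transform yields a magnitude peak whose sign cannot be determined from that sample region alone. A minimal numerical sketch, with an arbitrary illustrative sample rate and beat frequency:

```python
import numpy as np

fs = 1000.0               # sample rate, Hz (illustrative)
t = np.arange(200) / fs
f_beat = 50.0             # beat frequency magnitude, Hz (illustrative)

# cos is an even function, so beats at +50 Hz and -50 Hz digitize to
# identical real-valued samples -- the sign of the beat is lost.
s_pos = np.cos(2 * np.pi * f_beat * t)
s_neg = np.cos(2 * np.pi * -f_beat * t)
assert np.allclose(s_pos, s_neg)

# A real FFT recovers the magnitude of the beat frequency, but two
# LIDAR data solutions (one per sign) remain possible.
spectrum = np.abs(np.fft.rfft(s_pos))
bins = np.fft.rfftfreq(len(t), 1 / fs)
print(bins[np.argmax(spectrum)])  # 50.0 -- true beat could be +/-50 Hz
```

A complex transform would resolve the sign directly, but requires in-phase and quadrature channels and therefore roughly twice the ADC hardware, which is the cost the comparison-based approach avoids.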



FIG. 1A is a topview of a schematic of a LIDAR chip that can serve as a LIDAR core or can be included in an imaging system that includes components in addition to the LIDAR chip. The LIDAR chip can include a Photonic Integrated Circuit (PIC) and can be a Photonic Integrated Circuit chip. The LIDAR chip includes a light source 4 that outputs a preliminary outgoing LIDAR signal. A suitable light source 4 includes, but is not limited to, semiconductor lasers such as External Cavity Lasers (ECLs), Distributed Feedback lasers (DFBs), Discrete Mode (DM) lasers and Distributed Bragg Reflector lasers (DBRs).


The LIDAR chip includes a utility waveguide 12 that receives an outgoing LIDAR signal from a light source 4. The utility waveguide 12 terminates at a facet 14 and carries the outgoing LIDAR signal to the facet 14. The facet 14 can be positioned such that the outgoing LIDAR signal traveling through the facet 14 exits the LIDAR chip and serves as a LIDAR output signal. For instance, the facet 14 can be positioned at an edge of the chip so the outgoing LIDAR signal traveling through the facet 14 exits the chip and serves as the LIDAR output signal. In some instances, the portion of the LIDAR output signal that has exited from the LIDAR chip can also be considered a system output signal. As an example, when the exit of the LIDAR output signal from the LIDAR chip is also an exit of the LIDAR output signal from the LIDAR system, the LIDAR output signal can also be considered a system output signal.


The LIDAR output signal travels away from the LIDAR system through free space in the environment and/or atmosphere in which the LIDAR system is positioned. The LIDAR output signal may be reflected by one or more objects in the path of the LIDAR output signal. When the LIDAR output signal is reflected, at least a portion of the reflected light travels back toward the LIDAR chip as a LIDAR input signal. In some instances, the LIDAR input signal can also be considered a system return signal. As an example, when the exit of the LIDAR output signal from the LIDAR chip is also an exit of the LIDAR output signal from the LIDAR core, the LIDAR input signal can also be considered a system return signal.


The LIDAR input signals can enter the utility waveguide 12 through the facet 14. The portion of the LIDAR input signal that enters the utility waveguide 12 serves as an incoming LIDAR signal. The utility waveguide 12 carries the incoming LIDAR signal to a splitter 16 that moves a portion of the incoming LIDAR signal from the utility waveguide 12 onto a comparative waveguide 18 as a comparative signal. The comparative waveguide 18 carries the comparative signal to a signal processor 22 for further processing. Although FIG. 1A illustrates a directional coupler operating as the splitter 16, other signal tapping components can be used as the splitter 16. Suitable splitters 16 include, but are not limited to, directional couplers, optical couplers, y-junctions, tapered couplers, and Multi-Mode Interference (MMI) devices.


The utility waveguide 12 also carries the outgoing LIDAR signal to the splitter 16. The splitter 16 moves a portion of the outgoing LIDAR signal from the utility waveguide 12 onto a reference waveguide 20 as a reference signal. The reference waveguide 20 carries the reference signal to the signal processor 22 for further processing.


The percentage of light transferred from the utility waveguide 12 by the splitter 16 can be fixed or substantially fixed. For instance, the splitter 16 can be configured such that the power of the reference signal transferred to the reference waveguide 20 is an outgoing percentage of the power of the outgoing LIDAR signal or such that the power of the comparative signal transferred to the comparative waveguide 18 is an incoming percentage of the power of the incoming LIDAR signal. In many splitters 16, such as directional couplers and multimode interferometers (MMIs), the outgoing percentage is equal or substantially equal to the incoming percentage. In some instances, the outgoing percentage is greater than 30%, 40%, or 49% and/or less than 51%, 60%, or 70% and/or the incoming percentage is greater than 30%, 40%, or 49% and/or less than 51%, 60%, or 70%. A splitter 16 such as a multimode interferometer (MMI) generally provides an outgoing percentage and an incoming percentage of 50% or about 50%. However, multimode interferometers (MMIs) can be easier to fabricate in platforms such as silicon-on-insulator platforms than some alternatives. In one example, the splitter 16 is a multimode interferometer (MMI) and the outgoing percentage and the incoming percentage are 50% or substantially 50%. As will be described in more detail below, the signal processor 22 combines the comparative signal with the reference signal to form a composite signal that carries LIDAR data for a sample region on the field of view. Accordingly, the composite signal can be processed so as to extract LIDAR data (radial velocity and/or distance between a LIDAR core and an object external to the LIDAR core) for the sample region.
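For a frequency-swept system output signal, the extraction of LIDAR data from composite-signal beat frequencies can be illustrated with the standard up-chirp/down-chirp relations. The chirp rate, wavelength, and sign convention below are illustrative assumptions for the sketch, not values from the application:

```python
C = 3.0e8              # speed of light, m/s
WAVELENGTH = 1.55e-6   # assumed operating wavelength, m
CHIRP_RATE = 1.0e14    # assumed frequency sweep rate, Hz/s

def beat_frequencies(distance, velocity):
    """Forward model: composite-signal beat frequencies during the
    up-chirp and down-chirp data periods (one common sign convention)."""
    f_range = 2 * distance * CHIRP_RATE / C   # round-trip delay term
    f_doppler = 2 * velocity / WAVELENGTH     # Doppler term
    return f_range - f_doppler, f_range + f_doppler

def lidar_data(f_up, f_down):
    """Invert the two measured beat frequencies into distance and
    radial velocity."""
    f_range = (f_up + f_down) / 2
    f_doppler = (f_down - f_up) / 2
    return C * f_range / (2 * CHIRP_RATE), WAVELENGTH * f_doppler / 2

# Round trip: an object 30 m away closing at 2 m/s.
d, v = lidar_data(*beat_frequencies(30.0, 2.0))
print(round(d, 9), round(v, 9))  # 30.0 2.0
```

With complex transforms, both beat frequencies carry a sign; with real transforms only their magnitudes are known, which is what creates the multiple candidate solutions discussed elsewhere in this application.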


The LIDAR chip can include a control branch for controlling operation of the light source 4. The control branch includes a splitter 26 that moves a portion of the outgoing LIDAR signal from the utility waveguide 12 onto a control waveguide 28. The coupled portion of the outgoing LIDAR signal serves as a tapped signal. Although FIG. 1A illustrates a directional coupler operating as the splitter 26, other signal tapping components can be used as the splitter 26. Suitable splitters 26 include, but are not limited to, directional couplers, optical couplers, y-junctions, tapered couplers, and Multi-Mode Interference (MMI) devices.


The control waveguide 28 carries the tapped signal to control components 30. The control components can be in electrical communication with local electronics 32. All or a portion of the control components can be included in the local electronics 32. During operation, the electronics can employ output from the control components 30 in a control loop configured to control a process variable of one, two, or three loop controlled light signals selected from the group consisting of the tapped signal, the system output signal, and the outgoing LIDAR signal. Examples of suitable process variables include the frequency of the loop controlled light signal and/or the phase of the loop controlled light signal.


The LIDAR core can be modified so the incoming LIDAR signal and the outgoing LIDAR signal can be carried on different waveguides. For instance, FIG. 1B is a topview of the LIDAR chip of FIG. 1A modified such that the incoming LIDAR signal and the outgoing LIDAR signal are carried on different waveguides. The outgoing LIDAR signal exits the LIDAR chip through the facet 14 and serves as the LIDAR output signal. When light from the LIDAR output signal is reflected by an object external to the LIDAR core, at least a portion of the reflected light returns to the LIDAR chip as a first LIDAR input signal. The first LIDAR input signal enters the comparative waveguide 18 through a facet 35 and serves as the comparative signal. The comparative waveguide 18 carries the comparative signal to a signal processor 22 for further processing. As described in the context of FIG. 1A, the reference waveguide 20 carries the reference signal to the signal processor 22 for further processing. As will be described in more detail below, the signal processor 22 combines the comparative signal with the reference signal to form a composite signal that carries LIDAR data for a sample region on the field of view.


The LIDAR chips can be modified to receive multiple LIDAR input signals. For instance, FIG. 1C illustrates the LIDAR chip of FIG. 1B modified to receive two LIDAR input signals. A splitter 40 is configured to place a portion of the reference signal carried on the reference waveguide 20 on a first reference waveguide 42 and another portion of the reference signal on a second reference waveguide 44. Accordingly, the first reference waveguide 42 carries a first reference signal and the second reference waveguide 44 carries a second reference signal. The first reference waveguide 42 carries the first reference signal to a first signal processor 46 and the second reference waveguide 44 carries the second reference signal to a second signal processor 48. Examples of suitable splitters 40 include, but are not limited to, y-junctions, optical couplers, and multi-mode interference couplers (MMIs).


The outgoing LIDAR signal exits the LIDAR chip through the facet 14 and serves as the LIDAR output signal. When light from the LIDAR output signal is reflected by one or more objects located external to the LIDAR core, at least a portion of the reflected light returns to the LIDAR chip as a first LIDAR input signal. The first LIDAR input signal enters the comparative waveguide 18 through the facet 35 and serves as a first comparative signal. The comparative waveguide 18 carries the first comparative signal to the first signal processor 46 for further processing.


Additionally, when light from the LIDAR output signal is reflected by one or more objects located external to the LIDAR core, at least a portion of the reflected signal returns to the LIDAR chip as a second LIDAR input signal. The second LIDAR input signal enters a second comparative waveguide 50 through a facet 52 and serves as a second comparative signal carried by the second comparative waveguide 50. The second comparative waveguide 50 carries the second comparative signal to the second signal processor 48 for further processing.


Although the light source 4 is shown as being positioned on the LIDAR chip, the light source 4 can be located off the LIDAR chip. For instance, the utility waveguide 12 can terminate at a second facet through which the outgoing LIDAR signal can enter the utility waveguide 12 from a light source 4 located off the LIDAR chip.


In some instances, a LIDAR chip constructed according to FIG. 1B or FIG. 1C is used in conjunction with a LIDAR adapter. In some instances, the LIDAR adapter can be physically and/or optically positioned between the LIDAR chip and the one or more reflecting objects and/or the field of view in that an optical path that the first LIDAR input signal(s) and/or the LIDAR output signal travels from the LIDAR chip to the field of view passes through the LIDAR adapter. Additionally, the LIDAR adapter can be configured to operate on the first LIDAR input signal and the LIDAR output signal such that the first LIDAR input signal and the LIDAR output signal travel on different optical pathways between the LIDAR adapter and the LIDAR chip but on the same optical pathway between the LIDAR adapter and a reflecting object in the field of view.


An example of a LIDAR adapter that is suitable for use with the LIDAR chip of FIG. 1B is illustrated in FIG. 2. The LIDAR adapter includes multiple components positioned on a base. For instance, the LIDAR adapter includes a circulator 100 positioned on a base 102. The illustrated optical circulator 100 includes three ports and is configured such that light entering one port exits from the next port. For instance, the illustrated optical circulator includes a first port 104, a second port 106, and a third port 108. The LIDAR output signal enters the first port 104 from the utility waveguide 12 of the LIDAR chip and exits from the second port 106.


The LIDAR adapter can be configured such that the output of the LIDAR output signal from the second port 106 can also serve as the output of the LIDAR output signal from the LIDAR adapter and accordingly from the LIDAR core. As a result, the LIDAR output signal can be output from the LIDAR adapter such that the LIDAR output signal is traveling toward a sample region in the field of view. Accordingly, in some instances, the portion of the LIDAR output signal that has exited from the LIDAR adapter can also be considered the system output signal. As an example, when the exit of the LIDAR output signal from the LIDAR adapter is also an exit of the LIDAR output signal from the LIDAR core, the LIDAR output signal can also be considered a system output signal.


The LIDAR output signal output from the LIDAR adapter includes, consists of, or consists essentially of light from the LIDAR output signal received from the LIDAR chip. Accordingly, the LIDAR output signal output from the LIDAR adapter may be the same or substantially the same as the LIDAR output signal received from the LIDAR chip. However, there may be differences between the LIDAR output signal output from the LIDAR adapter and the LIDAR output signal received from the LIDAR chip. For instance, the LIDAR output signal can experience optical loss as it travels through the LIDAR adapter and/or the LIDAR adapter can optionally include an amplifier configured to amplify the LIDAR output signal as it travels through the LIDAR adapter.


When one or more objects in the sample region reflect the LIDAR output signal, at least a portion of the reflected light travels back to the circulator 100 as a system return signal. The system return signal enters the circulator 100 through the second port 106. FIG. 2 illustrates the LIDAR output signal and the system return signal traveling between the LIDAR adapter and the sample region along the same optical path.


The system return signal exits the circulator 100 through the third port 108 and is directed to the comparative waveguide 18 on the LIDAR chip. Accordingly, all or a portion of the system return signal can serve as the first LIDAR input signal and the first LIDAR input signal includes or consists of light from the system return signal. Accordingly, the LIDAR output signal and the first LIDAR input signal travel between the LIDAR adapter and the LIDAR chip along different optical paths.


As is evident from FIG. 2, the LIDAR adapter can include optical components in addition to the circulator 100. For instance, the LIDAR adapter can include components for directing and controlling the optical path of the LIDAR output signal and the system return signal. As an example, the adapter of FIG. 2 includes an optional amplifier 110 positioned so as to receive and amplify the LIDAR output signal before the LIDAR output signal enters the circulator 100. The amplifier 110 can be operated by the local electronics 32 allowing the local electronics 32 to control the power of the LIDAR output signal.



FIG. 2 also illustrates the LIDAR adapter including an optional first lens 112 and an optional second lens 114. The first lens 112 can be configured to couple the LIDAR output signal to a desired location. In some instances, the first lens 112 is configured to focus or collimate the LIDAR output signal at a desired location. In one example, the first lens 112 is configured to couple the LIDAR output signal onto the first port 104 when the LIDAR adapter does not include an amplifier 110. As another example, when the LIDAR adapter includes an amplifier 110, the first lens 112 can be configured to couple the LIDAR output signal onto the entry port to the amplifier 110. The second lens 114 can be configured to couple the LIDAR output signal at a desired location. In some instances, the second lens 114 is configured to focus or collimate the LIDAR output signal at a desired location. For instance, the second lens 114 can be configured to couple the LIDAR output signal onto the facet 35 of the comparative waveguide 18.


The LIDAR adapter can also include one or more direction changing components such as mirrors. FIG. 2 illustrates the LIDAR adapter including a mirror as a direction-changing component 116 that redirects the system return signal from the circulator 100 to the facet 35 of the comparative waveguide 18.


The LIDAR chips include one or more waveguides that constrain the optical path of one or more light signals. While the LIDAR adapter can include waveguides, the optical path that the system return signal and the LIDAR output signal travel between components on the LIDAR adapter and/or between the LIDAR chip and a component on the LIDAR adapter can be free space. For instance, the system return signal and/or the LIDAR output signal can travel through the environment and/or atmosphere in which the LIDAR chip, the LIDAR adapter, and/or the base 102 is positioned when traveling between the different components on the LIDAR adapter and/or between a component on the LIDAR adapter and the LIDAR chip. As a result, optical components such as lenses and direction changing components can be employed to control the characteristics of the optical path traveled by the system return signal and the LIDAR output signal on, to, and from the LIDAR adapter.


Suitable bases 102 for the LIDAR adapter include, but are not limited to, substrates, platforms, and plates. Suitable substrates include, but are not limited to, glass, silicon, and ceramics. The components can be discrete components that are attached to the substrate. Suitable techniques for attaching discrete components to the base 102 include, but are not limited to, epoxy, solder, and mechanical clamping. In one example, one or more of the components are integrated components and the remaining components are discrete components. In another example, the LIDAR adapter includes one or more integrated amplifiers and the remaining components are discrete components.


The LIDAR core can be configured to compensate for polarization. Light from a laser source is typically linearly polarized and hence the LIDAR output signal is also typically linearly polarized. Reflection from an object may change the angle of polarization of the returned light. Accordingly, the system return signal can include light of different linear polarization states. For instance, a first portion of a system return signal can include light of a first linear polarization state and a second portion of a system return signal can include light of a second linear polarization state. The intensity of the resulting composite signals is proportional to the square of the cosine of the angle between the comparative and reference signal polarization fields. If the angle is 90°, the LIDAR data can be lost in the resulting composite signal. However, the LIDAR core can be modified to compensate for changes in polarization state of the LIDAR output signal.
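The cosine-squared dependence described above can be restated numerically. This small sketch only re-expresses the relation stated in the paragraph, with hypothetical angle values for illustration:

```python
import math

def mixing_efficiency(angle_deg):
    """Relative composite-signal intensity for a given angle between
    the comparative and reference polarization fields (cos^2 law)."""
    return math.cos(math.radians(angle_deg)) ** 2

print(mixing_efficiency(0.0))               # 1.0 -- fields aligned
print(round(mixing_efficiency(60.0), 6))    # 0.25
print(round(mixing_efficiency(90.0), 12))   # 0.0 -- LIDAR data lost
```

The 90° case is why the polarization-compensating arrangement of FIG. 3 splits the return into two polarization components and rotates one, so that at least one comparative signal mixes efficiently with the reference signal.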



FIG. 3 illustrates the LIDAR adapter of FIG. 2 modified such that the LIDAR adapter is suitable for use with the LIDAR chip of FIG. 1C. The LIDAR adapter includes a beamsplitter 120 that receives the system return signal from the circulator 100. The beamsplitter 120 splits the system return signal into a first portion of the system return signal and a second portion of the system return signal. Suitable beamsplitters include, but are not limited to, Wollaston prisms and MEMS-based beamsplitters.


The first portion of the system return signal is directed to the comparative waveguide 18 on the LIDAR chip and serves as the first LIDAR input signal described in the context of FIG. 1C. The second portion of the system return signal is directed to a polarization rotator 122. The polarization rotator 122 outputs a second LIDAR input signal that is directed to the second comparative waveguide 50 on the LIDAR chip and serves as the second LIDAR input signal.


The beamsplitter 120 can be a polarizing beamsplitter. One example of a polarizing beamsplitter is constructed such that the first portion of the system return signal has a first polarization state but does not have or does not substantially have a second polarization state and the second portion of the system return signal has the second polarization state but does not have or does not substantially have the first polarization state. The first polarization state and the second polarization state can be linear polarization states, with the second polarization state different from the first polarization state. For instance, the first polarization state can be TE and the second polarization state can be TM, or the first polarization state can be TM and the second polarization state can be TE. In some instances, the laser source can be linearly polarized such that the LIDAR output signal has the first polarization state. Suitable polarizing beamsplitters include, but are not limited to, Wollaston prisms and MEMS-based polarizing beamsplitters.


A polarization rotator can be configured to change the polarization state of the first portion of the system return signal and/or the second portion of the system return signal. For instance, the polarization rotator 122 shown in FIG. 3 can be configured to change the polarization state of the second portion of the system return signal from the second polarization state to the first polarization state. As a result, the second LIDAR input signal has the first polarization state but does not have or does not substantially have the second polarization state. Accordingly, the first LIDAR input signal and the second LIDAR input signal each have the same polarization state (the first polarization state in this example). Despite carrying light of the same polarization state, the first LIDAR input signal and the second LIDAR input signal are associated with different polarization states as a result of the use of the polarizing beamsplitter. For instance, the first LIDAR input signal carries the light reflected with the first polarization state and the second LIDAR input signal carries the light reflected with the second polarization state. As a result, the first LIDAR input signal is associated with the first polarization state and the second LIDAR input signal is associated with the second polarization state.


Since the first LIDAR input signal and the second LIDAR input signal carry light of the same polarization state, the comparative signals that result from the first LIDAR input signal have the same polarization angle as the comparative signals that result from the second LIDAR input signal.


Suitable polarization rotators include, but are not limited to, rotated polarization-maintaining fibers, Faraday rotators, half-wave plates, MEMS-based polarization rotators, and integrated optical polarization rotators using asymmetric y-branches, Mach-Zehnder interferometers, and multi-mode interference couplers.


Since the outgoing LIDAR signal is linearly polarized, the first reference signals can have the same linear polarization state as the second reference signals. Additionally, the components on the LIDAR adapter can be selected such that the first reference signals, the second reference signals, the first comparative signals, and the second comparative signals each have the same polarization state. In the example disclosed in the context of FIG. 3, the first comparative signals, the second comparative signals, the first reference signals, and the second reference signals can each have light of the first polarization state.


As a result of the above configuration, first composite signals generated by the first signal processor 46 and second composite signals generated by the second signal processor 48 each result from combining a reference signal and a comparative signal of the same polarization state and will accordingly provide the desired beating between the reference signal and the comparative signal. For instance, the first composite signal results from combining a first reference signal and a first comparative signal of the first polarization state and excludes or substantially excludes light of the second polarization state, or the first composite signal results from combining a first reference signal and a first comparative signal of the second polarization state and excludes or substantially excludes light of the first polarization state. Similarly, the second composite signal results from combining a second reference signal and a second comparative signal of the same polarization state and will accordingly provide the desired beating between the reference signal and the comparative signal. For instance, the second composite signal results from combining a second reference signal and a second comparative signal of the first polarization state and excludes or substantially excludes light of the second polarization state, or the second composite signal results from combining a second reference signal and a second comparative signal of the second polarization state and excludes or substantially excludes light of the first polarization state.


The above configuration results in the LIDAR data for a single sample region in the field of view being generated from multiple different composite signals (i.e. the first composite signals and the second composite signals) from the sample region. In some instances, determining the LIDAR data for the sample region includes the electronics combining the LIDAR data from the different composite signals. Combining the LIDAR data can include taking an average, median, or mode of the LIDAR data generated from the different composite signals. For instance, the electronics can average the distance between the LIDAR core and the reflecting object determined from the first composite signal with the distance determined from the second composite signal and/or the electronics can average the radial velocity between the LIDAR core and the reflecting object determined from the first composite signal with the radial velocity determined from the second composite signal.
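The combining step described above can be sketched as follows. This Python sketch is illustrative only (the function and variable names are hypothetical, not from the disclosure), and assumes each composite signal yields a (distance, radial velocity) pair for the sample region:

```python
# Illustrative sketch only; names are hypothetical, not from the disclosure.
# Each entry pairs the distance and radial velocity recovered from one
# composite signal for the same sample region.
from statistics import mean, median

def combine_lidar_data(solutions, method="mean"):
    """Combine (distance, radial_velocity) pairs from different composite
    signals by taking their mean or median."""
    reducer = mean if method == "mean" else median
    distances = [distance for distance, _ in solutions]
    velocities = [velocity for _, velocity in solutions]
    return reducer(distances), reducer(velocities)

# LIDAR data from the first and second composite signals:
combined = combine_lidar_data([(10.2, 3.1), (10.4, 2.9)])
```

The median option can be preferable when one composite signal produces an outlier.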


In some instances, determining the LIDAR data for a sample region includes the electronics identifying one or more composite signals (i.e. the composite signal and/or the second composite signal) as the source of the LIDAR data that most closely represents reality (the representative LIDAR data). The electronics can then use the LIDAR data from the identified composite signal as the representative LIDAR data to be used for additional processing. For instance, the electronics can identify the signal (the composite signal or the second composite signal) with the larger amplitude as having the representative LIDAR data and can use the LIDAR data from the identified signal for further processing by the LIDAR core. In some instances, the electronics combine identifying the composite signal with the representative LIDAR data with combining LIDAR data from different LIDAR signals. For instance, the electronics can identify each of the composite signals with an amplitude above an amplitude threshold as having representative LIDAR data and, when more than one composite signal is identified as having representative LIDAR data, the electronics can combine the LIDAR data from each of the identified composite signals. When one composite signal is identified as having representative LIDAR data, the electronics can use the LIDAR data from that composite signal as the representative LIDAR data. When none of the composite signals is identified as having representative LIDAR data, the electronics can discard the LIDAR data for the sample region associated with those composite signals.
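The amplitude-threshold selection logic described above can be sketched in Python; the threshold value, names, and data layout below are illustrative assumptions rather than details from the disclosure:

```python
# Illustrative sketch only; the threshold and signal layout are assumptions.
from statistics import mean

def representative_lidar_data(composites, threshold):
    """composites: (amplitude, (distance, radial_velocity)) per composite
    signal for one sample region. Returns the representative LIDAR data,
    or None when no composite signal clears the amplitude threshold."""
    selected = [data for amplitude, data in composites if amplitude > threshold]
    if not selected:
        return None                 # discard the sample region's data
    if len(selected) == 1:
        return selected[0]          # single representative composite signal
    # several representative composite signals: combine by averaging
    return tuple(mean(values) for values in zip(*selected))

result = representative_lidar_data(
    [(0.8, (10.2, 3.1)), (0.1, (55.0, -9.0))], threshold=0.5)
```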


Although FIG. 3 is described in the context of components being arranged such that the first comparative signals, the second comparative signals, the first reference signals, and the second reference signals each have the first polarization state, other configurations of the components in FIG. 3 can be arranged such that the first composite signals result from combining a reference signal and a comparative signal of the same linear polarization state and the second composite signals result from combining a reference signal and a comparative signal of the same linear polarization state. For instance, the beamsplitter 120 can be constructed such that the second portion of the system return signal has the first polarization state and the first portion of the system return signal has the second polarization state, the polarization rotator receives the first portion of the system return signal, and the outgoing LIDAR signal can have the second polarization state. In this example, the first LIDAR input signal and the second LIDAR input signal each has the second polarization state.


The above system configurations result in the first portion of the system return signal and the second portion of the system return signal being directed into different composite signals. As a result, since the first portion of the system return signal and the second portion of the system return signal are each associated with a different polarization state but the electronics can process each of the composite signals, the LIDAR core compensates for changes in the polarization state of the LIDAR output signal in response to reflection of the LIDAR output signal.


The LIDAR adapter of FIG. 3 can include additional optical components including passive optical components. For instance, the LIDAR adapter can include an optional third lens 126. The third lens 126 can be configured to couple the second LIDAR input signal at a desired location. In some instances, the third lens 126 focuses or collimates the second LIDAR input signal at a desired location. For instance, the third lens 126 can be configured to focus or collimate the second LIDAR input signal on the facet 52 of the second comparative waveguide 50. The LIDAR adapter also includes one or more direction changing components 124 such as mirrors and prisms. FIG. 3 illustrates the LIDAR adapter including a mirror as a direction changing component 124 that redirects the second portion of the system return signal from the circulator 100 to the facet 52 of the second comparative waveguide 50 and/or to the third lens 126.


When the LIDAR core includes a LIDAR chip and a LIDAR adapter, the LIDAR chip, electronics, and the LIDAR adapter can be positioned on a common mount. Suitable common mounts include, but are not limited to, glass plates, metal plates, silicon plates and ceramic plates. As an example, FIG. 4 is a topview of a LIDAR core that includes the LIDAR chip and local electronics 32 of FIG. 1A and the LIDAR adapter of FIG. 2 on a common support 140. Although the local electronics 32 are illustrated as being located on the common support, all or a portion of the electronics can be located off the common support. When the light source 4 is located off the LIDAR chip, the light source can be located on the common support 140 or off of the common support 140. Suitable approaches for mounting the LIDAR chip, electronics, and/or the LIDAR adapter on the common support include, but are not limited to, epoxy, solder, and mechanical clamping.



FIG. 5A through FIG. 5C illustrate an example of a suitable signal processor for use as all or a fraction of the signal processors selected from the group consisting of the signal processor 22, the first signal processor 46 and the second signal processor 48. The signal processor receives a comparative signal from a comparative waveguide 196 and a reference signal from a reference waveguide 198. The comparative waveguide 18 and the reference waveguide 20 shown in FIG. 1A and FIG. 1B can serve as the comparative waveguide 196 and the reference waveguide 198, the comparative waveguide 18 and the first reference waveguide 42 shown in FIG. 1C can serve as the comparative waveguide 196 and the reference waveguide 198, or the second comparative waveguide 50 and the second reference waveguide 44 shown in FIG. 1C can serve as the comparative waveguide 196 and the reference waveguide 198.


The signal processor includes a second splitter 200 that divides the comparative signal carried on the comparative waveguide 196 onto a first comparative waveguide 204 and a second comparative waveguide 206. The first comparative waveguide 204 carries a first portion of the comparative signal to the first signal combiner 211. The second comparative waveguide 206 carries a second portion of the comparative signal to the second signal combiner 212.


The signal processor includes a first splitter 202 that divides the reference signal carried on the reference waveguide 198 onto a first reference waveguide 210 and a second reference waveguide 208. The first reference waveguide 210 carries a first portion of the reference signal to the first signal combiner 211. The second reference waveguide 208 carries a second portion of the reference signal to the second signal combiner 212.


The second signal combiner 212 combines the second portion of the comparative signal and the second portion of the reference signal into a second composite signal. Due to the difference in frequencies between the second portion of the comparative signal and the second portion of the reference signal, the second composite signal is beating between the second portion of the comparative signal and the second portion of the reference signal.


The second signal combiner 212 also splits the resulting second composite signal onto a first auxiliary detector waveguide 214 and a second auxiliary detector waveguide 216. The first auxiliary detector waveguide 214 carries a first portion of the second composite signal to a first auxiliary light sensor 218 that converts the first portion of the second composite signal to a first auxiliary electrical signal. The second auxiliary detector waveguide 216 carries a second portion of the second composite signal to a second auxiliary light sensor 220 that converts the second portion of the second composite signal to a second auxiliary electrical signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).


In some instances, the second signal combiner 212 splits the second composite signal such that the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) included in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal but the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal. Alternately, the second signal combiner 212 splits the second composite signal such that the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal but the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the first portion of the second composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal.


The first signal combiner 211 combines the first portion of the comparative signal and the first portion of the reference signal into a first composite signal. Due to the difference in frequencies between the first portion of the comparative signal and the first portion of the reference signal, the first composite signal is beating between the first portion of the comparative signal and the first portion of the reference signal.


The first signal combiner 211 also splits the first composite signal onto a first detector waveguide 221 and a second detector waveguide 222. The first detector waveguide 221 carries a first portion of the first composite signal to a first light sensor 223 that converts the first portion of the first composite signal to a first electrical signal. The second detector waveguide 222 carries a second portion of the first composite signal to a second light sensor 224 that converts the second portion of the first composite signal to a second electrical signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).


In some instances, the signal combiner 211 splits the first composite signal such that the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) included in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal but the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal. Alternately, the signal combiner 211 splits the composite signal such that the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal but the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the first portion of the composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal.


When the second signal combiner 212 splits the second composite signal such that the portion of the comparative signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the second composite signal, the signal combiner 211 also splits the composite signal such that the portion of the comparative signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the composite signal. When the second signal combiner 212 splits the second composite signal such that the portion of the reference signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the second composite signal, the signal combiner 211 also splits the composite signal such that the portion of the reference signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the composite signal.


The first reference waveguide 210 and the second reference waveguide 208 are constructed to provide a phase shift between the first portion of the reference signal and the second portion of the reference signal. For instance, the first reference waveguide 210 and the second reference waveguide 208 can be constructed so as to provide a 90 degree phase shift between the first portion of the reference signal and the second portion of the reference signal. As an example, one reference signal portion can be an in-phase component and the other a quadrature component. Accordingly, one of the reference signal portions can be a sinusoidal function and the other reference signal portion can be a cosine function. In one example, the first reference waveguide 210 and the second reference waveguide 208 are constructed such that the first reference signal portion is a cosine function and the second reference signal portion is a sine function. Accordingly, the portion of the reference signal in the second composite signal is phase shifted relative to the portion of the reference signal in the first composite signal, however, the portion of the comparative signal in the first composite signal is not phase shifted relative to the portion of the comparative signal in the second composite signal.


The first light sensor 223 and the second light sensor 224 can be connected as a balanced detector and the first auxiliary light sensor 218 and the second auxiliary light sensor 220 can also be connected as a balanced detector. For instance, FIG. 5B provides a schematic of the relationship between the electronics, the first light sensor 223, the second light sensor 224, the first auxiliary light sensor 218, and the second auxiliary light sensor 220. The symbol for a photodiode is used to represent the first light sensor 223, the second light sensor 224, the first auxiliary light sensor 218, and the second auxiliary light sensor 220 but one or more of these sensors can have other constructions. In some instances, all of the components illustrated in the schematic of FIG. 5B are included on the LIDAR chip. In some instances, the components illustrated in the schematic of FIG. 5B are distributed between the LIDAR chip and electronics located off of the LIDAR chip.


The electronics connect the first light sensor 223 and the second light sensor 224 as a first balanced detector 225 and the first auxiliary light sensor 218 and the second auxiliary light sensor 220 as a second balanced detector 226. In particular, the first light sensor 223 and the second light sensor 224 are connected in series. Additionally, the first auxiliary light sensor 218 and the second auxiliary light sensor 220 are connected in series. The serial connection in the first balanced detector is in communication with a first data line 228 that carries the output from the first balanced detector as a first data signal. The serial connection in the second balanced detector is in communication with a second data line 232 that carries the output from the second balanced detector as a second data signal. The first data signal is an electrical representation of the first composite signal and the second data signal is an electrical representation of the second composite signal. Accordingly, the first data signal includes a contribution from a first waveform and a second waveform and the second data signal is a composite of the first waveform and the second waveform. The portion of the first waveform in the second data signal is phase-shifted relative to the portion of the first waveform in the first data signal, but the portion of the second waveform in the second data signal is in-phase with the portion of the second waveform in the first data signal. For instance, the second data signal includes a portion of the reference signal that is phase shifted relative to a different portion of the reference signal that is included in the first data signal. Additionally, the second data signal includes a portion of the comparative signal that is in-phase with a different portion of the comparative signal that is included in the first data signal.
The first data signal and the second data signal are beating as a result of the beating between the comparative signal and the reference signal, i.e. the beating in the first composite signal and in the second composite signal.


The local electronics 32 include a beat frequency finder 238 configured to find the beat frequency of the composite signal. The beat frequency finder 238 can perform a mathematical transform on the first data signal and the second data signal. For instance, the mathematical transform can be a complex Fourier transform with the first data signal and the second data signal as inputs. Since the first data signal is an in-phase component and the second data signal is its quadrature component, the first data signal and the second data signal together act as a complex data signal where the first data signal is the real component and the second data signal is the imaginary component of the input.


The beat frequency finder 238 includes a first Analog-to-Digital Converter (ADC) 264 that receives the first data signal from the first data line 228. The first Analog-to-Digital Converter (ADC) 264 converts the first data signal from an analog form to a digital form and outputs a first digital data signal. The beat frequency finder 238 includes a second Analog-to-Digital Converter (ADC) 266 that receives the second data signal from the second data line 232. The second Analog-to-Digital Converter (ADC) 266 converts the second data signal from an analog form to a digital form and outputs a second digital data signal. The first digital data signal is a digital representation of the first data signal and the second digital data signal is a digital representation of the second data signal. Accordingly, the first digital data signal and the second digital data signal act together as a complex signal where the first digital data signal acts as the real component of the complex signal and the second digital data signal acts as the imaginary component of the complex data signal.


The beat frequency finder 238 includes a transformer 268 that receives the complex data signal. For instance, the transformer 268 receives the first digital data signal from the first Analog-to-Digital Converter (ADC) 264 as an input and also receives the second digital data signal from the second Analog-to-Digital Converter (ADC) 266 as an input. The transformer 268 can be configured to perform a mathematical transform on the complex signal so as to convert from the time domain to the frequency domain. The mathematical transform can be a complex transform such as a complex Fast Fourier Transform (FFT). A complex transform such as a complex Fast Fourier Transform (FFT) provides an unambiguous solution for the shift in frequency of the LIDAR input signal relative to the LIDAR output signal that is caused by the radial velocity between the reflecting object and the LIDAR chip. The electronics use the one or more frequency peaks output from the transformer 268 for further processing to generate the LIDAR data (distance and/or radial velocity between the reflecting object and the LIDAR chip or LIDAR core). The local electronics 32 can include a peak finder (not shown) to identify the beat frequencies of the one or more frequency peaks.
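The role of the complex transform can be illustrated with a short sketch; the sample rate, signal length, and beat frequency below are arbitrary illustrative values, not parameters from the disclosure. Because the first and second digital data signals act as the real and imaginary components of a complex signal, a complex FFT distinguishes positive from negative beat frequencies:

```python
# Illustrative values only; the sample rate, length, and beat frequency
# are arbitrary choices for this sketch.
import numpy as np

fs = 1024.0                              # sample rate, Hz (assumed)
f_beat = -80.0                           # signed beat frequency, Hz (assumed)
t = np.arange(256) / fs
i_data = np.cos(2 * np.pi * f_beat * t)  # first data signal (in-phase)
q_data = np.sin(2 * np.pi * f_beat * t)  # second data signal (quadrature)

# The complex FFT of I + jQ places the peak at the signed beat frequency.
spectrum = np.fft.fft(i_data + 1j * q_data)
freqs = np.fft.fftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(np.abs(spectrum))]   # signed frequency of the peak
```

A real-valued FFT of the in-phase signal alone would show peaks at both +80 Hz and −80 Hz, leaving the sign of the frequency shift ambiguous; the complex transform resolves it.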


The local electronics 32 can include a preliminary LIDAR data generator 269 configured to receive the beat frequencies from the transformer 268. The preliminary LIDAR data generator 269 is configured to generate preliminary LIDAR data for a sample region. LIDAR data for a sample region includes values for the radial velocity and/or separation distance between the LIDAR system and an object in the sample region. In some instances, the LIDAR data includes an indicator that the LIDAR data for that sample region is “not available.” In some instances, the preliminary LIDAR data generator 269 calculates LIDAR data that serves as the preliminary LIDAR data. For instance, in some instances, the preliminary LIDAR data generator 269 calculates the radial velocity and/or separation distance between the LIDAR system and an object in the sample region as the preliminary LIDAR data. As will be discussed below, other examples of preliminary LIDAR data that can be calculated by the preliminary LIDAR data generator include, but are not limited to, possible solutions for the LIDAR data for the sample region and a potential radial velocity magnitude indicator and/or a velocity magnitude indicator for the sample region. The transformer 268 and/or the preliminary LIDAR data generator 269 can execute the attributed functions using firmware, hardware, software, or a combination thereof.


Although FIG. 5A illustrates signal combiners that combine a portion of the reference signal with a portion of the comparative signal, the signal processor can include a single signal combiner that combines the reference signal with the comparative signal so as to form a composite signal. As a result, at least a portion of the reference signal and at least a portion of the comparative signal can be combined to form a composite signal. The combined portion of the reference signal can be the entire reference signal or a fraction of the reference signal and the combined portion of the comparative signal can be the entire comparative signal or a fraction of the comparative signal.


The electronics tune the frequency of the system output signal over time. The system output signal has a frequency versus time pattern with a repeated cycle. FIG. 5C shows an example of a suitable frequency versus time pattern for the system output signal. The base frequency of the system output signal (fo) can be the frequency of the system output signal at the start of a cycle.



FIG. 5C shows frequency versus time for a sequence of two cycles labeled cyclej and cyclej+1. In some instances, the frequency versus time pattern is repeated in each cycle as shown in FIG. 5C. The illustrated cycles do not include re-location periods and/or re-location periods are not located between cycles. As a result, FIG. 5C illustrates the results for a continuous scan.


Each cycle includes M data periods that are each associated with a period index m and are labeled DPm. In the example of FIG. 5C, each cycle includes two data periods labeled DPm with m=1 and m=2. In some instances, the frequency versus time pattern is the same for the data periods that correspond to each other in different cycles, as is shown in FIG. 5C. Corresponding data periods are data periods with the same period index. As a result, the data periods labeled DP1 in different cycles can be considered corresponding data periods, and the associated frequency versus time patterns are the same in FIG. 5C. At the end of a cycle, the electronics return the frequency to the same frequency level at which it started the previous cycle.


During the data period DPm, the electronics operate the light source such that the frequency of the system output signal changes at a linear rate αm (the chirp rate). In FIG. 5C, α2=−α1.



FIG. 5C labels sample regions that are each associated with a sample region index k and are labeled SRk. FIG. 5C labels sample regions SRk−1 through SRk+1. Each sample region is illuminated with the system output signal during the data periods that FIG. 5C shows as associated with the sample region. For instance, sample region SRk+1 is illuminated with the system output signal during the data period labeled DP1 within cycle j+1 and the data period labeled DP2 within cycle j+1. Accordingly, the sample region labeled SRk+1 is associated with the data periods labeled DP1 and DP2 within cycle j+1. The sample region indices k can be assigned relative to time. For instance, the sample regions can be illuminated by the system output signal in the sequence indicated by the index k. As a result, the sample region SR10 can be illuminated after sample region SR9 and before SR11.


The frequency output from the Complex Fourier transform represents the beat frequency of the composite signals that each include a comparative signal beating against a reference signal. The beat frequencies from two or more different data periods that are associated with the same sample region can be combined to generate the LIDAR data. For instance, the beat frequency determined from DP1 during the illumination of sample region SRk can be combined with the beat frequency determined from DP2 during the illumination of sample region SRk to determine the LIDAR data for sample region SRk. As an example, the following equation applies during a data period where the electronics increase the frequency of the outgoing LIDAR signal during the data period, such as occurs in data period DP1 of FIG. 5C: fub=−fd+αuτ where fub is the frequency provided by the transformer 268, fd represents the Doppler shift (fd=2Vkfc/c) where fc represents the optical frequency (fo) and c represents the speed of light, Vk is the radial velocity between the reflecting object and the LIDAR core where the direction from the reflecting object toward the chip is assumed to be the positive direction, τ represents the round-trip delay of the system output signal, and αu represents the chirp rate (αm) for a data period where the frequency of the system output signal increases with time (α1 in this case). The radial velocity can be a relative radial velocity since the LIDAR core and the object can both be in motion or the LIDAR core and/or the object can be stationary.


The following equation applies during a data period where the frequency of the outgoing LIDAR signal decreases, such as occurs in data period DP2 of FIG. 5C: fdb=−fd−αdτ where fdb is a frequency provided by the transformer 268 and αd represents the chirp rate (αm) for the data period where the frequency of the system output signal decreases with time (α2 in this case). In these two equations, fd and τ are unknowns. These equations can be solved for the two unknowns. The electronics, such as a preliminary LIDAR data generator 269, can calculate the radial velocity for sample region k (Vk) from the Doppler shift (Vk=c*fd/(2fc)) and/or the separation distance for sample region k (Rk) from Rk=c*τ/2.
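The two data-period equations above can be solved in closed form for fd and τ. The following sketch assumes the sign conventions of the equations as written; the function and variable names are illustrative and not from the source:

```python
# Sketch of solving the up-chirp and down-chirp beat-frequency equations
#   f_ub = -fd + alpha_u * tau  and  f_db = -fd - alpha_d * tau
# for the Doppler shift (fd) and round-trip delay (tau).

C = 299_792_458.0  # speed of light (m/s)

def solve_lidar_data(f_ub, f_db, alpha_u, alpha_d, f_c):
    """f_ub, f_db: beat frequencies from the up- and down-chirp data periods (Hz).
    alpha_u, alpha_d: chirp-rate magnitudes (Hz/s). f_c: optical frequency (Hz).
    Returns (radial velocity Vk in m/s, separation distance Rk in m)."""
    tau = (f_ub - f_db) / (alpha_u + alpha_d)  # round-trip delay
    fd = alpha_u * tau - f_ub                  # Doppler shift
    v = C * fd / (2.0 * f_c)                   # radial velocity Vk
    r = C * tau / 2.0                          # separation distance Rk
    return v, r
```

With equal chirp-rate magnitudes, the delay reduces to the beat-frequency difference over twice the chirp rate, and the Doppler shift to minus the average of the two beat frequencies, which matches the two equations term by term.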


When a LIDAR core includes a signal processor constructed as disclosed in the context of FIG. 5A through FIG. 5C and the electronics are configured to calculate the radial velocity and distance between the LIDAR core and the objects for each of the sample regions, the LIDAR core is suitable for use as a range and velocity core. However, a LIDAR core can include a signal processor constructed as disclosed in the context of FIG. 5A through FIG. 5B and the electronics can be configured to calculate the radial velocity without calculating the distance. In these instances, the LIDAR core is suitable for use as a velocity core.


In an example of a velocity core, the electronics can operate the light source 4 such that the system output signal having a frequency that is a function of time disclosed in the context of FIG. 5C is replaced by a system output signal whose frequency is not a function of time. For instance, the frequency of the system output signal can be constant during each of the data periods shown in FIG. 5C. As an example, the system output signal can be a continuous wave (CW). For instance, the outgoing LIDAR signal, and accordingly the system output signal, can be an unchirped continuous wave (CW). As an example, the outgoing LIDAR signal, and accordingly the system output signal, can be represented by Equation 2: G*cos(H*t) where G and H are constants and t represents time. In some instances, G represents the square root of the power of the outgoing LIDAR signal.


Since the frequency of the system output signal is constant, changing the distance between the reflecting object and the LIDAR chip does not cause a change to the frequency of the LIDAR input signal. As a result, the separation distance does not contribute to the shift in the frequency of the LIDAR input signal relative to the frequency of the LIDAR output signal. Accordingly, the effect of the separation distance has been removed or substantially removed from the shift in the frequency of the LIDAR input signal relative to the frequency of the system output signal.


A velocity core can have a signal processor constructed as disclosed in the context of FIG. 5A and FIG. 5B. As discussed above in the context of FIG. 5B, the first Analog-to-Digital Converter (ADC) 264 receives the first data signal 228 and outputs a first digital data signal. The second Analog-to-Digital Converter (ADC) 266 receives the second data signal and outputs a second digital data signal. The first digital data signal is a digital representation of the first data signal and the second digital data signal is a digital representation of the second data signal. Accordingly, the first digital data signal and the second digital data signal act together as a complex signal where the first digital data signal acts as the real component of the complex signal and the second digital data signal acts as the imaginary component of the complex data signal.


The transformer 268 receives the complex data signal. The transformer 268 can be configured to perform a mathematical transform on the complex signal so as to convert from the time domain to the frequency domain. The mathematical transform can be a complex transform such as a complex Fast Fourier Transform (FFT). A complex transform such as a complex Fast Fourier Transform (FFT) provides an unambiguous solution for the shift in the frequency of the LIDAR input signal relative to the LIDAR output signal that is caused by the radial velocity between the reflecting object and the LIDAR chip. Since the frequency shift provided by the transformer 268 does not have input from a frequency shift due to the separation distance between the reflecting object and the LIDAR chip, and because of the complex nature of the velocity data signal, the output of the transformer 268 can be used to calculate the radial velocity between the reflecting object and the LIDAR chip. For instance, the electronics, such as a preliminary LIDAR data generator, can approximate the radial velocity between the reflecting object and the LIDAR chip (v) using Equation 4: v=c*fd/(2*fc) where fd is approximated as the peak frequency output from the transformer 268, c is the speed of light, and fc represents the frequency of the LIDAR output signal. As a result, multiple data periods and/or chirping are not needed for a velocity core to calculate the radial velocity.
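As a sketch of this processing chain, the following assumes the two digital data signals are available as I/Q sample arrays; a complex FFT yields a signed peak frequency that is taken as the Doppler shift and converted to a radial velocity per Equation 4. The names, the 400 MSPS sampling rate, and the 1550 nm-band optical frequency are illustrative assumptions:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def radial_velocity_from_iq(i_samples, q_samples, sample_rate, f_c):
    """Form the complex signal from the first (real) and second (imaginary)
    digital data signals, take a complex FFT, and convert the signed peak
    frequency (taken as the Doppler shift fd) to a radial velocity."""
    z = np.asarray(i_samples) + 1j * np.asarray(q_samples)
    spectrum = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(z), d=1.0 / sample_rate)  # signed frequencies
    fd = freqs[np.argmax(np.abs(spectrum))]              # unambiguous peak
    return C * fd / (2.0 * f_c)                          # Equation 4

# Example: a synthetic composite signal beating at +2 MHz, sampled at 400 MSPS
fs = 400e6
t = np.arange(4000) / fs
i_sig = np.cos(2 * np.pi * 2e6 * t)
q_sig = np.sin(2 * np.pi * 2e6 * t)
v = radial_velocity_from_iq(i_sig, q_sig, fs, 193.5e12)
```

Because the transform is complex, positive and negative Doppler shifts land in different bins, which is why no second data period is needed to resolve the sign of the velocity.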


The first Analog-to-Digital Converter (ADC) 264 converts the first data signal to the first digital data signal by sampling the first data signal at a sampling rate. Similarly, the second Analog-to-Digital Converter (ADC) 266 converts the second data signal to the second digital data signal by sampling the second data signal at a sampling rate. When a velocity core uses a continuous wave as the system output signal, the effects of the distance between the reflecting object and the LIDAR chip are effectively removed from the beating of the composite signal and the resulting electrical signals. Accordingly, the beat frequency of the composite signal is reduced and the required sampling rate is reduced. For instance, the sampling rate of the Analog-to-Digital Converters in a range and velocity core can be on the order of 4 GSPS. In contrast, the sampling rate of the Analog-to-Digital Converters in a velocity core can be on the order of 400 MSPS.
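The sampling-rate difference can be reasoned about from the maximum beat frequency each core must digitize. A rough sketch with illustrative numbers (the 200 m range, chirp rate, and Doppler bound are assumptions, not values from the source):

```python
C = 299_792_458.0  # speed of light (m/s)

def max_beat_frequency(max_range_m, chirp_rate_hz_per_s, max_doppler_hz):
    """Rough upper bound on the composite-signal beat frequency. With a
    chirped system output signal the range term (chirp rate * round-trip
    delay) dominates; with an unchirped CW signal only the Doppler remains."""
    tau_max = 2.0 * max_range_m / C  # worst-case round-trip delay
    return chirp_rate_hz_per_s * tau_max + max_doppler_hz

# Illustrative numbers (assumptions, not values from the source):
range_core_beat = max_beat_frequency(200.0, 1e15, 50e6)    # chirped RV core
velocity_core_beat = max_beat_frequency(200.0, 0.0, 50e6)  # unchirped V core
# With complex (I/Q) sampling, the ADC rate must be at least the maximum
# beat frequency, so the velocity core's ADCs can run roughly an order of
# magnitude slower than the range and velocity core's ADCs.
```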


Although the signal processor of FIG. 5A and FIG. 5B is disclosed in the context of a complex data signal, a real data signal can be used. As a result, the signal processor of FIG. 5A and FIG. 5B can be modified so as to exclude the components associated with the second data signal altogether, and the transformer 268 can be configured to perform a real Fourier Transform (FFT). Accordingly, the signal processor can include a single signal combiner 211 and a single Analog-to-Digital Converter (ADC).


A LIDAR system can include one or more range and velocity cores and one or more velocity cores. As an example, FIG. 6A illustrates a LIDAR system that includes multiple different cores on a common support 140. Each of the LIDAR cores can be constructed as disclosed in the context of FIG. 1A through FIG. 5 or can have an alternate construction. One or more of the LIDAR cores can be a velocity core and one or more of the LIDAR cores can be a range and velocity core. For the purpose of illustration, the LIDAR cores in the LIDAR system of FIG. 6A include one range and velocity core 270 and three velocity cores 272.


Each of the LIDAR cores outputs a different system output signal. The system output signals are each received by a redirection component 274 that re-directs the system output signals. Suitable redirection components 274 include, but are not limited to, convex lenses and concave mirrors. The system output signals output from the redirection component 274 are received by one or more beam steering components 276 that output the system output signals. The direction that the system output signals travel away from the LIDAR system is labeled d2 in FIG. 6A. The electronics can operate the one or more beam steering components 276 so as to steer each of the system output signals to different sample regions in a field of view. As is evident from the arrows labeled A and B in FIG. 6A, the one or more beam steering components 276 can be configured such that the electronics can steer the system output signals in two dimensions or in three dimensions. As a result, the one or more beam steering components 276 can function as a beam-steering mechanism that is operated by the electronics so as to steer the system output signals within the field of view of the LIDAR system. Suitable beam steering components 276 include, but are not limited to, movable mirrors, MEMS mirrors, optical phased arrays (OPAs), optical gratings, and actuated optical gratings. In some instances, the redirection component 274 and/or the one or more beam steering components 276 are configured to operate on the system output signals such that the system output signals are collimated or substantially collimated as they travel away from the LIDAR system. Additionally or alternately, the LIDAR system can include one or more collimating optical components (not illustrated) that operate on the system output signals such that the system output signals are collimated or substantially collimated as they travel away from the LIDAR system.


As noted above, the sampling rates for the ADCs on the velocity cores can be lower than the sampling rates for the ADCs on the range and velocity cores. In some instances, the LIDAR assembly includes velocity cores having ADCs with sampling rates greater than 100, 200, or 300 MSPS and less than 500, 800, or 1000 MSPS (mega-samples per second) and/or range and velocity cores having ADCs with sampling rates greater than 1, 2, or 3 GSPS and less than 5, 8, or 10 GSPS (giga-samples per second). In some instances, a ratio of the sampling rates for the ADCs on the one or more range and velocity cores to the sampling rates for the ADCs on the one or more velocity cores is greater than 2:1, 5:1, or 10:1 and less than 15:1, 50:1, or 100:1. Additionally or alternately, the velocity cores can be lower bandwidth cores than the range and velocity cores. As an example, the light detectors in the velocity cores can be used with Transimpedance Amplifiers (TIAs) that convert the current of the resulting data signal output from the light detectors to a voltage. For instance, FIG. 5B illustrates Transimpedance Amplifiers 282 that are optionally positioned along the first data line 228 and the second data line 232 so as to convert the current of the first data signal to a voltage and so as to convert the current of the second data signal to a voltage. The Transimpedance Amplifiers (TIAs) and the associated circuitry in the one or more velocity cores can be configured to operate at a lower bandwidth than the Transimpedance Amplifiers (TIAs) and the associated circuitry in the one or more range and velocity cores. In some instances, the LIDAR assembly includes one or more velocity cores that operate at a bandwidth of more than 0.1 GHz, 0.2 GHz, or 0.3 GHz and less than 0.5 GHz, 1 GHz, or 2 GHz and one or more range and velocity cores that operate at a bandwidth of more than 0.5 GHz, 1 GHz, or 1.5 GHz and less than 2 GHz, 3 GHz, or 5 GHz. In some instances, a ratio of the bandwidth for the one or more range and velocity cores to the bandwidth for the one or more velocity cores is greater than 2:1, 4:1, or 8:1 and less than 10:1, 15:1, or 20:1. The reduced bandwidth requirements of the velocity cores relative to the range and velocity cores allow the velocity cores to operate at lower frequencies and accordingly reduce the costs associated with the components on the velocity cores. In some instances, the LIDAR assembly includes one or more velocity cores that have TIAs that operate at a bandwidth of more than 0.1 GHz, 0.2 GHz, or 0.3 GHz and less than 0.5 GHz, 1 GHz, or 2 GHz and one or more range and velocity cores that have TIAs that operate at a bandwidth of more than 0.5 GHz, 1 GHz, or 1.5 GHz and less than 2 GHz, 3 GHz, or 5 GHz. In some instances, a ratio of the bandwidth for the TIAs in the one or more range and velocity cores to the bandwidth for the TIAs in the one or more velocity cores is greater than 2:1, 4:1, or 8:1 and less than 10:1, 15:1, or 20:1.



FIG. 6B illustrates a relationship between the data periods disclosed in FIG. 5C and the field of view for the LIDAR system. For the purposes of illustration, only the system output signal for range and velocity core C1 is illustrated. The field of view is defined by a collection of sample regions labeled SRk−1 through SRk+1. The illustrated system output signal is scanned in the direction of the solid line labeled “scan.” The system output signal is scanned through a series of the sample regions (SRk−1 through SRk+1). The collection of sample regions that are scanned by the system output signal make up the field of view for the LIDAR system. Object(s) in the field of view can change with time. As a result, the locations of the sample regions are determined relative to the LIDAR system rather than relative to the environment in which the LIDAR system is positioned. For instance, the sample regions can be defined as being located within a range of angles relative to the LIDAR system. The dashed line labeled scan in FIG. 6B illustrates that the scan of the sample regions can be repeated in multiple scan cycles. Accordingly, each scan cycle can scan the system output signal through the same sample regions even when the objects in the field of view have moved and/or changed. The sample regions in the field of view can be scanned in the same sequence during different scan cycles or can be scanned in different sequences in different scan cycles.


The portion of each sample region that corresponds to one of the data periods is labeled DP1 or DP2 in FIG. 6B. As is evident from FIG. 5C, the chirp rate of the system output signal from core C1 during data period DP1 is α1 and the chirp rate during data period DP2 is α2. As noted above, the system output signals from the velocity cores C2-C4 may not be frequency chirped and may have a constant frequency.


Each of the sample regions includes a dashed line labeled Lk. The dashed line labeled Lk can serve as a location reference line for sample region k. FIG. 6B includes location reference lines labeled Lk−1 through Lk+1. In FIG. 6B, each of the location reference lines (Lk) is drawn along the longitudinal axis of the sample region with sample region index k.



FIG. 6B illustrates multiple orientation angles labeled θk where k represents the sample region index k. The orientation angle θk can measure the angular orientation of sample region SRk relative to the LIDAR system. In some instances, the orientation angles θk are measured relative to the location reference lines Lk as shown in FIG. 6B. As a result, the orientation angles θk can measure the angular orientation of the location reference lines Lk. Because FIG. 6B illustrates the LIDAR system having a two-dimensional field of view, a single angle (θk) can define the angular orientation of sample region SRk relative to the LIDAR system; however, the field of view is generally three dimensional. As a result, the LIDAR system can use two or more angles and/or other variables to define the orientation of a sample region relative to the LIDAR system.



FIG. 6C illustrates objects in the field of view disclosed in the context of FIG. 6B. For instance, two different objects are located in the field of view of the LIDAR system. Each of the location reference lines (Lk) extends a distance Rk from the LIDAR system to a field location labeled flk. The distance Rk represents the value that the electronics in the range and velocity core determine for the distance between the LIDAR system and an object as a result of the system output signal transmitted during illumination of sample region SRk. As a result, the field location labeled flk can represent the location along the location reference line (Lk) where the electronics determine that a surface of an object that reflects the system output signal is located. The collection of field locations in the field of view can serve as a point cloud.


The field locations (flk) that are associated with a distance, Rk, can also be associated with a radial velocity Vk where k represents the sample region index. The radial velocity Vk can be the radial velocity that the electronics generate for sample region with sample region index k.


The one or more orientation angle(s) (θk) and the distances Rk associated with each field location (flk) can effectively serve as polar coordinates or as spherical coordinates. As a result, the electronics can optionally convert the coordinates of the field locations flk to other coordinate systems including, but not limited to, Cartesian coordinates. As an example, FIG. 6D illustrates an example where multiple different field locations are present in the sample regions of the range and velocity core C1. Additionally, the axes of a Cartesian coordinate system are superimposed on the field of view. As a result, the positions of the field locations flk are also shown relative to the Cartesian coordinate system. The electronics can optionally use the orientation angle (θk) and the distance Rk associated with each field location (flk) to convert the field location (flk) to the other coordinate systems.
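A minimal conversion sketch: treating the orientation angle(s) and the range Rk as spherical coordinates, a field location can be mapped to Cartesian coordinates. The azimuth/elevation convention below is an assumption, since the source leaves the angle definitions open:

```python
import math

def field_location_to_cartesian(r_k, azimuth_rad, elevation_rad=0.0):
    """Convert a field location given as a range Rk and orientation angle(s)
    to Cartesian coordinates. For a two-dimensional field of view the single
    angle theta_k maps to the azimuth and the elevation stays zero."""
    x = r_k * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r_k * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r_k * math.sin(elevation_rad)
    return (x, y, z)
```

The inverse mapping (Cartesian back to a range) is simply the Euclidean norm of the coordinates, which is how a range can be extracted after conversion.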


In some instances, a field location (flk) is not present in all or a portion of the sample regions. For instance, when an object is not present in a sample region (SRk), a beating signal is generally not produced and LIDAR data is not generated for that sample region. As a result, a portion of the sample region indices may not be associated with a field location (flk).


When a core is a range and velocity core, all or a portion of the sample regions illuminated by that core (range and velocity sample regions, RV sample regions) can each be associated with a field location, coordinates, distances Rk, and a radial velocity Vk. In contrast, when a core is a velocity core, all or a portion of the sample regions illuminated by that core (velocity sample regions, V sample regions) can each be associated with a field location, coordinates, and a radial velocity Vk. FIG. 6B illustrates the coordinates as two-dimensional Cartesian coordinates (x,y) although three-dimensional coordinates and/or other coordinate systems are possible. In some instances, the coordinates are the polar coordinates or spherical coordinates of the field locations (flk).
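The per-region data described above (RV sample regions carrying a range, coordinates, and radial velocity; V sample regions initially carrying only a radial velocity and coordinates) can be pictured as a simple record. A sketch with illustrative field names, not from the source:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SampleRegionRecord:
    """One entry of the point cloud. For a V sample region, range_m is None
    until shared LIDAR data (e.g. an interpolated range) is filled in."""
    region_index: int                 # sample region index k
    core_index: int                   # core index i (which core illuminated it)
    radial_velocity: float            # Vk (m/s), isolated LIDAR data
    range_m: Optional[float] = None   # Rk (m); measured for RV regions only
    coordinates: Optional[Tuple[float, float]] = None  # e.g. (x, y)
```

This makes the distinction concrete: an RV record is fully populated from its own beat frequencies, while a V record leaves the range slot open for shared LIDAR data.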



FIG. 7 is a two-dimensional illustration of the field of view for the LIDAR system. For instance, FIG. 7 can represent a projection of a three-dimensional field of view onto a two-dimensional plane. The field of view includes multiple rectangles that can each represent the projection of one of the sample regions in the field of view onto the plane. The sample regions are shown as rectangular for the purposes of illustration and can have other geometries. The sample regions are each labeled SRk where k is from 1 to 30.


The sample regions are each labeled Ci where i is a core index that identifies the core that illuminated the sample region. The core index is an integer that can extend from i=1 to i=M where M is the number of system output signals output by the LIDAR system. The labels Ci indicate which of the system output signals illuminated the sample region. For instance, the sample regions labeled C1 were illuminated by the range and velocity core labeled C1 in FIG. 6A. Accordingly, the sample regions labeled C1 are RV sample regions. In contrast, the sample regions labeled C2 were illuminated by the velocity core labeled C2; the sample regions labeled C3 were illuminated by the velocity core labeled C3; and the sample regions labeled C4 were illuminated by the velocity core labeled C4. As a result, the sample regions labeled C2-C4 are V sample regions.



FIG. 7 includes an arrow labeled A. The arrow indicates the sequence in which the system output signals are scanned through the field of view. Additionally, the sample region indices (k) are assigned to the sample regions chronologically. Accordingly, the sample region labeled with C2 and SR11 was scanned by the velocity core C2 before the sample region labeled with C2 and SR12 was scanned by the velocity core C2.


The LIDAR system includes electronics configured to operate the LIDAR system. The electronics can include local electronics 32 from one or more cores and common electronics 280. The common electronics 280 can be positioned on the common support 140 as shown in FIG. 6A or can be located off of the common support 140. The common electronics 280 can be in the same physical location and/or housing as the local electronics associated with different cores or can be in different locations and/or housings from the local electronics 32 associated with different cores.


The common electronics 280 can be in electrical communication with the local electronics 32 associated with different cores. For instance, the common electronics 280 can include a LIDAR data generator 291 in electrical communication with the local electronics 32 from different cores. FIG. 8 illustrates the common electronics 280 having a LIDAR data generator 291 configured to receive the preliminary LIDAR data generated by the preliminary LIDAR data generators 269 in different cores.


The electrical communication between the common electronics 280 and the local electronics 32 associated with different cores allows the common electronics 280 to access preliminary LIDAR data generated from different cores. For instance, the common electronics 280 can access the range and/or velocity that the local electronics 32 associated with a range and velocity core calculates for a sample region and/or can access the velocity that the local electronics 32 associated with a velocity core calculates for a sample region. The LIDAR data generator 291 can combine preliminary LIDAR data generated by different cores to calculate LIDAR data for one of the sample regions. Additionally or alternately, the LIDAR data generator 291 can combine preliminary LIDAR data generated for different sample regions by the same core to calculate LIDAR data for one of the sample regions.


The preliminary LIDAR data that the electronics generate for a sample region can include isolated LIDAR data and shared LIDAR data. Isolated LIDAR data can be LIDAR data that the local electronics calculate for a sample region where the one or more beat frequencies used to calculate the isolated LIDAR data result from the illumination of that sample region by the system output signal. As a result, the illumination of another sample region by a system output signal is not needed to calculate the isolated LIDAR data for a sample region. For instance, the radial velocity (Vk) for a V sample region with sample region index k can be calculated without illuminating another one of the sample regions with a system output signal. However, the LIDAR data generator 291 can combine isolated LIDAR data and/or shared LIDAR data from different cores and/or that result from illuminating different sample regions so as to approximate shared LIDAR data. As a result, shared LIDAR data for a sample region can be calculated by combining the beat frequencies that result from illuminating different sample regions. As an example, the range (Rk) for a V sample region can be calculated by combining the beat frequencies from multiple different sample regions. Accordingly, the range (Rk) calculated for a V sample region can be shared LIDAR data while the radial velocity (Vk) for the same V sample region can be isolated LIDAR data. In contrast, the range (Rk) and radial velocity (Vk) for the same RV sample region can both be isolated LIDAR data.


An example of calculating shared LIDAR data is calculating the range (Rk) for a subject V sample region by interpolating or extrapolating the coordinates of the field location for the V sample region from the coordinates of the field locations associated with RV sample regions. When the coordinates are polar coordinates or spherical coordinates, the range (distance) for the subject V sample region k (Rk) can be extracted directly from the coordinates. Alternately, a range (distance) for V sample region k (Rk) can be calculated from the coordinates. For instance, when the coordinates are Cartesian coordinates, the coordinates can be converted to polar coordinates or spherical coordinates and the desired range extracted. In some applications of the LIDAR system, the desired data for the V sample region is the coordinates of the field location for that sample region rather than the range. As a result, in some applications, the range is not calculated for all or a portion of the V sample regions for which coordinates are calculated. The range and/or coordinates estimated for a V sample region can serve as location data for the V sample region. Accordingly, interpolation and extrapolation provide an estimate of the location data of an object in a subject sample region. The interpolation or extrapolation is made relative to coordinates and/or ranges from multiple RV sample regions. The field locations in the RV sample regions from which the location data for a V sample region is calculated serve as the known points in an interpolation or extrapolation algorithm.


The LIDAR data generator 291 can identify a subject sample region for which the coordinates and/or a range is to be interpolated or extrapolated and can identify potential sample regions that include potential data points from which the coordinates and/or range is to be interpolated or extrapolated. As an example, FIG. 9 shows the field of view from FIG. 7 where the sample region labeled SR6 for core C4 is identified as the subject sample region. The potential data points include the field locations in RV sample regions that are physically located near the subject sample region. Since the core labeled C1 is the only range and velocity core in the LIDAR system of FIG. 6A, each of the potential data points is labeled C1.


The LIDAR data generator 291 can apply an identification algorithm to identify the potential data points. The identification algorithm can associate a particular group of RV sample regions with each of the different possible subject sample regions. The association can be specific to each of the possible subject sample regions or can be general to multiple different possible subject sample regions. As an example of a specific association, the identification algorithm can associate the group of potential data points shown in FIG. 9 with the specific subject sample region shown in FIG. 9. As an example of a general association, the identification algorithm can associate the pattern of potential data points shown in FIG. 9 with multiple different subject sample regions that are each positioned in the same row of sample regions as the subject sample region of FIG. 9.


In another example of a suitable identification algorithm, the potential data points can be identified by applying identification criteria to the subject sample region and the adjacent RV sample regions. For instance, each of the RV sample regions that has a field location within a distance Rfl of the field location in the subject sample region can be identified as a potential data point, where Rfl is a constant.
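A sketch of this distance-based identification criteria, with illustrative names and 2D coordinates for the field locations:

```python
import math

def potential_data_points(subject_xy, rv_field_locations, r_fl):
    """Identify the RV field locations within a distance r_fl of the subject
    sample region's field location as potential data points. The constant
    r_fl plays the role of Rfl in the text."""
    sx, sy = subject_xy
    return [p for p in rv_field_locations
            if math.hypot(p[0] - sx, p[1] - sy) <= r_fl]
```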


The LIDAR data generator 291 can compare the LIDAR data at the potential data points to one or more criteria to determine which of the potential data points is suitable for use as a known data point. In some instances, the one or more criteria include or consist of velocity criteria that are a function of the radial velocities associated with the potential data points. For instance, the LIDAR data generator 291 can compare the LIDAR data associated with the subject sample region and the data associated with each of the potential data points to one or more velocity criteria to determine whether the potential data point is suitable for use as a known data point.


The velocity criteria can be selected such that the criteria are satisfied when the same object is likely positioned in the subject sample region and in the sample region associated with a potential data point. When an object is positioned in both a subject sample region and the sample region associated with a potential data point, the radial velocity calculated for these sample regions is likely the same or similar. For instance, an example application of velocity criteria can include comparing a variable that is a function of the radial velocity calculated for the subject sample region and the velocity associated with the potential data point to a threshold. The threshold can be a constant or can be a function of the radial velocity calculated for the subject sample region and/or the radial velocity calculated for the potential data point. As an example, the difference between the radial velocity associated with the subject sample region and the radial velocity associated with the potential data point can be compared to a velocity threshold. When the difference is less than the velocity threshold, the velocity criteria is satisfied, but when the difference is greater than or equal to the velocity threshold, the velocity criteria is not satisfied. Another example of applying velocity criteria compares the percentage change from the radial velocity associated with the subject sample region to the radial velocity associated with the potential data point to a velocity threshold. When the percentage change is less than the velocity threshold, the velocity criteria is satisfied, but when the percentage change is greater than or equal to the velocity threshold, the velocity criteria is not satisfied.
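Both applications of the velocity criteria (absolute difference and percentage change against a threshold) can be sketched as follows; the function name and the treatment of a zero subject velocity are assumptions made for illustration:

```python
def satisfies_velocity_criteria(v_subject, v_potential,
                                abs_threshold=None, pct_threshold=None):
    """Return True when the potential data point's radial velocity is close
    enough to the subject region's to suggest the same object. Either or
    both threshold styles from the text can be applied."""
    if abs_threshold is not None:
        if abs(v_subject - v_potential) >= abs_threshold:
            return False
    if pct_threshold is not None:
        if v_subject == 0.0:
            return False  # percentage change undefined; treat as not satisfied
        pct = abs(v_subject - v_potential) / abs(v_subject) * 100.0
        if pct >= pct_threshold:
            return False
    return True
```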


The potential data points that satisfy the one or more criteria are flagged as known data points while the potential data points that do not satisfy the one or more criteria are not flagged as known data points. As an example, the field locations in the RV sample regions labeled F in FIG. 9 can be flagged as known data points while the field locations in the RV sample regions that are not labeled F are not flagged as known data points. The LIDAR data generator 291 uses the coordinates associated with the known data points in an interpolation algorithm or extrapolation algorithm to estimate the coordinates of a field location in the subject sample region. In contrast, the potential data points that are not flagged as known data points are not used in the interpolation algorithm or extrapolation algorithm. As a result, in some instances, the number of known data points used in an interpolation algorithm or an extrapolation algorithm changes in response to the radial velocities calculated for the potential data points.


Suitable interpolation algorithms include, but are not limited to, piecewise constant interpolation, linear interpolation, polynomial interpolation, spline interpolation, interpolation by function approximation, and Gaussian interpolation. Suitable extrapolation algorithms include, but are not limited to, linear extrapolation, polynomial extrapolation, conic extrapolation, and French curve extrapolation.
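As one concrete possibility among the algorithms listed above, the following sketch estimates the subject field location's range with an inverse-distance weighted average of the known data points (a simple form of interpolation by function approximation; the names are illustrative):

```python
import math

def interpolate_range(subject_xy, known_points, eps=1e-9):
    """Estimate the range for a subject V sample region from known data
    points, each given as ((x, y), range_m). Inverse-distance weighting is
    one simple scheme; the text permits many others (linear, polynomial,
    spline, Gaussian interpolation, and so on)."""
    num = den = 0.0
    for (x, y), r in known_points:
        d = math.hypot(x - subject_xy[0], y - subject_xy[1])
        if d < eps:
            return r  # a coincident known point determines the range directly
        w = 1.0 / d
        num += w * r
        den += w
    return num / den
```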



FIG. 10 is a process flow for generating range data for sample regions illuminated by a velocity core in a LIDAR system that includes range and velocity cores.


At process block 300, the local electronics for each of the cores generate the isolated LIDAR data for the sample regions scanned by that core. For instance, the preliminary LIDAR data generators 269 in the velocity cores calculate the radial velocity for each of the V sample regions scanned by the core. Additionally, the preliminary LIDAR data generators 269 in the range and velocity cores calculate the range and radial velocity for each of the RV sample regions scanned by the RV core. As noted above, the one or more orientation angle(s) (θk) associated with a sample region and the range or distance Rk calculated for the sample region can effectively serve as polar or spherical coordinates for a field location in that sample region. Alternately, the local electronics can optionally convert the polar or spherical coordinates to another coordinate system. Accordingly, the preliminary LIDAR data generators 269 associated with each of the range and velocity cores can calculate the radial velocity and coordinates of the field locations associated with the RV sample regions scanned by those cores.


At process block 302, the LIDAR data generator 281 can identify one of the V sample regions to serve as a subject sample region for which coordinates and/or a range are to be estimated. The field location in the subject sample region can serve as a subject field location.


The LIDAR data generator 281 can proceed from process block 302 to process block 304. At process block 304, the LIDAR data generator 281 can perform the identification algorithm so as to identify RV sample regions that contain field locations that may be suitable for serving as known data points. The identified RV sample regions and/or field locations can serve as potential data points.


The LIDAR data generator 281 can proceed from process block 304 to process block 306. At process block 306, the LIDAR data generator 281 can initialize the identified potential data points and/or the RV sample regions that include the identified potential data points. For instance, the LIDAR data generator 281 can flag each of the identified potential data points and/or the sample regions containing the identified potential data points as not being a known data point.


The LIDAR data generator 281 can proceed from process block 306 to process block 308. At process block 308, the LIDAR data generator 281 can compare the radial velocity associated with the subject field location (or the subject V sample region) and the radial velocity associated with a subject one of the potential data points (or the potential RV sample region) to the one or more velocity criteria so as to determine whether it is likely that the same object is positioned in the sample region associated with the subject field location and also in the sample region associated with the potential field location. This comparison can be repeated until each of the potential data points has served as the subject potential data point, until a pre-determined number of potential data points has served as the subject potential data point, or until a pre-determined number of potential data points has satisfied the one or more velocity criteria. The LIDAR data generator 281 can flag each of the potential data points that satisfies the one or more velocity criteria as a known data point.


The LIDAR data generator 281 can proceed from process block 308 to process block 310. At process block 310, the LIDAR data generator 281 can interpolate or extrapolate coordinates for the subject field location from the coordinates of the known data point. The potential data points that are not flagged as a known data point are not used in the interpolation or extrapolation.


In some instances, the interpolation or extrapolation of the coordinates includes an interpolation or extrapolation of the range associated with the subject field location. For instance, when the interpolation or extrapolation of the coordinates is done in polar coordinates or spherical coordinates, the range is one of the coordinate variables. Alternately, the range can be calculated from the coordinates for the subject field location. When the range is determined for a subject field location flk, the range can serve as the distance Rk for the V sample region SRk. As a result, the V sample region SRk can be associated with a radial velocity Vk. Accordingly, a V sample region that has served as the subject sample region can be associated with a field location, an estimated distance Rk, and a radial velocity Vk.


The LIDAR data generator 281 can proceed from process block 310 to determination block 312 where a determination is made whether the desired number of V sample regions has served as the subject sample region. It can be desirable for all or a portion of the sample regions in the field of view for the LIDAR system to serve as the subject sample region. In some instances, each of the V sample regions in the field of view for the LIDAR system is to serve as the subject sample region. As a result, in some instances, the common electronics can determine whether each of the V sample regions in the field of view for the LIDAR system has served as the subject sample region. When the determination is negative, the common electronics can return to process block 302.


When the determination at determination block 312 is positive, the LIDAR data generator 281 has estimated distance (Rk) values for all or a portion of the V sample regions. The estimated distance (Rk) values are estimated from the beat frequencies of the composite signals that result from illuminating multiple different sample regions with a system output signal. As a result, the estimated distance (Rk) values can be considered shared LIDAR data. The estimated distance (Rk) values for the V sample regions can be added to the isolated LIDAR data for the RV regions and the isolated LIDAR data for the V regions to provide LIDAR data for the LIDAR system's field of view. As a result, all or a portion of the V sample regions in the field of view of the LIDAR system are associated with a field location, an estimated distance Rk, and a radial velocity Vk. Similarly, all or a portion of the RV sample regions in the field of view of the LIDAR system are associated with a field location, an estimated distance Rk, and a radial velocity Vk.


When the determination at determination block 312 is positive, the common electronics 280 proceed from determination block 312 to process block 314. At process block 314, the common electronics can perform further processing of the LIDAR data for the LIDAR system's field of view. The further processing is a function of the application of the LIDAR system. Examples of further processing applications include, but are not limited to, image generation, control of self-driving vehicles, point cloud generation, object recognition, and statistical analysis. The further processing can make use of all or a portion of the ranges, radial velocities, and field locations associated with the V sample regions and associated with the RV sample regions in the field of view for the LIDAR system.


The LIDAR core constructions disclosed above are examples and other LIDAR core constructions can be used. As an example, FIG. 11A through FIG. 11C illustrate an example of a LIDAR core suitable for use as a velocity core. FIG. 11A is a topview of the LIDAR core of FIG. 1A modified to include a frequency shifter 298 positioned along the utility waveguide 12. The electronics can operate the frequency shifter 298 so as to create a frequency offset between the LIDAR output signal and the reference signal. Since the system output signal includes or consists of light from the LIDAR output signal, there is also a frequency offset between the system output signal and the reference signal. Suitable frequency shifters include, but are not limited to, acousto-optic frequency shifters. The frequency shifter can be integrated into the LIDAR chip or into the LIDAR core or can be a separate electro-optical component mounted on the LIDAR chip or the LIDAR core using technologies such as flip-chip mounting technologies.


The LIDAR core of FIG. 11A can provide an unambiguous solution for the beat frequency without the second data signal and accordingly without the imaginary component of a complex signal. As a result, the signal processor used in conjunction with the LIDAR core of FIG. 11A need not generate the second composite signal. Accordingly, the signal processor used in conjunction with the LIDAR core of FIG. 11A can be the signal processor of FIG. 5A modified to exclude the components needed to generate and process the second composite signal as shown in FIG. 11B. Similarly, the relationship between the electronics, the first light sensor 223, and the second light sensor 224 can be constructed as disclosed in the context of FIG. 5B but with the exclusion of the first auxiliary light sensor 218 and the second auxiliary light sensor 220 as illustrated in FIG. 11C. As is evident from FIG. 11C, the transformer 268 receives the first data signal but not the second data signal. As a result, the first data signal serves as a real data signal that is received by the transformer 268. A LIDAR core having a transformer 268 that receives a real data signal can be considered a real data signal LIDAR core.


The beat frequency finder 238 includes a transformer 268 that receives the first digital data signal from the first Analog-to-Digital Converter (ADC) 264 as an input. The transformer 268 can be configured to perform a mathematical transform on the first digital data signal so as to convert from the time domain to the frequency domain. The mathematical transform can be a real transform such as a Fourier Transform. The local electronics use the one or more frequency peaks output from the transformer 268 for further processing to generate the LIDAR data (distance and/or radial velocity between the reflecting object and the LIDAR chip or LIDAR core). The local electronics 32 can include a peak finder (not shown) to identify the beat frequencies of the one or more frequency peaks. The transformer 268 and/or the preliminary LIDAR data generator 269 can execute the attributed functions using firmware, hardware or software or a combination thereof.


The local electronics 32 can include a preliminary LIDAR data generator 269 configured to receive the beat frequencies from the transformer 268. The preliminary LIDAR data generator 269 is configured to generate preliminary LIDAR data for a sample region. LIDAR data for a sample region includes values for the radial velocity and/or separation distance between the LIDAR system and an object in the sample region. In some instances, the LIDAR data includes an indicator that the LIDAR data for that sample region is "not available." In some instances, the preliminary LIDAR data generator 269 calculates LIDAR data that serves as the preliminary LIDAR data. For instance, in some instances, the preliminary LIDAR data generator 269 calculates the radial velocity and/or separation distance between the LIDAR system and an object in the sample region as the preliminary LIDAR data. As will be discussed below, other examples of preliminary LIDAR data that can be calculated by the preliminary LIDAR data generator 269 include, but are not limited to, possible solutions for the LIDAR data for the sample region and a potential radial velocity magnitude indicator for the sample region.


Different cores having local electronics 32 configured as disclosed in the context of FIG. 11C can be in electrical communication with common electronics 280 as disclosed in the context of FIG. 8. Alternately, one or more cores having local electronics 32 configured as disclosed in the context of FIG. 11C and one or more cores having local electronics configured as disclosed in the context of FIG. 5B can be in electrical communication with common electronics 280 as disclosed in the context of FIG. 8.


The system output signal from the LIDAR core of FIG. 11A can be a continuous wave (CW). For instance, the outgoing LIDAR signal, and accordingly the system output signal, can be an unchirped continuous wave (CW). As an example, the outgoing LIDAR signal, and accordingly the system output signal, can be represented by Equation 2: G*cos(H*t) where G and H are constants and t represents time. In some instances, G represents the square root of the power of the outgoing LIDAR signal. Local electronics, such as the preliminary LIDAR data generator 269, can approximate the radial velocity between the reflecting object and the LIDAR chip (v) using Equation 5: v=c*fft/(2*(fc+fos)) where fft represents the peak frequency output from the transformer 268, c is the speed of light, fc represents the frequency of the LIDAR output signal and accordingly the system output signal, and fos represents the offset between the frequency of the system output signal and the reference signal. In many instances, fc>>fos and the equation for radial velocity can be approximated as v=c*fft/(2*fc). As a result, multiple data periods and/or chirping are not needed to generate the radial velocity.
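Equation 5 above can be sketched numerically as follows. The function name and the example frequency values are illustrative assumptions; the formula itself, v = c*fft/(2*(fc+fos)), is taken from the text.

```python
C = 299_792_458.0  # speed of light (m/s)

# Sketch of Equation 5 from the text: v = c*fft/(2*(fc + fos)), where fft is
# the peak frequency output from the transformer, fc the frequency of the
# system output signal, and fos the frequency-shifter offset.

def radial_velocity(f_ft, f_c, f_os=0.0):
    """Radial velocity from the beat-frequency peak of an unchirped CW core."""
    return C * f_ft / (2.0 * (f_c + f_os))
```

When fc >> fos, the default f_os=0.0 reproduces the approximation v = c*fft/(2*fc) noted in the text, so a single unchirped data period suffices to obtain the radial velocity.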


The LIDAR cores illustrated in any of FIG. 1A through FIG. 1C can also be operated as a range and velocity core when used in conjunction with the signal processor of FIG. 11B in combination with the local electronics of FIG. 11C to provide a real data signal core. For instance, the frequency versus time pattern of FIG. 5C can represent the frequency versus time pattern suitable for use with a real data signal core constructed according to FIG. 1A operated as a range and velocity core. In this instance, the frequency peak output from the Fourier transform represents the beat frequency of the composite signals that each includes a comparative signal beating against a reference signal. However, a real Fourier transform outputs a positive beat frequency and a negative beat frequency with the same magnitude. It can be ambiguous as to which beat frequency represents the correct value for the beat frequency of the composite signals. As a result, there are multiple possible solutions for the LIDAR data for an RV sample region illuminated by a system output signal from a LIDAR core constructed according to any of FIG. 1A through FIG. 1C and operated as a range and velocity core.


The beat frequencies from two or more different data periods that are associated with the same sample region can be combined to generate possible LIDAR data solutions for an RV sample region illuminated by a system output signal from a LIDAR core constructed according to any of FIG. 1A through FIG. 1C and operated as a range and velocity core. For instance, the beat frequency determined from DP1 during the illumination of sample region SRk can be combined with the beat frequency determined from DP2 during the illumination of sample region SRk to determine possible LIDAR data solutions for RV sample region SRk.


As noted above, the beat frequency of the composite signal during a data period where the frequency of the outgoing LIDAR signal increases can be represented by fub and the beat frequency of the composite signal during a data period where the frequency of the outgoing LIDAR signal decreases can be represented by fdb. The contribution of the range between the LIDAR system and the object to these beat frequencies can be represented by fr where fr=2*αu*Rk/c where αu represents a chirp rate (αm) for the data period where the frequency of the system output signal increases with time (α1 in this case), Rk represents the distance between the LIDAR system and an object in sample region SRk, and c represents the speed of light. The contribution of the Doppler effect to these beat frequencies can be represented by fd=2Vkfc/c where fc represents the base frequency (fo) and Vk is the radial velocity between the reflecting object in sample region k and the LIDAR system with the radial velocity assumed to be positive when the object is moving toward the LIDAR system.


There are three possible LIDAR data solutions for the combination of fr and fd values for an RV sample region. For instance, fr=(fub+fdb)/2 and fd=(fdb−fub)/2 can serve as a first solution. A second solution can be fr=(fdb−fub)/2 and fd=(fdb+fub)/2. A third solution can be fr=(fub−fdb)/2 and fd=−(fdb+fub)/2. As noted above, a real Fourier transform outputs a positive beat frequency and a negative beat frequency with the same magnitude. As a result, in these LIDAR data solutions, fub represents the magnitude of the peak frequency output from the transformer 268 during the data period where the frequency increases and fdb represents the magnitude of the peak frequency output from the transformer 268 during the data period where the frequency decreases. As noted above, the values of fd and fr are directly related to the radial velocity (Vk) and the range (Rk) for the RV sample region by fd=2Vkfc/c and fr=2*αu*Rk/c respectively. As a result, each of the possible LIDAR data solutions for the RV sample region SRk can have all or a portion of the components selected from the group consisting of an fr value, an fd value, a radial velocity (Vk) value, and a range (Rk) value. In some instances, each of the possible LIDAR data solutions includes at least a possible fr value and a possible fd value. In some instances, each of the possible LIDAR data solutions includes at least a possible radial velocity (Vk) value and a possible range (Rk) value for the RV sample region SRk.
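The three possible (fr, fd) solutions built from the beat-frequency magnitudes fub and fdb can be sketched as follows. The variable names follow the text; the function itself is an illustrative assumption.

```python
# Sketch of the three possible LIDAR data solutions given in the text, where
# fub and fdb are the magnitudes of the up-sweep and down-sweep beat
# frequency peaks output from the real Fourier transform.

def possible_solutions(fub, fdb):
    """Return the three possible (fr, fd) LIDAR data solutions."""
    return [
        ((fub + fdb) / 2.0, (fdb - fub) / 2.0),   # first solution
        ((fdb - fub) / 2.0, (fdb + fub) / 2.0),   # second solution
        ((fub - fdb) / 2.0, -(fdb + fub) / 2.0),  # third solution
    ]
```

Note that the second and third solutions have fr values of opposite sign, which is what allows negative-range candidates to be eliminated later.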



FIG. 12A illustrates an example of a process flow that the common electronics can use to identify the correct LIDAR data solution for a subject RV sample region. At process block 320, multiple possible LIDAR data solutions are calculated. For instance, a preliminary LIDAR data generator 269 can calculate a possible value of fr and/or possible value of the range (Rk) for the first solution, the second solution, and the third solution. At process block 321, the preliminary LIDAR data generator 269 can identify candidate LIDAR data solutions from among the possible LIDAR data solutions. For instance, the electronics can identify candidate fr values from among the possible fr values and/or candidate Rk values from among the possible Rk values. The value of Rk is positive by definition. As a result, the value of fr is also positive. However, the value of fr from the second solution is the negative of the fr value from the third solution. As a result, the possible fr and/or Rk values will include one or more negative values. The possible fr values and/or the Rk values with negative values can be removed from the pool of candidates. As a result, each of the possible fr values with a positive value can serve as a candidate fr value. Additionally or alternately, each of the possible Rk values with a positive value can serve as a candidate Rk value.


At process block 322, the LIDAR data generator 281 identifies the correct solution. For instance, when each of the possible LIDAR data solutions does not already include a radial velocity (Vk), a radial velocity (Vk) can be calculated for each solution using the fd value associated with the solution and fd=2Vkfc/c. The Vk results from the different solutions can be compared to the radial velocity result from one or more V sample regions to identify the correct Vk result. The LIDAR data solution that includes the identified Vk result can be selected as the correct LIDAR data solution. Accordingly, the Vk value of the selected LIDAR data solution can be selected as the correct Vk value.


At process block 323, the LIDAR data generator 281 can determine the LIDAR data for the RV sample region associated with sample region index k. For instance, when the selected possible LIDAR data solution does not already include a range (Rk), the range (Rk) can be calculated for the selected LIDAR data solution using the fr value associated with the selected LIDAR data solution and fr=2*αu*Rk/c. The values of Vk and Rk associated with the selected LIDAR data solution can serve as shared LIDAR data for the RV sample region SRk. Accordingly, the LIDAR data for the RV sample region associated with sample region index k can include or consist of the values of Vk and Rk selected in process block 322.
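Process blocks 320 through 323 can be sketched end to end as follows: each possible (fr, fd) pair is converted to (Rk, Vk) using fr = 2*αu*Rk/c and fd = 2*Vk*fc/c, negative-range candidates are discarded, and the remaining candidate whose Vk lies closest to the radial velocity from a reference V sample region is selected. The function name and data layout are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light (m/s)

# Illustrative sketch of process blocks 320-323. `solutions` is a list of
# possible (fr, fd) pairs, alpha_u is the up-sweep chirp rate, f_c the base
# frequency, and v_reference a radial velocity from a reference V sample
# region. Returns the selected (Rk, Vk), or None if no candidate survives.

def select_solution(solutions, alpha_u, f_c, v_reference):
    candidates = []
    for fr, fd in solutions:
        r_k = C * fr / (2.0 * alpha_u)  # range from fr = 2*alpha_u*Rk/c
        v_k = C * fd / (2.0 * f_c)      # radial velocity from fd = 2*Vk*fc/c
        if r_k > 0:                     # the range is positive by definition
            candidates.append((r_k, v_k))
    if not candidates:
        return None  # LIDAR data "not available"
    # keep the candidate whose Vk is closest to the reference radial velocity
    return min(candidates, key=lambda rv: abs(rv[1] - v_reference))
```

The chirp rate and base frequency used in a real system would come from the operating parameters of the range and velocity core; the values in the test below are chosen only to make the arithmetic transparent.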


A LIDAR system can include one or more range and velocity cores that are real data signal cores and one or more velocity cores that are real data signal cores. As an example, FIG. 12B illustrates a LIDAR system that includes multiple different real data signal cores on a common support 140. One or more of the LIDAR cores can be a velocity core constructed as disclosed in the context of FIG. 11A through FIG. 11C and one or more of the LIDAR cores can be a range and velocity core constructed as disclosed in the context of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C. For the purpose of illustration, the LIDAR cores in the LIDAR system of FIG. 12B include two velocity cores 325 constructed as disclosed in the context of FIG. 11A through FIG. 11C and also includes two range and velocity cores 324 constructed as disclosed in the context of any of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C. The range and velocity cores 324 alternate with the velocity cores 325.



FIG. 12C is a two-dimensional illustration of the field of view for the LIDAR system of FIG. 12B. For instance, FIG. 12C can represent a projection of a three-dimensional field of view onto a two-dimensional plane. The field of view includes multiple rectangles that can each represent the projection of one of the sample regions in the field of view onto the plane. The sample regions are shown as rectangular for the purposes of illustration and can have other geometries. The sample regions are each labeled SRk where k is from 1 to 30.


The sample regions are each labeled Ci where i is a core index that identifies the core that illuminated the sample region. The core index is an integer that can extend from i=1 to i=M where M is the number of system output signals output by the LIDAR system. The labels Ci indicate which of the system output signals illuminated the sample region. For instance, the sample regions labeled C1 were illuminated by the range and velocity core labeled C1 in FIG. 12B and the sample regions labeled C3 were illuminated by the range and velocity core labeled C3 in FIG. 12B. Accordingly, the sample regions labeled C1 and C3 are RV sample regions. In contrast, the sample regions labeled C2 were illuminated by the velocity core labeled C2 and the sample regions labeled C4 were illuminated by the velocity core labeled C4. As a result, the sample regions labeled C2 and C4 are V sample regions.



FIG. 12C includes an arrow labeled A. The arrow indicates the sequence in which the system output signals are scanned through the field of view. Additionally, the sample region indices (k) are assigned to the sample regions chronologically. Accordingly, the sample region labeled with C2 and SR11 was scanned by the velocity core C2 before the sample regions labeled with C2 that have higher sample region indices were scanned by the velocity core C2.


The local electronics 32 can be local electronics in that they are specific to each of the cores on the LIDAR system. The LIDAR system can include common electronics 280 in addition to the local electronics 32. The common electronics 280 can be positioned on the common support 140 as shown in FIG. 12B or can be located off of the common support 140. The common electronics 280 can be in the same physical location and/or housing as the local electronics associated with different cores or can be in different locations and/or housings from the local electronics 32 associated with different cores.


The LIDAR data generator 281 can combine possible LIDAR data solutions with isolated LIDAR data from one or more V sample regions so as to select the correct LIDAR data solution as disclosed in the context of FIG. 12A. In some instances, the RV sample region associated with the possible LIDAR data solutions and the one or more V sample regions can be illuminated by a system output signal that includes light from the same core or from different cores.


To identify the correct possible LIDAR data solution, the possible LIDAR data solutions for a subject RV sample region can be compared to the radial velocity calculated for one or more V sample regions that each serves as a reference V sample region. All or a portion of the reference V sample regions and the subject sample region can be illuminated by the same cores or by different cores. For instance, the electronics can identify a subject RV sample region for which a correct LIDAR data solution is to be identified. FIG. 12D shows the field of view from FIG. 12C where the RV sample region that is labeled SR15 and is illuminated by a system output signal that includes light from velocity core C4 is identified as the subject RV sample region.


One or more V sample regions can each be identified as a reference V sample region. The possible LIDAR data solutions for the subject RV sample region can be compared to the radial velocity (Vk) calculated for each of the one or more reference V sample regions so as to identify the correct LIDAR data solution for the subject RV sample region from among the possible LIDAR data solutions for the subject RV sample region. In some instances, the V sample region that is physically closest to the subject RV sample region is identified as the only reference V sample region. As an example, in FIG. 12D, the V sample region that is labeled SR5 and is illuminated by a system output signal that includes light from velocity core C1 is identified as the only reference V sample region.


The LIDAR data generator 281 can employ one or more solution identification criteria to identify the correct LIDAR data solution for the subject RV sample region from among the possible LIDAR data solutions for the subject RV sample region. An example solution identification criterion includes a proximate velocity criterion where the possible LIDAR data solution having the radial velocity that is closest to the radial velocity of one or more of the reference V sample regions is identified as the correct LIDAR data solution. For instance, the V sample region that is physically closest to the subject RV sample region can be identified as the only reference V sample region and the possible LIDAR data solution having the radial velocity Vk that is closest to the radial velocity of the reference V sample region can be selected as the correct LIDAR data solution for the subject RV sample region. Accordingly, the radial velocity Vk and distance Rk associated with the selected LIDAR data solution can serve as the LIDAR data for the subject RV sample region.


The LIDAR data generator 281 can apply other identification criteria in addition or as an alternative to the proximate velocity criterion. Another example of an identification criterion can be a range criterion where the radial velocity Vk for each of the identified reference V sample regions is within a range of radial velocity values. Examples of suitable ranges for radial velocity values include, but are not limited to, the radial velocity of the subject RV sample region +/−10%, 20%, or 30% of the radial velocity of the subject RV sample region. In the event that the radial velocity (Vk) for one of the identified reference V sample regions falls outside of the range for radial velocity values, the reference V sample region can be removed from the identified reference V sample regions. In the event that none of the identified reference V sample regions has a radial velocity that falls within the range for radial velocity values, the LIDAR data for the subject RV sample region can be classified as unavailable. In some instances, the LIDAR data generator 281 applies both a range criterion and a proximate velocity criterion to a subject RV sample region.
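The range criterion described above can be sketched as follows. The function name and the 20% default tolerance are illustrative assumptions; the tolerance band around the subject radial velocity follows the +/−10%, 20%, or 30% examples in the text.

```python
# Sketch of the range criterion: a reference V sample region is kept only if
# its radial velocity lies within a tolerance band (default +/-20%) around the
# radial velocity associated with the subject RV sample region.

def filter_references(v_subject, reference_velocities, tolerance=0.20):
    """Return the reference radial velocities that satisfy the range criterion."""
    lo = v_subject - abs(v_subject) * tolerance
    hi = v_subject + abs(v_subject) * tolerance
    kept = [v for v in reference_velocities if lo <= v <= hi]
    return kept  # an empty list -> LIDAR data classified as unavailable
```

A proximate velocity criterion could then be applied to the surviving reference velocities, mirroring the case in the text where both criteria are used together.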



FIG. 12E illustrates a process flow for an example process for generating LIDAR data for the sample regions illuminated by a LIDAR system having real data signal cores such as the LIDAR system illustrated in FIG. 12B. At process block 326, the field of view for the LIDAR system is scanned.


At process block 327, the isolated LIDAR data for each of the V sample regions can be calculated. As noted above, the isolated LIDAR data for a sample region is the LIDAR data that can be calculated without use of a composite signal that results from illuminating a different sample region with a system output signal. For instance, to generate the isolated LIDAR data, the preliminary LIDAR data generator 269 for each of the velocity cores can calculate the radial velocity for each of the V sample regions illuminated by the velocity core.


At process block 328, the preliminary LIDAR data generator 269 for each of the range and velocity cores can calculate the possible LIDAR data for each of the RV sample regions illuminated by one of the range and velocity cores. For instance, the preliminary LIDAR data generator 269 for each of the range and velocity cores can calculate the possible LIDAR data solutions for each of the RV sample regions illuminated by one of the range and velocity cores as disclosed in the context of FIG. 12A.


At process block 329, the correct LIDAR data solution can be selected for each of the RV sample regions illuminated by each of the range and velocity cores. For instance, the LIDAR data generator 281 can select the correct LIDAR data solution for an RV sample region from among the possible LIDAR data solutions as disclosed in the context of FIG. 12A. The radial velocity (Vk) and the distance (Rk) associated with the selected LIDAR data solution can be considered shared LIDAR data that serves as all or a portion of the LIDAR data for the subject RV sample region SRk.


The LIDAR data for the RV sample regions can serve as the range and radial velocity for each of the RV sample regions scanned by the RV cores in process block 300 of FIG. 10. Accordingly, the preliminary LIDAR data generators 269 or the LIDAR data generators 281 can calculate the radial velocity and coordinates of the field locations associated with the RV sample regions scanned by those cores as disclosed in the context of process block 300. Additionally, the isolated LIDAR data for the V sample regions can serve as the radial velocity for each of the V sample regions scanned by the V cores in process block 300. As a result, in some instances, the remainder of the process illustrated in FIG. 10 is executed so as to interpolate and/or extrapolate the range (Rk) for each of the V sample regions illuminated by the velocity core. As an example, at process block 310, the LIDAR data generator 281 can combine the range data (Rk) in the selected LIDAR data solutions from different RV sample regions so as to approximate the range data (Rk) for each of the V sample regions illuminated by one of the velocity cores. For instance, the LIDAR data generator 281 can interpolate and/or extrapolate the range data (Rk) for each of the V sample regions from the coordinates and/or range data (Rk) from different RV sample regions as disclosed above. The LIDAR data for the sample regions of the field of view can include or consist of the LIDAR data for the RV sample regions, the isolated LIDAR data for each of the V sample regions (Vk) and the approximated range data (Rk) for each of the V sample regions.


The fields of view illustrated in FIG. 7 and FIG. 12C illustrate the sample regions as spaced apart from one another; however, the sample regions can partially or fully overlap. For instance, all or a portion of the V sample regions can partially or fully overlap an RV sample region or be partially or fully overlapped by an RV sample region. As a result, all or a portion of the sample regions can be concurrently illuminated by a system output signal that includes light from an RV core and illuminated by a system output signal that includes light from a V core.


Although the LIDAR system of FIG. 12B is disclosed as having real data signal cores where the velocity cores are constructed as disclosed in the context of FIG. 11A through FIG. 11C, a portion of the cores can be complex data signal cores. For instance, the velocity cores can be constructed as disclosed in any of FIG. 1A through FIG. 5C. The range and velocity cores 324 can alternate with the velocity cores 325.


The real data signal cores that are disclosed in the context of FIG. 11A through FIG. 11C and that serve as V cores include a frequency shifter. However, the frequency shifter can be removed. For instance, the velocity cores in FIG. 12B can be constructed as disclosed in any of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C to provide a real data signal core that can be operated as a V core. For instance, the system output signal can be a continuous wave (CW). As an example, the outgoing LIDAR signal, and accordingly the system output signal, can be an unchirped continuous wave (CW). In some instances, the outgoing LIDAR signal, and accordingly the system output signal, can be represented by Equation 2: G*cos(H*t) where G and H are constants and t represents time. In some instances, G represents the square root of the power of the outgoing LIDAR signal. In these instances, the real Fourier transform outputs frequency peaks at a positive beat frequency and a negative beat frequency with the same magnitude. For instance, the real Fourier transform can output frequency peaks at beat frequency values of fd and −fd where fd represents the Doppler frequency shift and can be expressed as fd=2Vkfc/c with fc representing the frequency of the continuous wave, c representing the speed of light, and Vk being the radial velocity between the LIDAR system and a reflecting object in sample region SRk, where the direction from the reflecting object toward the LIDAR system is assumed to be the positive direction. Since it can be ambiguous which beat frequency represents the correct value for the beat frequency of the composite signals, there are two possible radial velocity solutions, one at Vk and one at −Vk. As a result, there are multiple potential solutions for the radial velocity for a V sample region illuminated by a system output signal from a real data signal core constructed as disclosed in any of FIG. 1A through FIG. 1C.
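The two-solution ambiguity can be illustrated with a short Python sketch that applies fd=2Vkfc/c to a measured beat-frequency magnitude. The function name and the example carrier frequency are illustrative assumptions, not part of the disclosure:

```python
C = 299_792_458.0  # speed of light in m/s

def radial_velocity_solutions(beat_frequency_hz, carrier_frequency_hz):
    """Return both radial velocity solutions (Vk and -Vk, in m/s)
    implied by a beat-frequency magnitude, using fd = 2*Vk*fc/c.
    The sign ambiguity arises because the real Fourier transform
    produces peaks at both +fd and -fd with the same magnitude."""
    vk = beat_frequency_hz * C / (2.0 * carrier_frequency_hz)
    return vk, -vk
```

For example, at a carrier frequency near 193.4 THz (roughly a 1550 nm source), a 1 MHz beat frequency corresponds to the two candidate radial velocities of about +0.78 m/s and −0.78 m/s.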



FIG. 13A illustrates an example of a process flow that the electronics can use to identify the correct radial velocity solution for a V sample region SRk when LIDAR data solutions are available from RV sample regions. At process block 340, a preliminary LIDAR data generator 269 calculates a velocity magnitude indicator from a radial velocity indicator for sample region SRk. The magnitude of a potential radial velocity indicator indicates the magnitude of the radial velocity between the LIDAR system and an object in V sample region SRk. For instance, the magnitude of a potential radial velocity indicator sets the magnitude of the radial velocity. As a result, the radial velocity indicator can be a calculation of the radial velocity (Vk) value provided by fd=2Vkfc/c where fd can represent either of the peak frequencies output from the Fourier transform. Since either of the Doppler frequency shifts fd is directly related to the magnitude of the radial velocity by |fd|=|2Vkfc/c|, either of the beat frequencies (fd), or Doppler frequency shifts (fd), can also serve as a radial velocity indicator. The preliminary LIDAR data generator 269 can identify the magnitude of the radial velocity indicator as the velocity magnitude indicator.


At process block 342, the LIDAR data generator 281 can identify the direction of the radial velocity indicator for V sample region SRk. For instance, the velocity magnitude indicator for V sample region SRk can be compared to a comparative component of LIDAR data solutions from one or more RV sample regions to identify the direction of the radial velocity indicator. As an example, when the velocity magnitude indicator for V sample region SRk matches the magnitude of a possible radial velocity solution from a neighboring RV sample region, it is likely the system output signals that illuminated the different sample regions were incident on the same object. As a result, the direction of the possible radial velocity solution that matched the velocity magnitude indicator is assigned to the velocity magnitude indicator to provide a directional radial velocity indicator. Accordingly, the directional radial velocity indicator has the magnitude of the velocity magnitude indicator and the direction of the comparative component from the matched LIDAR data solution. As an example where the radial velocity indicator for V sample region SRk is a radial velocity (Vk) calculation that provides a velocity magnitude indicator of 20 mph and matches with a possible radial velocity solution of −20 mph, the negative value of the possible radial velocity solution is assigned to the velocity magnitude indicator to provide a directional radial velocity indicator value of −20 mph.


At process block 344, the LIDAR data generator 281 can determine the radial velocity for V sample regions SRk. When the velocity magnitude indicator is the magnitude of a radial velocity calculation, the directional radial velocity indicator can serve as the radial velocity for sample regions SRk. When the velocity magnitude indicator is the magnitude of something other than a radial velocity calculation, the radial velocity can be calculated from the directional radial velocity indicator. For instance, when the velocity magnitude indicator is the magnitude of either of the beat frequencies (fd), the radial velocity (Vk) can be calculated from Cfd=2Vkfc/c where Cfd represents the directional radial velocity indicator.
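The flow of process blocks 340 through 344 can be sketched in Python as follows. The function name, the data layout, and the 10% matching tolerance are illustrative assumptions rather than part of the disclosure:

```python
def resolve_v_region_velocity(radial_velocity_indicator, rv_comparatives, tol=0.1):
    """Sketch of process blocks 340-344: take the magnitude of the
    radial velocity indicator (block 340), match it against the
    comparative components of neighboring RV-region LIDAR data
    solutions to identify the direction (block 342), and return the
    signed radial velocity (block 344). Returns None when no
    comparative component matches within the tolerance."""
    magnitude = abs(radial_velocity_indicator)   # process block 340
    for comparative in rv_comparatives:          # process block 342
        if abs(abs(comparative) - magnitude) <= tol * magnitude:
            direction = 1.0 if comparative >= 0 else -1.0
            return direction * magnitude         # process block 344
    return None
```

As in the example above, a velocity magnitude indicator of 20 mph that matches a comparative component of −20 mph yields a directional radial velocity indicator of −20 mph.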


One or more of the LIDAR cores in the LIDAR system of FIG. 12B can be constructed as disclosed in the context of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C and operated as a range and velocity core while one or more of the LIDAR cores in the LIDAR system is constructed as disclosed in any of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C and operated as a V core. As an example, FIG. 13B illustrates a LIDAR system that includes multiple different real data signal cores on a common support 140. One or more of the LIDAR cores can be a velocity core constructed as disclosed in any of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C and one or more of the LIDAR cores can be a range and velocity core constructed as disclosed in the context of FIG. 1A through FIG. 1C with a signal processor of FIG. 11B in combination with the electronics of FIG. 11C. For the purpose of illustration, the LIDAR cores in the LIDAR system of FIG. 13B include two velocity cores 325 and two range and velocity cores 324. The range and velocity cores 324 alternate with the velocity cores 325.



FIG. 13C is a two-dimensional illustration of the field of view for the LIDAR system of FIG. 13B. For instance, FIG. 13C can represent a projection of a three-dimensional field of view onto a two-dimensional plane. The field of view includes multiple rectangles that can each represent the projection of one of the sample regions in the field of view onto the plane. The sample regions are shown as rectangular for the purposes of illustration and can have other geometries. The sample regions are each labeled SRk where k is from 1 to 30.


The sample regions are each labeled Ci where i is a core index that identifies the core that illuminated the sample region. The core index is an integer that can extend from i=1 to i=M where M is the number of system output signals output by the LIDAR system. The labels Ci indicate which of the system output signals illuminated the sample region. For instance, the sample regions labeled C1 were illuminated by the range and velocity core labeled C1 in FIG. 13B and the sample regions labeled C3 were illuminated by the range and velocity core labeled C3 in FIG. 13B. Accordingly, the sample regions labeled C1 and C3 are RV sample regions. In contrast, the sample regions labeled C2 were illuminated by the velocity core labeled C2 and the sample regions labeled C4 were illuminated by the velocity core labeled C4. As a result, the sample regions labeled C2 and C4 are V sample regions.



FIG. 13C includes an arrow labeled A. The arrow indicates the sequence in which the system output signals are scanned through the field of view. Additionally, the sample region indices (k) are assigned to the sample regions chronologically. Accordingly, the sample region labeled with C2 and SR11 was scanned by the velocity core C2 before the C2-labeled sample regions with higher sample region indices were scanned by the velocity core C2.


The local electronics 32 can be local electronics in that they are specific to each of the cores on the LIDAR system. The LIDAR system can include common electronics 280 in addition to the local electronics 32. The common electronics 280 can be positioned on the common support 140 as shown in FIG. 13C or can be located off of the common support 140. The common electronics 280 can be in the same physical location and/or housing as the local electronics associated with different cores or can be in different locations and/or housings from the local electronics 32 associated with different cores.


The common electronics can be in electrical communication with the local electronics 32 associated with different cores as disclosed in the context of FIG. 8. As a result, the common electronics 280 can access LIDAR data generated from different cores. For instance, the common electronics 280 can access the preliminary LIDAR data that the local electronics 32 calculate for the sample regions.


The electronics can combine possible LIDAR data solutions with a radial velocity indicator and/or velocity magnitude indicator from one or more V sample regions so as to identify the correct LIDAR data for the RV sample regions and the V sample regions in the field of view.


To identify the correct possible LIDAR data solution for an RV sample region and the correct direction for a radial velocity indicator, the electronics can compare possible LIDAR data solutions for a subject RV sample region and the radial velocity magnitude indicator for one or more V sample regions that each serves as a reference V sample region.


All or a portion of the reference V sample regions and a subject RV sample region can be illuminated by the same cores or by different cores. For instance, the electronics, such as the LIDAR data generator 281, can identify a subject RV sample region for which a correct one of the LIDAR data solutions is to be identified. FIG. 13D shows the field of view from FIG. 13C where the RV sample region that is labeled SR15 and is illuminated by a system output signal that includes light from one of the range and velocity cores is identified as the subject RV sample region.


The LIDAR data generator 281 can identify one or more V sample regions as a reference V sample region. The LIDAR data generator 281 can compare possible LIDAR data solutions for the subject RV sample region to the velocity magnitude indicator calculated for each of the one or more reference V sample regions so as to identify the correct LIDAR data solution for the subject RV sample region from among the possible LIDAR data solutions for the subject RV sample region. In some instances, the V sample region that is physically closest to the subject RV sample region is identified as the only reference V sample region. As an example, in FIG. 13D, the V sample region that is labeled SR5 and is illuminated by a system output signal that includes light from one of the velocity cores is identified as the only reference V sample region.


The LIDAR data generator 281 can employ one or more solution identification criteria to identify the correct LIDAR data solution for the subject RV sample region from among the possible LIDAR data solutions for the subject RV sample region and/or to identify the correct direction of the radial velocity indicator for a reference V sample region. An example identification criterion is a proximate velocity magnitude criterion that requires a match between the velocity magnitude indicator for a reference V sample region SRk and a comparative component of a LIDAR data solution for a subject RV sample region. For instance, the possible LIDAR data solutions for the subject RV sample region have a component that serves as a comparative component that is an equivalent of the radial velocity indicator and can be compared to the velocity magnitude indicator. For instance, when the velocity magnitude indicator is the magnitude of a radial velocity calculation, each of the possible LIDAR data solutions for the subject RV sample region includes a radial velocity (Vk) value that can serve as the comparative component. As another example, when the velocity magnitude indicator is the magnitude of a Doppler frequency shift, each of the possible LIDAR data solutions for the subject RV sample region includes an fd value that can serve as the comparative component. The LIDAR data solution that is for the subject RV sample region and that has the comparative component with the magnitude that is closest to the velocity magnitude indicator of the reference V sample regions can be selected as the correct LIDAR data solution for the subject RV sample region. Accordingly, the radial velocity Vk and distance Rk associated with the selected LIDAR data solution can serve as the LIDAR data for the subject RV sample region.
Additionally, the direction of the comparative component in the selected LIDAR data solution can serve as the direction of the radial velocity indicator for the reference V sample region associated with the radial velocity indicator. As a result, the direction of the comparative component in the selected LIDAR data solution is assigned to the radial velocity indicator to provide a directional radial velocity indicator. Accordingly, the directional radial velocity indicator has the magnitude of the velocity magnitude indicator and the direction (positive or negative) of the comparative component in the selected LIDAR data solution. As an example where the radial velocity indicator is a radial velocity (Vk) calculation that provides a velocity magnitude indicator of 20 mph and the comparative component in the selected LIDAR data solution has a value of −20 mph, the negative value of the comparative component is assigned to the velocity magnitude indicator to provide a directional radial velocity indicator value of −20 mph.


The LIDAR data generator 281 can apply other solution identification criteria in addition or as an alternative to the proximate velocity magnitude criterion. Another example of an identification criterion can be a range criterion where the comparative component in each of the LIDAR data solutions for the subject sample region is required to have a value within a range of comparative component values. In some instances, ranges for the magnitude of the comparative component values include, but are not limited to, the value of the velocity magnitude indicator of the reference V sample region +/−10%, 20%, or 30% of the velocity magnitude indicator of the reference V sample region. In some instances, ranges for the magnitude of the comparative component values include, but are not limited to, the value of the velocity magnitude indicator of the reference V sample region +/−a constant value. In the event that the magnitude of the comparative component value for a subject RV sample region falls outside of the range, the LIDAR data solution that includes the comparative component value can be removed from the list of possible LIDAR data solutions. In the event that none of the possible LIDAR data solutions for a subject RV sample region falls within the range, the LIDAR data for the subject RV sample region can be classified as unavailable. In some instances, the LIDAR data generator 281 applies both a range criterion and a proximate velocity magnitude criterion to a subject RV sample region.
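A range criterion of the kind described above can be sketched as follows. The dictionary layout, field name, and default tolerance are illustrative assumptions:

```python
def apply_range_criterion(solutions, velocity_magnitude_indicator, frac=0.2):
    """Sketch of a range criterion: keep only the LIDAR data solutions
    whose comparative-component magnitude lies within +/- frac of the
    reference V sample region's velocity magnitude indicator. An empty
    result corresponds to classifying the LIDAR data as unavailable."""
    lo = velocity_magnitude_indicator * (1.0 - frac)
    hi = velocity_magnitude_indicator * (1.0 + frac)
    return [s for s in solutions if lo <= abs(s["comparative"]) <= hi]
```

With a velocity magnitude indicator of 20 and a +/−20% range, a solution with a comparative component of −20 is retained while one with a comparative component of 35 is removed.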



FIG. 13E illustrates an example of a process flow that the LIDAR data generator 281 can use to identify a correct LIDAR data solution for a subject RV sample region and to determine the correct radial velocity for a reference V sample region. At process block 346, the LIDAR data generator 281 identifies a subject RV sample region. The LIDAR data generator 281 also identifies one or more reference V sample regions. In some instances, the V sample region that is physically closest to the subject RV sample region is identified as the only reference V sample region.


At process block 347, the LIDAR data generator 281 can access the possible LIDAR data solutions that a preliminary LIDAR data generator calculated for a subject RV sample region as disclosed in the context of FIG. 12A. For instance, the LIDAR data generator 281 can access a possible value for fr and/or a possible value of the range (Rk) for the first solution, the second solution, and the third solution. At process block 348, the LIDAR data generator 281 can identify candidate LIDAR data solutions from among the possible LIDAR data solutions. For instance, the LIDAR data generator 281 can identify candidate fr values from among the possible fr values and/or candidate Rk values from among the possible Rk values. The value of Rk is positive by definition. As a result, the value of fr is also positive. However, the value of fr from the second solution is the negative of the fr value from the third solution. As a result, the possible fr and/or Rk values will include one or more negative values. The possible fr values and/or Rk values with negative values can be removed from the pool of candidates. As a result, each of the possible fr values with a positive value can serve as a candidate fr value. Additionally or alternately, each of the possible Rk values with a positive value can serve as a candidate Rk value.
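The candidate-identification step of process block 348 can be sketched as follows (the field names are illustrative assumptions):

```python
def candidate_solutions(possible_solutions):
    """Sketch of process block 348: since Rk (and therefore fr) is
    positive by definition, discard any possible LIDAR data solution
    whose fr or Rk value is negative."""
    return [s for s in possible_solutions if s["fr"] > 0 and s["Rk"] > 0]
```

Because the fr value of the second solution is the negative of the fr value of the third solution, this filter always removes at least one of those two solutions from the candidate pool.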


At process block 349, the LIDAR data generator 281 can identify the correct solution. The radial velocity (Vk) component and/or the fd component of each LIDAR data solution can serve as the comparative component for the LIDAR data solution. The component that serves as the comparative component can be determined by the velocity magnitude indicator. For instance, as noted above, when the velocity magnitude indicator is the magnitude of a radial velocity calculation, each of the possible LIDAR data solutions for the subject RV sample region includes a radial velocity (Vk) value that can serve as the comparative component, but when the velocity magnitude indicator is the magnitude of a Doppler frequency shift, each of the possible LIDAR data solutions for the subject RV sample region includes an fd value that can serve as the comparative component. When the radial velocity (Vk) value serves as the comparative component and a possible LIDAR data solution does not already have a radial velocity (Vk) component, a radial velocity (Vk) can be calculated for each of the possible LIDAR data solutions using the fd value associated with the LIDAR data solution and fd=2Vkfc/c.


The LIDAR data generator 281 can use the one or more solution identification criteria to compare the comparative component from the different LIDAR data solutions to the velocity magnitude indicator from one or more V sample regions so as to identify the correct LIDAR data solution. The LIDAR data generator 281 can select the LIDAR data solution that includes the identified comparative component as the correct LIDAR data solution.


At process block 350, the LIDAR data generator 281 identifies the LIDAR data for the subject RV sample region associated with sample region index k. For instance, when the components of the selected possible LIDAR data solution do not already include a range (Rk), the range (Rk) can be calculated for the selected LIDAR data solution using the fr value associated with the selected LIDAR data solution and fr=2*αu*Rk/c. Additionally, when the components of the selected possible LIDAR data solution do not already include a radial velocity (Vk), the radial velocity (Vk) can be calculated for the selected LIDAR data solution using the fd value associated with the selected LIDAR data solution and fd=2Vkfc/c. The values of Vk and Rk associated with the selected LIDAR data solution can serve as shared LIDAR data for the subject RV sample region SRk. Accordingly, the LIDAR data for the RV sample region associated with sample region index k can include or consist of the values of Vk and Rk selected in process block 349.
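The two conversions used at process block 350 can be sketched as follows, assuming αu represents the chirp rate of the system output signal (that interpretation, and the function names, are assumptions made for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_fr(fr_hz, chirp_rate_hz_per_s):
    """Invert fr = 2*alpha_u*Rk/c to recover the range Rk in meters."""
    return fr_hz * C / (2.0 * chirp_rate_hz_per_s)

def velocity_from_fd(fd_hz, carrier_frequency_hz):
    """Invert fd = 2*Vk*fc/c to recover the radial velocity Vk in m/s."""
    return fd_hz * C / (2.0 * carrier_frequency_hz)
```

For example, with an assumed chirp rate of 1e13 Hz/s, an fr value of 1 MHz corresponds to a range of roughly 15 m.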


At process block 351, the LIDAR data generator 281 can determine the radial velocity for the reference V sample regions. The direction of the radial velocity indicator can be identified. For instance, the direction of the comparative component in the selected LIDAR data solution can serve as the direction of the radial velocity indicator for the reference V sample region associated with the radial velocity indicator. As a result, the direction of the comparative component in the selected LIDAR data solution is assigned to the radial velocity indicator to provide a directional radial velocity indicator. Accordingly, the directional radial velocity indicator has the magnitude of the velocity magnitude indicator and the direction (positive or negative) of the comparative component in the selected LIDAR data solution. When the velocity magnitude indicator is the magnitude of a radial velocity calculation, the directional radial velocity indicator can serve as the radial velocity (Vk) for sample regions SRk. When the velocity magnitude indicator is the magnitude of something other than a radial velocity calculation, the radial velocity can be calculated from the directional radial velocity indicator. For instance, when the velocity magnitude indicator is the magnitude of either of the beat frequencies (fd), the radial velocity (Vk) can be calculated from Cfd=2Vkfc/c where Cfd represents the directional radial velocity indicator.



FIG. 13F illustrates a process flow for generating LIDAR data for the sample regions illuminated by a LIDAR system having real data signal cores constructed as disclosed in the context of FIG. 13B. At process block 352, the electronics cause the field of view for the LIDAR system to be scanned by the system output signals of the different cores.


At process block 353, the preliminary LIDAR data generators in the one or more velocity cores calculate the velocity magnitude indicator for each of the V sample regions. At process block 354, the preliminary LIDAR data generators in the one or more range and velocity cores calculate the possible LIDAR data solutions for each of the RV sample regions illuminated by one of the range and velocity cores. For instance, the preliminary LIDAR data generators 269 can calculate the possible LIDAR data solutions for each of the RV sample regions illuminated by one of the range and velocity cores.


At process block 355, the LIDAR data generator 281 identifies a subject RV sample region. The LIDAR data generator 281 also identifies one or more reference V sample regions. In some instances, the V sample region that is physically closest to the subject RV sample region is identified as the only reference V sample region.


At process block 356, the LIDAR data generator 281 identifies the correct LIDAR data solution for the subject RV sample region. For instance, the LIDAR data generator 281 can select the correct LIDAR data solution for an RV sample region from among the possible LIDAR data solutions as disclosed in the context of FIG. 13E. The radial velocity (Vk) and the distance (Rk) associated with the selected LIDAR data solution can be considered shared LIDAR data that serves as all or a portion of the LIDAR data for the subject RV sample region SRk.


At process block 358, the LIDAR data generator 281 can determine the radial velocity (Vk) for each of the reference V sample regions. For instance, the common electronics 280 can determine the radial velocity (Vk) for each of the reference V sample regions as disclosed in the context of FIG. 13E.


Process block 355 through process block 358 can be repeated until each of the RV sample regions has served as a subject RV sample region, or until the desired portion of the RV sample regions has served as a subject RV sample region, and until each of the V sample regions has served as a reference V sample region, or until the desired portion of the V sample regions has served as a reference V sample region. FIG. 13C illustrates a 1:1 ratio of V sample regions to RV sample regions. In this instance, each subject RV sample region can be associated with a different V sample region that serves as the reference V sample region for the subject RV sample region. As a result, the flow illustrated in FIG. 13F can be completed for a field of view with each of the RV sample regions serving as a subject sample region and each of the V sample regions serving as a reference sample region for an associated one of the RV sample regions. In some instances, multiple V sample regions can serve as a reference sample region for an associated one of the RV sample regions. In these instances, the V sample regions that serve as a reference sample region for an associated one of the RV sample regions can include the V sample region that is located closest to the associated subject RV sample region.



FIG. 12C and FIG. 13C each illustrate the sample regions illuminated by different cores as being spatially separated, however, adjacent sample regions can fully or partially overlap. As an example, FIG. 13G illustrates a portion of the field of view from FIG. 12C or FIG. 13C where the sample regions illuminated by different cores partially overlap such that each of the V sample regions is overlapped by a different one of the RV sample regions. In some instances, the V sample region that is located closest to the subject RV sample region is the V sample region that most overlaps the subject RV sample region. Accordingly, the V sample regions that serve as a reference sample region for an associated subject RV sample region can include the V sample region that is most overlapped by the associated subject RV sample region.


The LIDAR data for the RV sample regions can serve as the range and radial velocity for each of the RV sample regions scanned by the RV cores in process block 300 of FIG. 10. Accordingly, the preliminary LIDAR data generators 269 or the LIDAR data generators 281 can calculate the radial velocity and coordinates of the field locations associated with the RV sample regions scanned by those cores as disclosed in the context of process block 300. Additionally, the radial velocities determined for the V sample regions can serve as the radial velocity for each of the V sample regions scanned by the V cores in process block 300. As a result, in some instances, the remainder of the process illustrated in FIG. 10 is executed so as to interpolate and/or extrapolate the range (Rk) for each of the V sample regions illuminated by the velocity core. As an example, at process block 330, the common electronics 280 can combine the range data (Rk) in the selected LIDAR data solutions from different RV sample regions so as to approximate the range data (Rk) for each of the V sample regions illuminated by one of the velocity cores. For instance, the common electronics 280 can interpolate and/or extrapolate the range data (Rk) for each of the V sample regions from the coordinates and/or range data (Rk) from different RV sample regions as disclosed above. The LIDAR data for the sample regions of the field of view can include or consist of the LIDAR data for the RV sample regions, the isolated LIDAR data for each of the V sample regions (Vk) and the approximated range data (Rk) for each of the V sample regions.


Example 1

A LIDAR system has a two-dimensional field of view. The field location fl1 is located in RV sample region SR1 and is identified as a known data point with coordinates R1=10 m and an angular orientation of θ1=10°. The field location fl10 is located in RV sample region SR10 and is identified as a known data point with coordinates R10=16 m and an angular orientation of θ10=12°. The field location fl15 is selected as a subject field location and is located in V sample region SR15 with an angular orientation of θ15=11°. The range R15 for field location fl15 is interpolated from the known data points fl1 and fl10 to be R15=13 m using a linear interpolation where R15=R1+(R10−R1)(θ15−θ1)/(θ10−θ1).
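The linear interpolation of Example 1 can be reproduced with a short sketch (the function name is an illustrative assumption):

```python
def interpolate_range(r1, theta1, r2, theta2, theta):
    """Linear interpolation of range against angular orientation:
    R = R1 + (R2 - R1) * (theta - theta1) / (theta2 - theta1)."""
    return r1 + (r2 - r1) * (theta - theta1) / (theta2 - theta1)

# Reproducing Example 1: R1=10 m at 10 degrees, R10=16 m at 12 degrees,
# interpolated at theta15=11 degrees.
r15 = interpolate_range(10.0, 10.0, 16.0, 12.0, 11.0)  # 13.0 m
```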


Suitable platforms for the LIDAR chips include, but are not limited to, silica, indium phosphide, and silicon-on-insulator wafers. FIG. 14 is a cross-section of a portion of a chip constructed from a silicon-on-insulator wafer. A silicon-on-insulator (SOI) wafer includes a buried layer 431 between a substrate 432 and a light-transmitting medium 434. In a silicon-on-insulator wafer, the buried layer 431 is silica while the substrate 432 and the light-transmitting medium 434 are silicon. The substrate 432 of an optical platform such as an SOI wafer can serve as the base for the entire LIDAR chip. For instance, the optical components shown on the LIDAR chips of FIG. 1A through FIG. 1C can be positioned on or over the top and/or lateral sides of the substrate 432.


The dimensions of the ridge waveguide are labeled in FIG. 14. For instance, the ridge has a width labeled w and a height labeled h. A thickness of the slab regions is labeled T. For LIDAR applications, these dimensions can be more important than other dimensions because of the need to use higher levels of optical power than are used in other applications. The ridge width (labeled w) is greater than 1 μm and less than 4 μm, the ridge height (labeled h) is greater than 1 μm and less than 4 μm, and the slab region thickness is greater than 0.5 μm and less than 3 μm. These dimensions can apply to straight or substantially straight portions of the waveguide, curved portions of the waveguide, and tapered portions of the waveguide(s). Accordingly, these portions of the waveguide will be single mode. However, in some instances, these dimensions apply to straight or substantially straight portions of a waveguide. Additionally or alternately, curved portions of a waveguide can have a reduced slab thickness in order to reduce optical loss in the curved portions of the waveguide. For instance, a curved portion of a waveguide can have a ridge that extends away from a slab region with a thickness greater than or equal to 0.0 μm and less than 0.5 μm. While the above dimensions will generally provide the straight or substantially straight portions of a waveguide with a single-mode construction, they can result in tapered section(s) and/or curved section(s) that are multimode. Coupling between the multi-mode geometry and the single-mode geometry can be done using tapers that do not substantially excite the higher order modes. Accordingly, the waveguides can be constructed such that the signals carried in the waveguides are carried in a single mode even when carried in waveguide sections having multi-mode dimensions. The waveguide construction disclosed in the context of FIG. 14 is suitable for all or a portion of the waveguides on LIDAR chips constructed according to FIG. 1A through FIG. 1C.


As noted above, the electronics that operate the system include local electronics 32 and common electronics 280. The distinction between local electronics and common electronics is used to illustrate the portion of the electronics that is associated with one of the cores (the local electronics) and the portion of the electronics associated with multiple cores (the common electronics). While the local electronics 32 and common electronics 280 are shown as being in different locations, the local electronics 32 and common electronics 280 can be in a common location and/or common packaging. Further, the local electronics 32 and common electronics 280 can be integrated and need not refer to discrete or distinct electronic components. As a result, functionality described as being performed by the local electronics can be performed by the common electronics and/or functionality described as being performed by the common electronics can be performed by the local electronics. Certain components of the electronics are disclosed above, such as the preliminary LIDAR data generators and the LIDAR data generator; however, the electronics can include additional components that are not illustrated. For instance, a portion of the local electronics and/or the common electronics can be configured to control the frequency of the system output signals, steering of the system output signals, and operation of a frequency shifter 298.


Suitable local electronics 32 and/or common electronics 280 can include, but are not limited to, a controller that includes or consists of analog electrical circuits, Application Specific Integrated Circuits (ASICs), digital electrical circuits, processors, microprocessors, digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), computers, microcomputers, or combinations suitable for performing the operation, monitoring, and control functions described above. In some instances, the controller has access to a memory that includes instructions to be executed by the controller during performance of the operation, control, and monitoring functions. Although the electronics are illustrated as a single component in a single location, the electronics can include multiple different components that are independent of one another and/or placed in different locations. Additionally, as noted above, all or a portion of the disclosed electronics can be included on the chip, including electronics that are integrated with the chip.


Light sensors that are interfaced with waveguides on a LIDAR chip can be a component that is separate from the chip and then attached to the chip. For instance, the light sensor can be a photodiode or an avalanche photodiode. Examples of suitable light sensor components include, but are not limited to, InGaAs PIN photodiodes manufactured by Hamamatsu located in Hamamatsu City, Japan, and InGaAs APDs (Avalanche Photo Diodes) manufactured by Hamamatsu located in Hamamatsu City, Japan. These light sensors can be centrally located on the LIDAR chip. Alternately, all or a portion of the waveguides that terminate at a light sensor can terminate at a facet located at an edge of the chip, and the light sensor can be attached to the edge of the chip over the facet such that the light sensor receives light that passes through the facet. The use of light sensors that are a separate component from the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first auxiliary light sensor 218, the second auxiliary light sensor 220, the first light sensor 223, and the second light sensor 224.


As an alternative to a light sensor that is a separate component, all or a portion of the light sensors can be integrated with the chip. For instance, examples of light sensors that are interfaced with ridge waveguides on a chip constructed from a silicon-on-insulator wafer can be found in Optics Express Vol. 15, No. 21, 13965-13971 (2007); U.S. Pat. No. 8,093,080, issued on Jan. 10, 2012; U.S. Pat. No. 8,242,432, issued on Aug. 14, 2012; and U.S. Pat. No. 6,108,472, issued on Aug. 22, 2000; each of which is incorporated herein in its entirety. The use of light sensors that are integrated with the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first auxiliary light sensor 218, the second auxiliary light sensor 220, the first light sensor 223, and the second light sensor 224.


The light source 4 that is interfaced with the utility waveguide 12 can be a laser chip that is separate from the LIDAR chip and then attached to the LIDAR chip. For instance, the light source 4 can be a laser chip that is attached to the chip using a flip-chip arrangement. Use of flip-chip arrangements is suitable when the light source 4 is to be interfaced with a ridge waveguide on a chip constructed from a silicon-on-insulator wafer. Alternately, the utility waveguide 12 can include an optical grating (not shown) such as a Bragg grating that acts as a reflector for an external cavity laser. In these instances, the light source 4 can include a gain element that is separate from the LIDAR chip and then attached to the LIDAR chip in a flip-chip arrangement. Examples of suitable interfaces between flip-chip gain elements and ridge waveguides on chips constructed from silicon-on-insulator wafers can be found in U.S. Pat. No. 9,705,278, issued on Jul. 11, 2017, and in U.S. Pat. No. 5,991,484, issued on Nov. 23, 1999; each of which is incorporated herein in its entirety. When the light source 4 includes a gain element or laser chip, the local electronics 32 can tune the frequency of the outgoing LIDAR signal by changing the level of electrical current applied through the gain element or laser cavity.
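The current-based tuning described above can be illustrated with a minimal sketch. The names `TUNING_HZ_PER_MA` and `BIAS_MA` below are hypothetical placeholders, not values from this disclosure; a real gain element responds nonlinearly to drive current, and the drive waveform would typically be calibrated against the measured chirp rather than scaled linearly.

```python
# Hedged sketch: approximating a linear frequency chirp by ramping the
# laser drive current.  TUNING_HZ_PER_MA and BIAS_MA are hypothetical,
# device-specific values; real gain elements are nonlinear and would
# be calibrated (e.g. via a lookup table) rather than scaled linearly.

TUNING_HZ_PER_MA = 2.0e8   # assumed frequency shift per mA (Hz/mA)
BIAS_MA = 60.0             # assumed DC operating current (mA)

def current_ramp(chirp_rate_hz_s, duration_s, steps):
    """Drive currents (mA) approximating a chirp of chirp_rate_hz_s
    (Hz/s) over duration_s, sampled at `steps` evenly spaced points."""
    dt = duration_s / (steps - 1)
    return [BIAS_MA + (chirp_rate_hz_s * i * dt) / TUNING_HZ_PER_MA
            for i in range(steps)]
```

In practice the electronics would replay such a waveform once per chirp period, with the return sweep (or a second, oppositely sloped ramp) producing the down-chirp portion of the composite signal.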


The above LIDAR systems include multiple optical components such as a LIDAR chip, LIDAR adapters, light source, light sensors, waveguides, and amplifiers. In some instances, the LIDAR systems include one or more passive optical components in addition to the illustrated optical components or as an alternative to the illustrated optical components. The passive optical components can be solid-state components that exclude moving parts. Suitable passive optical components include, but are not limited to, lenses, mirrors, optical gratings, reflecting surfaces, splitters, demultiplexers, multiplexers, polarizers, polarization splitters, and polarization rotators. In some instances, the LIDAR systems include one or more active optical components in addition to the illustrated optical components or as an alternative to the illustrated optical components. Suitable active optical components include, but are not limited to, optical switches, phase tuners, attenuators, steerable mirrors, steerable lenses, tunable demultiplexers, and tunable multiplexers.


Other embodiments, combinations and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings.
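The sign-ambiguity resolution summarized above can be sketched in a few lines. Because a real Fourier transform yields only the magnitude of a beat frequency, an up-chirp/down-chirp pair produces two possible LIDAR data solutions that differ in the sign of the Doppler shift, and the correct one is chosen by comparison with a reference sample region. This is an illustrative sketch only, not the claimed implementation: the wavelength, chirp rate, the assumption that the range-induced shift exceeds the Doppler shift, and all names below are hypothetical.

```python
# Hedged sketch of resolving the radial-velocity sign ambiguity by
# comparison with a reference sample region.  All constants and names
# are hypothetical placeholders.

C = 299_792_458.0        # speed of light (m/s)
WAVELENGTH = 1.55e-6     # assumed operating wavelength (m)
CHIRP_RATE = 1.0e14      # assumed chirp slope (Hz/s)

def possible_solutions(up_beat_hz, down_beat_hz):
    """Candidate solutions from the beat-frequency magnitudes of an
    up-chirp and a down-chirp, assuming fr > |fd| so that
    fr = (up + down) / 2 and |fd| = |down - up| / 2."""
    fr = 0.5 * (up_beat_hz + down_beat_hz)          # range-induced shift
    fd_mag = 0.5 * abs(down_beat_hz - up_beat_hz)   # |Doppler shift|
    range_m = C * fr / (2.0 * CHIRP_RATE)
    v_mag = 0.5 * WAVELENGTH * fd_mag               # from fd = 2v / wavelength
    return [{"range_m": range_m, "radial_velocity": +v_mag},
            {"range_m": range_m, "radial_velocity": -v_mag}]

def pick_solution(solutions, reference_velocity):
    """Return the solution whose comparative component (here a signed
    radial velocity, m/s) is closest to the radial velocity calculated
    for a reference sample region."""
    return min(solutions,
               key=lambda s: abs(s["radial_velocity"] - reference_velocity))
```

For example, if the reference region reported a negative radial velocity, the negative-velocity candidate would be selected; a magnitude-only comparison, as recited in some of the claims below, fits the same `min` pattern with absolute values applied to both operands.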

Claims
  • 1. A LIDAR system, comprising: one or more cores that each outputs a system output signal that illuminates multiple sample regions in a field of view, a reference one of the cores including a light combiner configured to generate a composite signal beating at a beat frequency, and electronics configured to use a real Fourier transform to determine a beat frequency of the composite signal, the electronics configured to use the beat frequency of the composite signal to calculate the magnitude of a radial velocity indicator for a reference one of the sample regions illuminated by the system output signal output from the reference core, the radial velocity indicator indicating a radial velocity between the LIDAR system and an object in the reference sample region, the electronics configured to identify a direction of the radial velocity indicator, the identification of the direction including a comparison of the magnitude of the radial velocity indicator to data calculated for a subject sample region selected from among the sample regions, the reference sample region being different from the subject sample region.
  • 2. The system of claim 1, wherein the radial velocity indicator is a calculation of the radial velocity for the reference sample region.
  • 3. The system of claim 1, wherein the radial velocity indicator is the beat frequency of the composite signal that results from illumination of the reference sample region by one of the system output signals.
  • 4. The system of claim 1, wherein the data from the subject sample region includes multiple possible LIDAR data solutions for the subject sample region.
  • 5. The system of claim 4, wherein each of the possible LIDAR data solutions for the subject sample region includes all or a portion of the components selected from the group consisting of an fr value, an fd value, a radial velocity value, and a range value, the fd value being a Doppler frequency shift, the fr value being a frequency shift that results from a distance between the system and an object in the subject sample region, the radial velocity value indicating a radial velocity between the system and the object in the subject sample region, and the range value indicating a range between the system and the object in the subject sample region.
  • 6. The system of claim 4, wherein each of the possible LIDAR data solutions includes a comparative component selected from an fd value and a radial velocity value, the fd value being a Doppler frequency shift and the radial velocity value indicating a radial velocity between the system and an object in the subject sample region; and the comparison of the magnitude of the radial velocity indicator to data calculated for the subject sample region includes comparing the magnitude of the radial velocity indicator to the comparative component.
  • 7. The system of claim 6, wherein the electronics identify the possible LIDAR data solution with the comparative component that has a magnitude closest to the magnitude of the radial velocity indicator.
  • 8. The system of claim 7, wherein the electronics set the direction of the radial velocity indicator equal to a direction of the identified comparative component.
  • 9. The system of claim 1, wherein the reference sample region and the subject sample region at least partially overlap.
  • 10. The system of claim 1, wherein the reference sample region is the sample region that is closest to the subject sample region.
  • 11. The system of claim 10, wherein the one or more cores is multiple cores and the system output signal that illuminated the subject sample region is different from the system output signal that illuminated the sample region that is closest to the subject sample region.
  • 12. The system of claim 1, wherein the electronics estimate a range for the subject sample region by interpolating between ranges calculated for multiple different sample regions selected from among the sample regions illuminated by system output signals from the one or more cores.
  • 13. The system of claim 1, wherein the electronics use a real Fourier transform to calculate a value of the beat frequency.
  • 14. A LIDAR system, comprising: one or more cores that each outputs a system output signal that illuminates multiple sample regions in a field of view, a subject one of the cores including a light combiner configured to generate a composite signal beating at a beat frequency, and electronics configured to use a value of the beat frequency of the composite signal to calculate multiple possible LIDAR data solutions for a subject one of the sample regions illuminated by the system output signal output from the subject core, each of the possible LIDAR data solutions including a comparative component that indicates a value of a radial velocity between the LIDAR system and an object in the subject sample region, the electronics configured to identify a correct one of the LIDAR data solutions, the identification of the correct LIDAR data solution including a comparison of the LIDAR data solutions to data calculated for one or more reference sample regions selected from among the sample regions, the one or more reference sample regions being different from the subject sample region.
  • 15. The system of claim 14, wherein the comparative component is a calculation of a possible radial velocity for the subject sample region.
  • 16. The system of claim 14, wherein the comparative component is the beat frequency of the composite signal that results from illumination of the subject sample region by one of the system output signals.
  • 17. The system of claim 14, wherein the data from each one of the one or more reference sample regions includes a radial velocity indicator for the reference sample region, the radial velocity indicator indicating a radial velocity between the LIDAR system and an object in the reference sample region.
  • 18. The system of claim 17, wherein each of the possible LIDAR data solutions includes the comparative component selected from an fd value and a radial velocity value, the fd value being a Doppler frequency shift and the radial velocity value indicating a radial velocity between the system and an object in the subject sample region; and the comparison of the LIDAR data solutions to data calculated for the one or more reference sample regions includes comparing a magnitude of the radial velocity indicator to a magnitude of the comparative component.
  • 19. The system of claim 18, wherein the electronics identify the possible LIDAR data solution with the comparative component that has a magnitude closest to the magnitude of the radial velocity indicator as the correct LIDAR data solution.
  • 20. The system of claim 14, wherein the one or more reference sample regions is a single sample region.
  • 21. The system of claim 14, wherein the one or more reference sample regions and the subject sample region at least partially overlap.
  • 22. The system of claim 14, wherein the one or more reference sample regions include one of the sample regions that is closest to the subject sample region.
  • 23. The system of claim 22, wherein the one or more cores is multiple cores and the system output signal that illuminated the subject sample region is different from the system output signal that illuminated the sample region that is closest to the subject sample region.
  • 24. The system of claim 14, wherein the electronics estimate a range for at least one of the one or more reference sample regions by interpolating between ranges calculated for multiple different sample regions selected from among the sample regions illuminated by system output signals from the one or more cores.
  • 25. The system of claim 14, wherein the electronics use a real Fourier transform to calculate a value of the beat frequency.
  • 26. A method of operating a LIDAR system, comprising: illuminating multiple sample regions in a field of view with system output signals output from different cores; combining light signals so as to generate a composite signal beating at a beat frequency; using the value of the beat frequency of the composite signal to calculate a magnitude of a radial velocity indicator for a reference one of the sample regions illuminated by the system output signal output from a reference one of the cores, the radial velocity indicator indicating a radial velocity between the LIDAR system and an object in the reference sample region; and identifying a direction of the radial velocity indicator by comparing the magnitude of the radial velocity indicator to data calculated for a subject one of the sample regions, the reference sample region being different from the subject sample region.
  • 27. A method of operating a LIDAR system, comprising: illuminating multiple sample regions in a field of view with system output signals output from different cores; combining light signals so as to generate a composite signal beating at a beat frequency; using the value of the beat frequency to calculate multiple different possible LIDAR data solutions for a subject one of the sample regions, each of the possible LIDAR data solutions including a comparative component that indicates a value of a radial velocity between the LIDAR system and an object in the subject sample region; and identifying a correct one of the LIDAR data solutions by comparing the LIDAR data solutions to data calculated for one or more reference sample regions selected from among the sample regions, the one or more reference sample regions being different from the subject sample region.