This disclosure relates generally to LiDAR systems and methods of operation and, in particular, to a method for operating a LiDAR system across a wide-angle field-of-view.
LiDAR systems can be used in various applications, such as in vehicles, portable computing devices (e.g., smartphones, laptops, tablets) and augmented/virtual reality devices/systems, in order to image a field of view and locate objects within the field of view. A LiDAR system directs light outward over a range of angles and receives reflections of the light from objects. Many current LiDAR systems use a mechanical-scanning device, such as a gimbal, spinning disks or spinning polygons, in order to disperse outgoing light beams. However, such mechanical-scanning devices often come with resolution issues, maintenance issues, assembly issues and/or temperature-dependence issues.
For these and other reasons, there is a need to improve manufacturability, performance and use of LiDAR systems in aspects such as range, resolution, field-of-view, and physical and environmental robustness.
A detailed description of embodiments is provided below, by way of example only, with reference to drawings accompanying this description, in which:
It is to be expressly understood that the description and drawings are only for purposes of illustrating certain embodiments and are an aid for understanding. They are not intended to be and should not be limiting.
LiDAR Systems
Radiation with wavelength in the optical region of the electromagnetic spectrum, i.e., from the ultraviolet up to the infrared, can interact with matter in various states through mechanisms such as optical absorption and scattering. Early after the advent of the first lasers, it was recognized that these novel sources of coherent optical radiation could be used for sensing solid objects, particulate matter, aerosols, and even molecular species located at long distances. Remote sensing applications emerged owing to some distinctive features of laser sources. For example, several types of laser sources emit optical pulses carrying high energy that can propagate in the atmosphere in the form of a slowly-diverging optical beam. Similarly to the radio and microwave radiation sources used in common radar instruments, systems that employ light sources for remote sensing applications are generally known as LiDAR systems, or simply LiDARs, an acronym for Light Detection And Ranging.
LiDAR works much like radar, but instead of radio waves it emits optical light pulses (e.g., infrared light pulses) of short duration, typically in the ns (nanosecond, 1 ns = 10⁻⁹ s) range, either in a single-shot regime or in the form of a pulse train of limited duration, and measures how long they take to come back after hitting nearby objects. This is shown conceptually in
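Written explicitly, the measured round-trip time converts to a distance via the standard pulsed time-of-flight relation (stated here in its usual textbook form; the symbols d for the target distance and Δt for the measured round-trip time are chosen for illustration):

$$d = \frac{c\,\Delta t}{2n}$$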
where c is the speed of light in vacuum, roughly 3×10⁸ m/s, and n denotes the refractive index of the medium in which the optical pulse propagates. Methods for optical ranging are not limited to the pulsed TOF technique. Methods such as optical triangulation, interferometric phase-shift range finding, and frequency-modulated continuous-wave (FMCW) range finding, just to name a few, exist as well. The review paper of M.-C. Amann et al. (“Laser ranging: a critical review of usual techniques for distance measurement”, Optical Engineering vol. 40, pp. 10-19, January 2001) discusses these techniques in greater detail.
LiDAR systems may be capable of capturing millions of such precise distance measurement points each second, from which a 3D matrix of the environment can be produced. Information on objects' position, shape, and behavior can be obtained from this comprehensive mapping of the environment, an example of which is shown in
General Overview of a LiDAR System
The various embodiments of the present disclosure described below are intended for implementation in a LiDAR system with non-uniform magnification optics. Some of the basic elements of a LiDAR system 10 may be better appreciated by referring to the schematic block diagram depicted in
Optical Emitter Module
Upon reception of a trigger signal from the control and processing unit 20, the driver electronics 24 may generate an electrical current pulse whose duration lies in the ns range. The current pulse is then routed to the light source 26 for emission of an optical pulse. The light source 26 is generally a laser, but other types of optical sources, such as light-emitting diodes (LEDs), can be envisioned without departing from the scope of the present disclosure. The use of semiconductor laser diode assemblies now prevails in LiDAR systems. The laser diode assembly may comprise a single-emitter laser diode, a multiple-emitter laser diode, or even a two-dimensional stacked array of multiple-emitter laser diodes. The specific type of light source integrated in a LiDAR system 10 depends, inter alia, on factors such as the peak optical output power required for successful ranging at the desired maximum range, the emission wavelength, and the device cost. Light sources such as fiber lasers, microchip lasers and even solid-state lasers find their way into LiDAR applications, particularly when no laser diode source exists at the desired emission wavelength. The optical pulses pass through the emitter optics 28 before leaving the optical emitter module 12. The emitter optics 28 shapes the optical pulses in the form of a beam having the desired propagation characteristics. The primary optical beam characteristics may be the beam divergence, the transverse size of the beam irradiance profile at the exit aperture of the emitter module 12 (e.g., for eye safety concerns), and the spatial beam quality. The emitter optics 28 and receiver optics 18 are generally boresighted so that the optical beam path and the field of view of the receiver module 14 overlap over a predetermined range interval.
Optical Receiver Module
The return optical signals collected by the receiver optics 18 may pass through a narrowband optical filter 30 for removal of the parasitic background light before impinging on the sensitive surface of a photodetector 32. The photodetector 32 is generally an avalanche or PIN photodiode, or a 1D or 2D array of such photodiodes, with material composition suited to the wavelength of the optical pulses. The current from the photodetector 32 may then be fed to a transimpedance (current-to-voltage) amplifier 34. The signal may or may not be pre-amplified, as an APD typically has an internal current multiplication gain which may be sufficient on its own.
The amplifier circuit may comprise a matched filter to limit the electrical bandwidth of the optical receiver module 14. The control and processing unit 20 may control the amplifier gain to ensure that the signal amplitude fits within the input voltage dynamic range of the A/D converter 36. It is known in the art that other amplifier configurations could be used as well, such as a logarithmic amplifier or a set of amplifiers mounted in parallel, each amplifier having a fixed gain. The A/D converter 36 digitizes the input voltage signals at a sampling rate of typically several tens of MS/s (mega-samples per second) to a few thousand MS/s. The time period between two consecutive digital sampling operations defines the extent of the so-called range bins of the system 10, when expressed in units of distance.
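As a concrete illustration of the range-bin size (the 100 MS/s sampling rate below is an assumed example, not a value specified in this disclosure):

$$\Delta r = \frac{c}{2\,n f_s} \approx \frac{3\times10^{8}\ \text{m/s}}{2 \times 1 \times 100\times10^{6}\ \text{s}^{-1}} = 1.5\ \text{m},$$

so each additional sampling interval corresponds to roughly 1.5 m of additional range in air (n ≈ 1).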
In many cases the output of the LiDAR system may be used by autonomous computer-based processes, e.g., to make navigation or mobility decisions in autonomous vehicle applications. In some cases, a user may operate the system 10 and receive data from it through the user interface hardware 38. For instance, the measured range to the targeted object 16 and/or a more detailed 3D map of the field of view may be displayed in digital form on a liquid-crystal or plasma visual display 40. In augmented reality applications, the detailed 3D map data may be combined with high-definition image data, e.g., from a high-definition digital camera (not shown), in order to allow virtual objects/elements to be placed in a virtual environment displayed on the display 40.
Vehicles of all types now use LiDAR to determine which obstacles are nearby and how far away they are. The 3D maps provided by LiDAR components not only detect and position objects but also identify what they are. Insights uncovered by LiDAR also help a vehicle's computer system to predict how objects will behave, and adjust the vehicle's driving accordingly.
Semi- and fully-autonomous vehicles may use a combination of sensor technologies. This sensor suite could include radar, which provides constant distance and velocity measurements as well as superior all-weather performance, but lacks resolution and struggles with the mapping of finer details at longer ranges. Camera vision, also commonly used in automotive and mobility applications, provides high-resolution information in 2D. However, there is a strong dependency on powerful artificial intelligence and corresponding software to translate captured data into 3D interpretations. Environmental and lighting conditions may significantly impact camera vision technology.
LiDAR, in contrast, offers precise 3D measurement data over short to long ranges, even in challenging weather and lighting conditions. This technology can be combined with other sensory data to provide a more reliable representation of both static and moving objects in the vehicle's environment.
Hence, LiDAR technology has become a highly accessible solution to enable obstacle detection, avoidance, and safe navigation through various environments in a variety of vehicles. Today, LiDARs are used in many critical automotive and mobility applications, including advanced driver assistance systems and autonomous driving.
In many autonomous driving implementations, the main navigation system interfaces with one or a few LiDAR sensors. It is desirable that the LiDAR sensor(s) offer high range and high resolution in order to support functions such as localization, mapping and collision avoidance. In terms of localization, the first step of environment perception for autonomous vehicles is often to estimate the trajectories of the vehicle. Since Global Navigation Satellite Systems (GNSS) are generally inaccurate and not available in all situations, the Simultaneous Localization and Mapping (SLAM) technique is used to solve that problem. In terms of collision avoidance, a long detection range at cruising speed potentially provides sufficient time to react smoothly when an obstacle is detected. For example, for standing users inside a shuttle, a safe and comfortable deceleration of 1.5 m/s² may be desirable. As an example, at 40 km/h and a 1.5 m/s² deceleration, a distance of 47 m is needed to stop the shuttle, assuming a 0.5 s reaction time.
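The 47 m figure is consistent with the usual reaction-plus-braking decomposition (40 km/h ≈ 11.1 m/s):

$$d_{\text{stop}} = v\,t_r + \frac{v^{2}}{2a} \approx (11.1\ \text{m/s})(0.5\ \text{s}) + \frac{(11.1\ \text{m/s})^{2}}{2\times1.5\ \text{m/s}^{2}} \approx 5.6\ \text{m} + 41.2\ \text{m} \approx 47\ \text{m}.$$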
Many autonomous shuttles today rely on a long-range mechanical-scanning LiDAR sensor that is placed on top of the shuttle.
Therefore, it would be desirable to provide LiDAR systems with solid-state scanning devices that avoid or at least mitigate one or more of these issues.
In terms of range and resolution, it is generally desirable to provide detectability at greater range and sufficient resolution to be able to accurately categorize detected objects.
As another aspect of collision avoidance, a LiDAR system with a side-looking field of view (FoV) can potentially be useful for turning assistance, particularly on larger vehicles, such as trucks or buses. For example,
Referring to
For example,
In order to cover the same vertical FoV, i.e., substantially 90° from the horizon to the ground 90, while having relatively higher vertical resolution in certain parts of the vertical FoV and relatively lower vertical resolution in other parts of the vertical FoV, the inventors of the present disclosure have conceived of utilizing a non-uniform vertical angular distribution of scanning beams, thereby providing non-uniform vertical resolution.
For example,
It should be noted that this is merely one example of a non-linear function that may be used to generate a non-uniform angular distribution. Moreover, a person of ordinary skill in the art will recognize that the choice of the distribution and the number of points over a given angular range may vary depending on performance requirements, such as the minimum required vertical resolution, the minimum number of points on a target of a given size at a given distance, etc.
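Purely for illustration, the short sketch below generates one possible non-uniform vertical distribution of scan angles that is densest near the horizon; the tangent-based mapping, the beam count and the parameter values are assumptions for demonstration and are not the specific function referenced above.

```python
import numpy as np

def nonuniform_vertical_angles(n_beams=32, fov_deg=90.0, skew=3.0):
    """Map uniformly spaced beam indices through a non-linear function so the
    beams are densest near the horizon (0 deg) and sparsest near the ground
    (fov_deg). 'skew' controls how strongly the distribution is warped."""
    u = np.linspace(0.0, 1.0, n_beams)              # uniform parameter in [0, 1]
    warped = np.tan(u * np.arctan(skew)) / skew     # non-linear remapping of [0, 1]
    return warped * fov_deg                         # vertical angles in degrees

angles = nonuniform_vertical_angles()
spacing = np.diff(angles)
print(f"finest spacing (near horizon): {spacing[0]:.2f} deg")
print(f"coarsest spacing (near ground): {spacing[-1]:.2f} deg")
```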
A segmented FoV with uniform horizontal resolution and non-uniform vertical resolution can potentially be realized in many ways. For example, non-uniform magnification optics may be used either alone or in combination with a beam steering device in order to achieve a FoV with such properties.
In some embodiments, non-uniform magnification optics, such as the non-uniform magnification optics 1302 shown in
For example, returning to the segmented FoV 1100 shown in
LCPGs with nearly ideal diffraction efficiencies (>99.5%) have been experimentally demonstrated over a wide range of grating periods, wavelengths (visible to near-IR), and areas. Each polarization grating stage can double the maximum steered angle in one dimension without major efficiency reductions, so very large steered angles are possible (at least to a ±40° field of regard). The structure at the heart of these devices is a polarization grating (PG), implemented using nematic liquid crystals. The nematic director is a continuous, in-plane, bend-splay pattern established using a UV polarization hologram exposing photo-alignment materials. When voltage is applied, the director orients out of plane, effectively erasing the grating. A single LCPG stage can be considered the key component, with three possible steering directions (±θ and 0°), but additional steering angles are possible by stacking LCPG stages.
In another example of implementation, the beam steering device includes one or more LCPG stages, where each stage includes an LC switch and a passive grating. This configuration allows two possible steering directions.
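A simplified combinatorial sketch of how stacked stages multiply the number of discrete steering directions follows; the per-stage deflection values are illustrative assumptions, and the simple angle addition ignores the non-paraxial corrections and efficiency losses of a real LCPG stack.

```python
from itertools import product

def reachable_angles(stage_deflections_deg, ternary=True):
    """Enumerate discrete steering angles reachable by a stack of LCPG stages.
    With ternary=True each stage contributes -d, 0 or +d degrees (three
    directions per stage); with ternary=False each stage contributes -d or +d
    (two directions per stage, as with an LC switch plus passive grating)."""
    signs = (-1, 0, +1) if ternary else (-1, +1)
    per_stage = [[s * d for s in signs] for d in stage_deflections_deg]
    # Small-angle approximation: the total deflection is the sum of the
    # per-stage deflections selected by a given switching state.
    return sorted({round(sum(combo), 3) for combo in product(*per_stage)})

stages_deg = [20.0, 10.0, 5.0]                       # assumed coarse/medium/fine stages
print(reachable_angles(stages_deg))                  # unique angles, 3 ternary stages
print(reachable_angles(stages_deg, ternary=False))   # unique angles, 3 binary stages
```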
It should be noted that an LCPG is merely one example of a non-mechanical beam steering device that may be used in some embodiments of the present disclosure. Other non-limiting examples of beam steering devices, such as optical phased arrays (OPAs) or microelectromechanical systems (MEMS), that may be utilized in some embodiments of the present disclosure are described, for example, in Paul F. McManamon and Abtin Ataei, "Progress and opportunities in optical beam steering," Proc. SPIE 10926, Quantum Sensing and Nano Electronics and Photonics XVI, 1092610 (29 May 2019), which is incorporated herein by reference in its entirety.
However, the emission and reception efficiencies of the LCPG 1400 are not constant with steering angle.
Since emission and reception efficiencies drop off at higher horizontal steering angles, in the following example only the center 8×4 tiles of the LCPG 1400 are utilized for horizontal and vertical steering. In other implementations, more or fewer horizontal tiles may be used for horizontal steering to provide a wider or narrower horizontal steering range. It is also noted that, since not all tiles of the LCPG 1400 are utilized in the current embodiment, in other embodiments an LCPG with fewer horizontal steering stacks may be utilized, which could potentially reduce cost and provide a gain in efficiency, and therefore in range.
The LiDAR system 1800 has the wide-angle magnification optics 1802, a protective cover 1804 that may not be present in some embodiments, a beam steering device 1806, which in this embodiment is implemented by the 8×4 tiles of the LCPG 1400 of
In the LiDAR system 1800 shown in
In the example shown in
In the example LiDAR system 1800 shown in
For example,
As another example,
In the examples discussed above with reference to the LiDAR system 1800 of
For example,
In some embodiments, two beam steering devices, such as an LCPG beam steering device and a MEMS beam steering device, may be used in conjunction with one another to provide coarse and fine scanning functions. For example, a MEMS beam steering device may be used for fine scanning within a coarse scanning segment of an LCPG beam steering device.
As one example,
In some embodiments, the magnification optics 2802 comprises an objective lens 2803, wherein the sensor unit 2814 comprises a plurality of sensor elements placed in an image plane of the objective lens 2803. For example, the sensor unit 2814 may include an array of APDs as described earlier with reference to
In some embodiments, the magnification optics 2802 has an image point distribution function that is non-linear relative to a vertical field angle of object points in the FoV, and the depth map may have at least one substantially expanded zone and at least one substantially compressed zone in the vertical direction. In such embodiments, the objective lens and the plurality of sensor elements may be configured such that, in each substantially expanded zone, the number of sensor elements per degree of vertical field angle is greater than the average number of sensor elements per degree of vertical field angle over the total FoV in the vertical direction and, in each substantially compressed zone, the number of sensor elements per degree of vertical field angle is less than that average.
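A small numerical sketch of that condition follows, showing an expanded zone ending up with more sensor elements per degree than the FoV-wide average and a compressed zone with fewer; the zone spans and element counts are purely illustrative assumptions, not values taken from this disclosure.

```python
# Assumed zone layout: vertical span (degrees of field angle) of each zone and
# the number of sensor-element rows that the objective lens images onto it.
zones = {
    "substantially expanded zone (near horizon)": {"span_deg": 30.0, "elements": 160},
    "substantially compressed zone (near ground)": {"span_deg": 60.0, "elements": 96},
}

total_span = sum(z["span_deg"] for z in zones.values())        # 90 deg vertical FoV
total_elements = sum(z["elements"] for z in zones.values())
average_density = total_elements / total_span                  # elements per degree

for name, z in zones.items():
    density = z["elements"] / z["span_deg"]
    comparison = "greater than" if density > average_density else "less than"
    print(f"{name}: {density:.2f} elements/deg is {comparison} "
          f"the FoV average of {average_density:.2f} elements/deg")
```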
In the LiDAR system 2800 shown in
In some embodiments, the LiDAR system 2800 may include inner magnification optics between the emission module 2820 and the magnification optics 2802, such that the optical signal 2842 passes through two magnification optics before illuminating at least part of the FoV.
In some embodiments, the depth map is an original depth map, wherein the sensor unit or the computing device 2830 is configured for correcting the original depth map for the non-linear distribution function to produce a new depth map in which the substantially compressed zone in the original depth map is expanded in the new depth map and in which the substantially expanded zone in the original depth map is compressed in the new depth map.
In some embodiments, the new depth map comprises pixels and wherein at least some of the pixels in a portion of the new depth map corresponding to an expanded version of a substantially compressed zone in the original depth map are interpolated pixels.
In some embodiments, the sensor unit is configured for processing the depth map to determine a location of the object in the FoV and a distance to the object in the FoV.
In some embodiments, the LiDAR system 2800 further includes a beam steering unit 2806 for orienting the optical signal towards the FoV in a selected one of a plurality of directions. For example, the beam steering unit 2806 may be part of the emission unit 2820 as shown in
In some embodiments, each of the steering directions is associated with a respective sub-area of the FoV.
In some embodiments, the beam steering unit 2806 is a solid-state beam steering unit. For example, the beam steering unit 2806 may comprise an LCPG.
In some embodiments, the beam steering unit comprises a multi-stage system. For example, one stage of the multi-stage system may comprise an LCPG.
In some embodiments, the magnification optics is configured for magnifying a range of angles illuminated by the emitted optical signal.
In some embodiments, the emission unit 2820 is configured for controllably emitting a selected one of a plurality of optical beams as the emitted optical signal 2840.
In some embodiments, each of the plurality of optical beams is oriented in a predetermined direction.
In some embodiments, the FoV comprises a vertical component and a horizontal component, wherein the FoV spans at least 60 degrees in the vertical direction between horizon and ground.
In some embodiments, the FoV spans at least 150 degrees in the horizontal direction.
In some embodiments, the image point distribution function is substantially linear relative to a horizontal field angle of object points in the FoV. In other embodiments, the image point distribution function of the magnification optics 2802 is non-linear relative to a horizontal field angle of object points in the FoV. For example, the image point distribution function may be symmetric relative to a horizontal field angle of object points in the FoV.
At step 2900 of the method, a first image of a scene is captured via a first sensor.
At step 2902, a second image of the scene is captured via a second sensor different from the first sensor. The first and second images overlap to include at least one common FoV. In some embodiments, the first image comprises pixels that are distributed in accordance with a non-linear image point distribution function relative to a field angle of object points of the FOV. In some embodiments, one of the first and second images is a depth map.
At step 2904, the first image is corrected based on said non-linear distribution function to produce a third image.
At step 2906, the second and third images are combined with each other to produce a composite image including information from the second image and information from the third image.
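A minimal sketch of steps 2900–2906 is given below, assuming the non-linear distribution is available as a known mapping from depth-map row index to vertical field angle; the array sizes, the tangent-based row-to-angle mapping, and the use of linear interpolation are illustrative assumptions, and registration between the two sensors is assumed to be already established.

```python
import numpy as np

def correct_depth_rows(depth, row_to_angle_deg, out_rows):
    """Step 2904: resample a depth map whose rows follow a non-linear image
    point distribution onto rows that are uniform in vertical field angle.
    Rows added where the original map was compressed are interpolated pixels."""
    uniform_angles = np.linspace(row_to_angle_deg[0], row_to_angle_deg[-1], out_rows)
    corrected = np.empty((out_rows, depth.shape[1]))
    for c in range(depth.shape[1]):
        corrected[:, c] = np.interp(uniform_angles, row_to_angle_deg, depth[:, c])
    return corrected

def combine_rgbd(camera_rgb, corrected_depth):
    """Step 2906: stack the 2D camera image and the corrected depth map into a
    composite RGBD image (both inputs assumed co-registered and equally sized)."""
    return np.dstack([camera_rgb, corrected_depth])

# Step 2900: a 32-row depth map whose rows follow a non-linear distribution.
rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 50.0, size=(32, 64))
row_angles = np.tan(np.linspace(0.0, 1.0, 32) * np.arctan(3.0)) / 3.0 * 90.0
# Step 2902: a 64x64 RGB camera image of the same scene.
rgb = rng.uniform(0.0, 1.0, size=(64, 64, 3))

rgbd = combine_rgbd(rgb, correct_depth_rows(depth, row_angles, out_rows=64))
print(rgbd.shape)  # (64, 64, 4)
```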
In some embodiments, the image point distribution function is non-linear in the vertical direction between horizon and ground.
In some embodiments, the image point distribution function has a maximum divergence of at least ±10% compared to a linear distribution function, in the vertical direction.
In some embodiments, the image point distribution function is substantially linear in the horizontal direction.
In some embodiments, the third image has more pixels than the first image.
In some embodiments, some of the pixels of the third image correspond directly to pixels of the first image and wherein other ones of the pixels of the third image correspond to interpolated versions of some of the pixels of the first image.
In some embodiments, the method may further include interpolating said some of the pixels of the first image to produce said other ones of the pixels in the third image.
In some embodiments, the other one of the first and second images is a 2D camera image.
In some embodiments, the first sensor comprises an array of photodiodes and wherein the second sensor comprises a digital camera.
In some embodiments, the second image comprises pixels that are distributed in accordance with a substantially linear distribution function relative to a field angle of object points of the FOV.
In some embodiments, the FOV comprises a vertical FOV and a horizontal FOV.
In some embodiments, the vertical FOV spans at least 60 degrees and the horizontal FOV spans at least 150 degrees.
In some embodiments, the image point distribution function is non-linear relative to a field angle of object points in at least the vertical FOV.
In some embodiments, the image point distribution function is non-linear relative to a field angle of object points in both the horizontal FOV and the vertical FOV.
In some embodiments, the composite image is an RGBD image.
In some embodiments, the first image comprises at least one substantially compressed zone and at least one substantially expanded zone, and wherein correcting the first image comprises at least one of (i) compressing the substantially expanded zone and (ii) expanding the substantially compressed zone, to produce the third image.
In some embodiments, capturing the second image of the scene at step 2902 is carried out by sequentially capturing different subportions of the FOV as illuminated by an optical signal emitted in a controllable direction.
The LiDAR system 3000 has wide-angle magnification optics 3002 and a light sensor 3004. What is being shown in
The wide-angle magnification optics achieves a wide-angle field of view. By “wide-angle” is meant an optical aperture of at least 150 degrees along some axis, for example a horizontal axis. Preferably, the angular aperture is close to 180 degrees. This is advantageous in automotive applications where the LiDAR system enables autonomous driving or driving facilitation functions, and 180 degrees of angular aperture would allow a wide enough view of the road. Note that in a number of applications of the LiDAR system, the angular aperture may be the same in all directions, such as in the horizontal direction and the vertical direction. In other applications, the angular aperture may vary; for instance, it may be larger in the horizontal direction and narrower in the vertical direction. The latter variant is useful in automotive applications where a wide horizontal view of the road is important, but a wide vertical view of the road is not as essential.
The light returns that reach the lens 3002 are projected on the light sensor 3004. The configuration of the lens 3002 is selected to adapt the light projection on the light sensor 3004 in order to provide advantages. In particular, the lens 3002 is configured to project a representation of the scene conveyed by the light return by compressing a portion of that representation while expanding other portions. For example, a portion of the representation that may be expanded is one which is more likely to contain objects of interest, while a portion of the representation that may be compressed is one which is less likely to contain objects of interest. In automotive applications, where the LiDAR system 3000 has a view of the road, the central part of the field of view of the LiDAR system 3000 is where objects of interest are likely to reside, such as automobiles, pedestrians or obstacles. The peripheral part of the field of view is less likely to contain objects of interest. As a car drives on a road, most of the driving decisions are influenced by what happens ahead, not on the side; hence it is important for the LiDAR system 3000 to have the best visibility in that area.
However, there may be other applications where it is more important to have a good peripheral vision than a central one. In such applications, the lens 3002 would be configured differently to manipulate the light return such as to expand the peripheral area of the light return and compress the central area of the light return.
The selective expansion and compression of the light return is accomplished by selecting the lens geometry to achieve the desired effect. This is illustrated with greater detail at
The lens 3002 has a central area 3006 and a peripheral area 3008. The central area 3006 receives a light return from an area S1 of the scene. The boundaries between S1 and S2 are conceptually shown as dotted lines 3010 and 3012. In three dimensions the lines 3010 and 3012 form a frustum of a cone.
The central area 3006 of the lens provides a higher magnification than the peripheral area 3008. The practical effect of this arrangement is to direct the light of the return signal in the cone defined by lines 3010 and 3012 over a larger surface area of the sensor 3004 than if the magnification were the same across the lens 3002.
In LiDAR architectures using a flash optical illumination, where the light return is received at once by the lens 3002, the approximate object location in the scene is determined on the basis of the position of the one or more light sensing elements on the light sensor 3004 that respond to the light return. When the light sensing elements are APDs, the position of the APDs that output a signal indicating the presence of an object provides the approximate location of the object in the scene. Accordingly, by spreading the light information over a larger portion (the circle D2) of the light sensor 3004, a better resolution is obtained as more APDs are involved in the object sensing. Thus, it is possible to tell with a higher level of precision the location in the scene where the detected objects reside.
Conversely, light received over the peripheral area 3008 is focused on a smaller portion of the light sensor 3004, which means that fewer APDs are available for sensing. This implies that the detection has lower resolution; however, the peripheral area is less likely to contain objects of interest, hence the trade-off of increasing the resolution in the center at the expense of reducing the resolution at the periphery provides practical advantages overall.
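The effect can be sketched numerically: with a projection whose slope is steeper near the optical axis, one degree of field angle near the center maps onto more sensor surface, and hence more APDs, than one degree at the periphery. The projection function, focal length and APD pitch below are illustrative assumptions only.

```python
import numpy as np

def image_radius_mm(theta_deg, f_mm=4.0, k=2.0):
    """Illustrative non-uniform projection r(theta): higher magnification
    (steeper slope) near the center of the field, flatter at the periphery."""
    theta = np.radians(theta_deg)
    theta_max = np.radians(90.0)
    return f_mm * theta_max * np.tanh(k * theta / theta_max) / np.tanh(k)

apd_pitch_mm = 0.05  # assumed APD pitch on the light sensor

for lo, hi in [(0, 10), (40, 50), (80, 90)]:   # 10-degree bands of field angle
    band_mm = image_radius_mm(hi) - image_radius_mm(lo)
    print(f"{lo:2d}-{hi:2d} deg: {band_mm:.2f} mm on the sensor, "
          f"~{band_mm / apd_pitch_mm:.0f} APDs across the band")
```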
In a different LiDAR architecture, which uses a steerable illumination beam, the variable magnification lens 3002 also provides advantages. In the steerable beam architecture, the light emission can be steered to scan the scene and thus direct the light toward a particular area of the scene. A steerable beam architecture uses a beam steering engine which can be based on solid-state components, mechanical components or a combination of both. Examples of solid-state components include opto-electric plates that can change the angle of propagation of light by applying a voltage. Examples of mechanical components include MEMS mirrors that can change the orientation of a light beam.
The concept of sensor fusion between a LiDAR and an image is to attribute distance measurements to individual pixels or pixel groups in the image. Hence, the 3D map can have a point cloud structure, where individual points are distributed in a space and each point has one or more other attributes such as color, transparency, etc. Since a LiDAR system typically operates at a lower resolution than an imaging system, it is also known to perform an upsampling operation when the LiDAR data is merged with the image data, where distance information is derived and attributed to pixels or pixel groups for which the LiDAR system does not have a direct measurement. A technique which has been proposed in the past is to rely on visual similarity in order to derive distance similarity. In other words, areas of the image which are visually similar to an area for which a distance measurement has been obtained from a LiDAR system are assumed to be at the same or similar distance from a reference point. In this fashion, a three-dimensional representation from a lower resolution LiDAR system can be used with a high-density image to obtain a 3D map having a resolution higher than the resolution provided by the LiDAR system.
A practical approach in generating a 3D map is to determine which data points in the three-dimensional LiDAR representation correspond to which pixels or groups of pixels in the high-density image. In other words, a registration should be achieved such that a data point in the LiDAR representation and a corresponding pixel or group of pixels represent the same object in the scene. Such a registration operation is challenging in instances where the three-dimensional LiDAR representation of the environment is non-uniform, for instance as a result of using a variable magnification wide-angle lens, where some portions of the representation are at a higher resolution than others or otherwise distorted such that the distance from one data point to another in the LiDAR representation is not necessarily the same as the distance from one pixel to another in the image.
At step 3300 of the process the computer device compensates for the distortion in the three-dimensional representation of the LiDAR data. Since the distortion model is known, namely the magnification pattern of the lens, the parts of the representation that have been distorted in relation to other parts can be undistorted fully or in part. Examples of distortion correction include:
Alternatively, the image data can be distorted in a way which is consistent with the distortion of the LiDAR three-dimensional data, allowing both data sets to be registered. One way to achieve the distortion is to use a magnification lens 3212 for the image sensor 3208 which has the same magnification pattern as the lens 3002. In this fashion, both data sets can be registered to establish correspondence between the data points and eventually merged. Another option is to perform the distortion through data processing by the computer device 3206.
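A brief sketch of that second option, warping the camera image in software so that its rows follow the same non-uniform angular distribution as the LiDAR data, is given below; the distortion model (a row-wise remapping), the image sizes and the interpolation choice are illustrative assumptions.

```python
import numpy as np

def warp_rows_to_lidar_distortion(image, lidar_row_to_angle_deg):
    """Resample a camera image that is uniform in vertical field angle so its
    rows follow the LiDAR data's non-uniform angular distribution, allowing
    row-to-row registration of the two data sets before merging."""
    rows = image.shape[0]
    uniform_angles = np.linspace(lidar_row_to_angle_deg[0],
                                 lidar_row_to_angle_deg[-1], rows)
    flat = image.reshape(rows, -1)                     # rows x (cols * channels)
    out = np.empty((len(lidar_row_to_angle_deg), flat.shape[1]))
    for j in range(flat.shape[1]):
        # Sample each column/channel at the LiDAR's row angles.
        out[:, j] = np.interp(lidar_row_to_angle_deg, uniform_angles, flat[:, j])
    return out.reshape((len(lidar_row_to_angle_deg),) + image.shape[1:])

# Illustrative use: a 64-row camera image warped onto a 32-row LiDAR row grid.
rng = np.random.default_rng(1)
camera = rng.uniform(0.0, 1.0, size=(64, 48, 3))
lidar_row_angles = np.tan(np.linspace(0.0, 1.0, 32) * np.arctan(3.0)) / 3.0 * 90.0
print(warp_rows_to_lidar_distortion(camera, lidar_row_angles).shape)  # (32, 48, 3)
```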
At step 3302, the compensated LiDAR data is merged with the image data. For example, the process described in U.S. Pat. No. 10,445,928 in the name of Vaya Vision, Ltd., the entire contents of which are incorporated herein by reference, can be used for that purpose.
In the dual sensor system architecture of
Furthermore, an architecture like that shown in
Certain additional elements that may be needed for operation of some embodiments have not been described or illustrated as they are assumed to be within the purview of those of ordinary skill in the art. Moreover, certain embodiments may be free of, may lack and/or may function without any element that is not specifically disclosed herein.
Any feature of any embodiment discussed herein may be combined with any feature of any other embodiment discussed herein in some examples of implementation.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the inventive concept. This description should be read to include one or more and the singular also includes the plural unless it is obvious that it is meant otherwise.
Further, use of the term “plurality” is meant to convey “more than one” unless expressly stated to the contrary.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Circuitry, as used herein, may be analog and/or digital components, or one or more suitably programmed microprocessors and associated hardware and software, or hardwired logic. Also, "components" may perform one or more functions. The term "component" may include hardware, such as a processor, an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA), or a combination of hardware and software. Software includes one or more computer executable instructions that when executed by one or more components cause the component to perform a specified function. It should be understood that the algorithms described herein are stored on one or more non-transitory memories. Exemplary non-transitory memory includes random access memory, read only memory, flash memory or the like. Such non-transitory memory may be electrically based or optically based.
As used herein, the term “substantially” means that the subsequently described parameter, event, or circumstance completely occurs or that the subsequently described parameter, event, or circumstance occurs to a great extent or degree. For example, the term “substantially” means that the subsequently described parameter, event, or circumstance occurs at least 90% of the time, or at least 91%, or at least 92%, or at least 93%, or at least 94%, or at least 95%, or at least 96%, or at least 97%, or at least 98%, or at least 99%, of the time, or means that the dimension or measurement is within at least 90%, or at least 91%, or at least 92%, or at least 93%, or at least 94%, or at least 95%, or at least 96%, or at least 97%, or at least 98%, or at least 99%, of the referenced dimension or measurement.
In case of any discrepancy, inconsistency, or other difference between terms used herein and terms used in any document incorporated by reference herein, meanings of the terms used herein are to prevail and be used.
Although various embodiments and examples have been presented, this was for purposes of describing, but should not be limiting. Various modifications and enhancements will become apparent to those of ordinary skill and are within a scope of this disclosure.
This application is a continuation application of and claims the benefit of priority under 35 U.S.C. § 120 to U.S. application Ser. No. 17/382,144, filed on Jul. 21, 2021, which claims the benefit of U.S. Provisional Application No. 63/054,634, filed on Jul. 21, 2020, the contents of which are hereby incorporated by reference.