3D sensors play an important role in the deployment of many autonomous systems, including field robots and self-driving cars. However, there are many tasks for which it may not be necessary to use a full-blown 3D scanner. As an example, a self-guided vehicle on a road or a robot in the field does not need a full-blown 3D depth sensor to detect potential collisions or to monitor its blind spot. Instead, what is necessary is for the vehicle to be able to detect any object that comes within a pre-defined perimeter of the vehicle to allow for collision avoidance. This is a much easier task than full depth scanning and object identification.
Consider a robot that is maneuvering over dynamic terrain. While full 3D perception is important for long-term path planning, it is less useful for time-critical tasks like obstacle detection and avoidance. Similarly, in autonomous driving, collision avoidance is a task that must be performed continuously but does not require full 3D perception of the scene. For such tasks, a proximity sensor with a much reduced energy and computational footprint may be sufficient.
This summary is presented to provide a basic understanding of some novel embodiments described herein. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope of the invention. Some concepts are presented in a simplified form as a prelude to the more detailed description that is presented later.
Various embodiments are generally directed to a device that monitors the presence of objects passing through or impinging on a virtual shell near the device, which is referred to herein as a “light curtain”. Light curtains offer a lightweight, resource-efficient and programmable approach for proximity awareness for obstacle avoidance and navigation. They also have additional benefits in terms of improving visibility in fog as well as flexibility in handling light fall-off.
In one embodiment, the light curtains are created by rapidly rotating a line sensor and a line laser in synchrony. The embodiment is capable of generating light curtains of various shapes with a range of 20-30 m in sunlight (40 m under cloudy skies and 50 m indoors) and adapts dynamically to the demands of the task.
In one embodiment, light curtains may be implemented by triangulating an illumination plane, created by fanning out a laser, with a sensing plane of a line sensor. In the absence of ambient illumination, the sensor senses light only from the intersection between these two planes, which, in the physical world, is a line. The light curtain is then created by sweeping the illumination and sensing planes in synchrony.
In another embodiment, the light curtains may have a programmable shape to allow for the detection of objects along a particular perimeter, such as detecting a vehicle encroaching on the lane of a self-driving vehicle.
In yet another embodiment, the capability of the light curtain can be enhanced by using correlation-based time-of-flight (ToF) sensors.
The light curtain is capable of detecting the presence of objects that intersect a virtual shell around the system. By detecting only the objects that intersect with the virtual shell, many tasks pertaining to collision avoidance and situational awareness can be solved with little or no computational overhead.
Light curtains provide a novel approach for proximity sensing and collision avoidance that has immense benefits for autonomous devices. The shape of the light curtain can be changed on the fly and can be used to provide better detections, especially under strong ambient light (like sunlight) as well as global illumination (like fog).
The Detailed Description of the invention will be better understood when read in conjunction with the figures appended hereto. For the purpose of illustrating the invention, a preferred embodiment is shown in the drawings. It is understood, however, that this invention is not limited to these embodiments or the precise arrangements shown.
The invention is described in detail below. Implementations used in the description of the invention, including implementations using various components or arrangements of components, should be considered exemplary only and are not meant to limit the invention in any way. As one of skill in the art would realize, many variations on implementations discussed herein which fall within the scope of the invention are possible. Accordingly, the exemplary methods and apparatuses disclosed herein are not to be taken as limitations on the invention, but as an illustration thereof.
In one embodiment, the device consists of a line scan laser (illumination module 100) and a line scan sensor (sensor module 120) as shown in a top view in
To steer the light beam, a steerable galvo mirror 108 is used. In preferred embodiments, the galvo mirror has a dimension of 11 mm×7 mm and a 22.5° mechanical angle, providing the sensor and laser with a 45° field of view. The galvo mirror 108 takes approximately 500 μs to rotate through a 0.2° optical angle. A micro-controller is used to synchronize the sensor, the laser and the galvo mirrors 108, 126. In preferred embodiments, the galvo mirror 108 used for the illumination module 100 and the galvo mirror 126 used for the sensor module 120 will be identical. In alternate embodiments, a mechanical motor may be used to steer the light beam and sensor. In yet other embodiments, a 2D sensor with a rolling shutter or a region-of-interest mask may be used to effectively emulate a faster line sensor.
Sensor module 120 comprises a line sensor 122, lens 124 and steerable galvo mirror 126. In one embodiment, line sensor 122 is a line scan intensity sensor. In one embodiment, lens 124 is a 6 mm f/2 S-mount lens having a diagonal field-of-view of 45° and an image circle 7 mm in diameter. The line sensor may have 2048×2 total pixels, with a pixel size of approximately 7 μm×7 μm. In preferred embodiments, only the central 1000 pixels of the sensor are used due to the limited circle of illumination of the lens. Preferably, the line scan sensor is capable of scanning 95,000 lines per second and may be fitted with an optical bandpass filter having a 630 nm center wavelength and a 50 nm bandwidth to suppress ambient light.
In preferred embodiments, the rotation axes are aligned to be parallel and fixed with a baseline of 300 mm. The resulting field-of-view of the system is approximately 45° by 45°.
The Powell lens 106 fans the laser beam into a planar sheet of light and the line sensor 122 senses light from a single plane. In the general configuration, the two planes intersect at a line in 3D, as shown in a perspective view
When operated in strong ambient light, for example, sunlight, the sensor 122 also measures the contribution of the ambient light illuminating the entire scene. To suppress this, two image captures are performed at sensor 122 for each setting of the galvos, one with and one without illumination from laser 102, each with an exposure of 100 μs. The images may then be subtracted to filter out the ambient light.
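For illustration, the ambient-suppression step can be sketched in a few lines of code. The hook capture_line(laser_on) below is a hypothetical hardware interface (not part of the described device) that returns the line sensor's pixel intensities for a single 100 μs exposure; the sketch simply differences the laser-on and laser-off captures.

```python
import numpy as np

def suppress_ambient(capture_line):
    """Difference of laser-on and laser-off exposures for one galvo setting.

    capture_line(laser_on) is a hypothetical hardware hook returning the
    line sensor's pixel intensities for a single 100 us exposure.
    """
    with_laser = capture_line(laser_on=True).astype(np.float32)
    without_laser = capture_line(laser_on=False).astype(np.float32)
    return with_laser - without_laser  # the ambient contribution cancels out
```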
In one embodiment, galvo mirrors 108, 126 may take time to stabilize after rotation. The stabilization time may be as much as 500 μs before the mirrors are stable enough to capture a line. This limits the overall frame-rate of the device. Adding the two 100 μs exposures (laser on and laser off) used to filter out ambient light, the device can capture 1,400 lines per second. If the light curtains are designed to contain 200 lines, the entire light curtain can be refreshed at a rate of 5.6 fps. Galvo mirrors which stabilize in less than 500 μs would allow curtain refresh rates of 20 to 30 fps.
Light Curtains with Parallel Axes of Rotation—
Consider first the case where the sensor and laser can each be rotated about a single fixed axis, with the two axes parallel, as shown in
p0+ur
where p0 is any 3D point on the line, r is the direction of the rotation axis, and u∈(−α, α) is the offset along the axis of rotation (see
s(t,u)=p(t)+ur
where p(t)∈ℝ³ is a 3D path that describes the points scanned by the center pixel on the line sensor and t∈[0,1] is the parameterization of the path.
Given a light curtain s(t, u), the rotation angles of the sensor and laser can be computed. Without loss of generality, it is assumed that the origin of the coordinate system is at the midpoint between the centers of the line sensor and the laser. It is further assumed that the rotation axes are aligned along the y-axis and that the 3D path can be written as p(t)=[x(t), 0, z(t)]T. To achieve this light curtain, suppose that the laser rotates about its axis with an angular profile of θp(t), where the angle is measured counter-clockwise with respect to the x-axis. Similarly, the line sensor rotates with an angular profile of θc(t). Let b be the baseline between the laser and the line sensor. θc(t) and θp(t) can be derived as:
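The corresponding equations are omitted above; as a hedged illustration only, the sketch below computes one plausible form of these angular profiles under the stated assumptions (origin at the midpoint of the baseline, rotation axes along the y-axis, path p(t)=[x(t), 0, z(t)]). Placing the sensor at x=−b/2 and the laser at x=+b/2 is an additional assumption made here for concreteness.

```python
import numpy as np

def rotation_profiles(x, z, b):
    """Sensor and laser angles (radians, measured counter-clockwise from the
    x-axis) that make the sensing and illumination planes intersect along the
    curtain path p(t) = [x(t), 0, z(t)].

    b : baseline in meters; sensor assumed at (-b/2, 0, 0), laser at (+b/2, 0, 0).
    """
    theta_c = np.arctan2(z, x + b / 2.0)  # angle of the ray from the sensor to p(t)
    theta_p = np.arctan2(z, x - b / 2.0)  # angle of the ray from the laser to p(t)
    return theta_c, theta_p

# Example: a flat curtain 10 m ahead, spanning x in [-5, 5] m, with a 300 mm baseline.
x = np.linspace(-5.0, 5.0, 200)
z = np.full_like(x, 10.0)
theta_c, theta_p = rotation_profiles(x, z, b=0.3)
```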
General Light Curtains—
The light curtain device can also be configured with line sensor 122 and laser 102 rotating over non-parallel axes or with each of them enjoying full rotational degrees of freedom. These configurations have their own unique advantages. When the devices have full rotational freedom, i.e., are capable of rotating around a point with no restrictions, then any ruled surface (including, for example, a Möbius strip) can be generated as a light curtain. Full rotational freedom, however, is hard to implement because multi-axis galvos or gimbals are needed and are often cost-prohibitive.
Rotation Over Two Axes—
When the line sensor and line laser rotate over two axes, lc and lp, respectively, the ruled surface generated for a given 3D path p(t) can be determined. Each line in the surface must not only pass through p(t), but also be co-planar with lc and lp simultaneously. The parametric form of the generated light curtain s(t, u)⊂ℝ³ can be written as:
s(t,u)=p(t)+ur(t)
where u is a scalar and r(t)∈ℝ³ is the direction of the line, which is analyzed below from the simplest to the most general configurations.
When lc and lp are non-coplanar, as shown in
the plane containing p(t) and lc will, in general, intersect lp at a single point, and r(t) is the direction from p(t) toward that intersection point. If there is no intersection, meaning lp is parallel to this plane, r(t) should be in the direction of lp.
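As a sketch of this construction (not necessarily the form of the omitted equations), the direction r(t) can be computed as the intersection of the plane through p(t) and lc with the plane through p(t) and lp. The helper below assumes each axis is specified by a point and a unit direction; when lp is parallel to the plane through p(t) and lc, the cross-product construction automatically returns the direction of lp, consistent with the case described above.

```python
import numpy as np

def curtain_direction(p, c0, M, p0, N, eps=1e-9):
    """Direction r of the curtain line through point p that is co-planar with
    both the sensor axis (point c0, direction M) and the laser axis (point p0,
    direction N)."""
    p, c0, M, p0, N = [np.asarray(v, dtype=float) for v in (p, c0, M, p0, N)]
    n_c = np.cross(M, p - c0)  # normal of the plane containing p and the sensor axis
    n_p = np.cross(N, p - p0)  # normal of the plane containing p and the laser axis
    r = np.cross(n_c, n_p)     # direction of the intersection of the two planes
    if np.linalg.norm(r) < eps:  # degenerate case: the two planes (nearly) coincide
        r = N
    return r / np.linalg.norm(r)
```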
Given a 3D path p(t), the rotation angles of the line sensor and line laser can be computed as follows. Without loss of generality, assume that the origin of the coordinate system is at the midpoint between the centers of the line sensor 122 and line laser 102. The distance between the two centers is b, and the directions of lc and lp are M and N∈ℝ³, respectively. The given 3D path is in the y=0 plane and can be written as p(t)=[x(t), 0, z(t)]T. The rotation angle of sensor 122, which is measured counter-clockwise with respect to the xy-plane, is:
and the rotation angle of laser 102 is similar. They are:
When lc and lp are co-planar, the equations can be simplified. Without loss of generality, assume both lie in the xy-plane and are oriented at ±γ to the y-axis,
and when lc∥lp, γ=0, this can be simplified further as:
Rotation Over Two Points—
When line sensor 122 and line laser 102 can rotate over two points respectively (full rotational degree of freedom), any ruled surface can be generated.
Optimizing Light Curtains—
Parameters of interest in practical light curtains, for example their thickness and the SNR of measured detections, can be quantified, and approaches to optimizing them are presented herein. Of particular interest are minimizing the thickness of the curtain and optimizing exposure time and laser power for improved detection accuracy when the curtain spans a large range of depths.
Thickness of Light Curtain—
The light curtain produced by the device described herein has a finite thickness due to the finite size of the sensor pixels and the finite thickness of the laser illumination. Suppose that the laser spot has a thickness of ΔL meters and each pixel has an angular extent of δc radians. Given a device with a baseline of length b meters, imaging a point at depth z(t)=z, the thickness of the light curtain is given as the area of a parallelogram shaded in
where rc and rp are the distances between the intersected point and the sensor and laser, respectively.
Given that different light curtain geometries can produce curtains of the same area, a more intuitive and meaningful metric for characterizing the thickness is the length:
In any given system, changing the laser thickness ΔL requires changing the optics of the illumination module. Similarly, changing δc requires changing either the pixel pitch or the field-of-view of the sensor. In contrast, varying the baseline provides an easier alternative for changing the thickness of the curtain, involving only a single translation. This is important because different applications often have differing needs regarding the thickness of the curtain. A larger baseline helps in achieving very thin curtains, which is important when there is a critical need to avoid false alarms. On the other hand, thicker curtains, achieved with a smaller baseline, are important in scenarios where it is critical to avoid mis-detections. Further, a sufficiently thick curtain also helps in avoiding mis-detections caused by the inherent discreteness of the curtain.
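The exact expressions appear in the omitted equations above; as an illustrative approximation only, the curtain cross-section at a point can be modeled as the parallelogram formed by the intersection of the pixel's viewing strip (width rc·δc) and the laser sheet (thickness ΔL), which meet at the triangulation angle. The sketch below uses this model, together with example parameters drawn from the embodiment described earlier (7 μm pixels, 6 mm lens), to show how the thickness shrinks as the baseline grows; the formula inside is this approximation, not the patent's own equation.

```python
import numpy as np

def curtain_thickness(x, z, b, delta_c, delta_L):
    """Approximate cross-section of the curtain at the point (x, 0, z).

    Models the intersection of the pixel's viewing strip (width rc * delta_c)
    and the laser sheet (thickness delta_L) as a parallelogram and returns
    (area in m^2, length = area / delta_L in m).  Illustrative only.
    """
    rc = np.hypot(x + b / 2.0, z)  # distance from the sensor to the point
    rp = np.hypot(x - b / 2.0, z)  # distance from the laser to the point
    sin_phi = b * z / (rc * rp)    # sine of the triangulation angle at the point
    area = (rc * delta_c) * delta_L / sin_phi
    return area, area / delta_L

# Thickness versus baseline for a point 10 m straight ahead.
for b in (0.1, 0.3, 1.0):
    _, length = curtain_thickness(x=0.0, z=10.0, b=b,
                                  delta_c=7e-6 / 6e-3,  # pixel pitch / focal length
                                  delta_L=0.005)        # assumed 5 mm laser thickness
    print(f"baseline {b:.1f} m -> curtain thickness ~{length:.2f} m")
```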
Minimizing the Thickness and Energy for Nearby Light Curtains—
When a light curtain is far away, the largest possible baseline can be used to minimize uncertainty, and the consumed energy is roughly the same regardless of how the device is configured. When the light curtain is nearby, however, the best configuration is nontrivial.
Given the curtain shape p(τ) in the xz plane, the optimization problem of minimizing thickness by triangulation can be formalized as follows:
where rc(τ) and rp(τ) are the distances from p(τ) to the sensor rotation center [C, 0, 0] and the laser rotation center [P, 0, 0], respectively, z(τ) is the depth of p(τ), and
is the range within which the rotation centers of the sensor and laser can be positioned. For simplicity, only a cross-section of the light curtain in the xz plane is considered.
When the objective is to minimize energy, the problem becomes
When the objective is to minimize one quantity subject to the other being smaller than some value, such as
the optimization result is hard to predict.
Adapting Laser Power and Exposure—
A key advantage of the light curtain device is that the power of the laser or the exposure time can be adapted for each intersecting line to compensate for light fall-off, which is inversely proportional to the square of the depth. In a traditional projector-sensor system, it is commonplace to increase the brightness of the projection to compensate for light fall-off so that far-away points in the scene can be well illuminated. However, this would imply that points in the scene close to the sensor saturate easily. This, in turn, would require a high dynamic range sensor and reduce resource efficiency due to the need to send strong light to nearby scene points.
In contrast, the system of the present invention has an additional degree of freedom wherein the power of the laser and/or the exposure time of the sensor can be adjusted according to depth such that light fall-off is compensated to the extent possible under the device constraints and with respect to eye safety. Further, because the system only detects the presence or absence of objects, in an ideal scenario where albedo is the same everywhere, the laser can send just enough light to overcome the readout noise of the sensor or the photon noise of ambient light, and only a 1-bit sensor is required.
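As a sketch of this adaptation (the proportionality rule, budget, and cap below are assumptions made for illustration, not values from the description), the per-line laser power can be scaled with the square of each line's depth, normalized to an average-power budget, and clipped to an eye-safe maximum:

```python
import numpy as np

def allocate_power(depths, avg_power_budget, max_power):
    """Per-line laser power that counteracts inverse-square light fall-off.

    depths           : depths z(t) of the curtain lines, in meters
    avg_power_budget : allowed average power over the whole curtain (watts)
    max_power        : eye-safety cap per line (watts)
    """
    weights = np.asarray(depths, dtype=float) ** 2          # compensate 1/z^2 fall-off
    power = weights * (avg_power_budget / weights.mean())   # meet the average budget
    return np.minimum(power, max_power)                     # respect the eye-safety cap

# Example: a curtain whose lines range from 2 m to 30 m away.
depths = np.linspace(2.0, 30.0, 200)
per_line_power = allocate_power(depths, avg_power_budget=0.2, max_power=1.0)
```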
Combining with Time-of-Flight Sensors—
The analysis above also indicates that the light curtain can be expected to get thicker, quadratically, with depth. Increasing baseline and other parameters of the system can only alleviate this effect in part due to the physical constraints on sensor size, laser spot thickness as well as the baseline. Replacing the line intensity sensor with a 1D continuous wave time-of-flight (CW-TOF) sensor alleviates the quadratic dependence of thickness with depth.
CW-TOF sensors measure phase to obtain depth. A CW-TOF sensor works by illuminating the scene with an amplitude modulated wave (typically, a periodic signal of frequency fm Hz) and measuring the phase difference between the illumination and the light intensity received at each pixel. The phase difference ϕ and the depth d of the scene point are related as:
ϕ=mod(4πfmd/c, 2π)
where c is the speed of light.
As a consequence, the depth resolution of a TOF sensor is constant and independent of depth. Further, the depth resolution increases with the frequency of the amplitude wave. However, TOF-based depth recovery has a phase wrapping problem due to the presence of the mod (⋅) operator, which implies that the depth estimate has an ambiguity problem. This problem gets worse at higher frequencies. In contrast, traditional triangulation-based depth estimation has no ambiguity problem, but at the cost of quadratic depth uncertainty.
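A minimal numerical illustration of this relation and of the wrapping ambiguity follows; the 50 MHz modulation frequency is only an example value, not a parameter of the described device.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def phase_from_depth(d, fm):
    """Measured CW-TOF phase for a scene point at depth d (meters)."""
    return np.mod(4.0 * np.pi * fm * d / C, 2.0 * np.pi)

def depth_from_phase(phi, fm):
    """Depth implied by a phase measurement; ambiguous up to multiples of
    the unambiguous range c / (2 * fm)."""
    return C * phi / (4.0 * np.pi * fm)

fm = 50e6                            # 50 MHz modulation (example value)
unambiguous_range = C / (2.0 * fm)   # 3 m here, so depths of 1 m and 4 m alias
print(phase_from_depth(1.0, fm), phase_from_depth(1.0 + unambiguous_range, fm))
```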
The complementary strengths of traditional triangulation and CW-TOF can be leveraged to enable light curtains with near-constant thickness over a large range. Triangulation and phase are fused by measuring the phase (as with regular correlation-based ToF) in addition to the usual measurement of intensity.
Knowing the depth of the curtain, the appropriate phase can be calculated, and pixels with phase values that differ significantly from it can be discarded. An alternative approach achieves this by performing phase-based depth gating using appropriate codes at illumination and sensing. The use of triangulation automatically eliminates the depth ambiguity of phase-based gating provided the thickness of the triangulation is smaller than the wavelength of the amplitude wave. With this, it is possible to create thinner light curtains over a larger depth range.
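One way to express this fusion is sketched below, under the assumption that the triangulated depth of the curtain line is known and that its uncertainty is smaller than the unambiguous range; the function and parameter names are illustrative.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def unwrap_with_triangulation(phi, z_curtain, fm):
    """Resolve the CW-TOF phase ambiguity using the triangulated curtain depth.

    phi        : measured phase at a pixel (radians)
    z_curtain  : depth of the intersecting curtain line from triangulation (m)
    fm         : modulation frequency (Hz)
    Returns the phase-based depth in the wrapping period closest to z_curtain.
    """
    period = C / (2.0 * fm)                  # unambiguous range of the ToF sensor
    d0 = C * phi / (4.0 * np.pi * fm)        # wrapped phase depth in [0, period)
    k = np.round((z_curtain - d0) / period)  # integer number of phase wraps
    return d0 + k * period

def on_curtain(phi, z_curtain, fm, tol):
    """Retain a pixel only if its unwrapped depth lies within tol of the curtain."""
    return np.abs(unwrap_with_triangulation(phi, z_curtain, fm) - z_curtain) < tol
```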
See Through Scattering Media—
In traditional imaging, the sensor receives first-bounce light reflected from an object as well as light single-scattered by the medium. With light curtains, the line sensor 122 optically avoids single-scattered light and receives only multi-scattered light as its global component. The ratio between first-bounce light and global light is therefore much higher, and contrast is better.
The light curtain method and device described herein has many benefits. The shape of a light curtain is programmable and can be configured dynamically to suit the demands of the immediate task. For example, light curtains can be used to determine whether a vehicle is changing lanes in front, whether a pedestrian is in the crosswalk, or whether there are vehicles in neighboring lanes. Similarly, a robot might use a curtain that extrudes its planned (even curved) motion trajectory.
Given an energy budget, in terms of average laser power, exposure time, and refresh rate of the light curtain, higher power and exposure can be allocated to lines in the curtain that are further away to combat inverse square light fall-off. This is a significant advantage over traditional depth sensors that typically expend the same high power in all directions to capture a 3D point cloud in an entire volume.
The optical design of the light curtain shares similarities with confocal imaging in that small regions are selectively illuminated and sensed. When imaging in scattering media, such as fog and murky waters, this has the implicit advantage that many multi-bounce light paths are optically avoided, thereby providing images with increased contrast.
A key advantage of light curtains is that illumination and sensing can be concentrated to a thin region. Together with the power and exposure adaptability, this provides significantly better performance under strong ambient illumination, including direct sunlight, at large distances (i.e., 20-30 m). The performance increases under cloudy skies and indoors to 40 m and 50 m respectively.
At any instant, the sensor only captures a single line of the light curtain, which often has small depth variations and hence little variation in intensity fall-off. Thus, the dynamic range of the measured brightness can be low. As such, even a one-bit sensor with a programmable threshold would be ample for the envisioned tasks.
Many sensor types are available for use with the device. Any line sensor could be used with the described device, including intensity sensors (CMOS, CCD, InGaAs), time-of-flight (ToF) sensors (correlation, SPAD), and neuromorphic sensors (DVS).
The system may be run under the control of one or more microprocessors in communication with memory containing software implementing the functions of the system. The movement of the galvo mirrors 108, 126 is under the control of the software to define the shape of the light curtain. The software may be configurable to allow the definition of light curtains of various shapes. In addition, the software may control the cycling of light source 102 as well as the timing of the reading of data from line sensor 122 and the application of any filtering to the data, for example, the filtering of ambient light. Objects are detected when they break the light curtain, causing a change in the light in the line of pixels sensed by line sensor 122. Upon detection of an object that has breached the light curtain, an alert may be raised and communicated off-unit.
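A highly simplified control loop for one curtain refresh might look like the sketch below. The hooks set_galvo_angles, capture_curtain_line and raise_alert are hypothetical placeholders for the microcontroller and host-software interfaces, and capture_curtain_line is assumed to return one ambient-suppressed line (laser-on minus laser-off) as described earlier.

```python
import numpy as np

def sweep_curtain(theta_c_profile, theta_p_profile,
                  set_galvo_angles, capture_curtain_line,
                  raise_alert, threshold):
    """Sweep one full light curtain and report any breach.

    theta_c_profile, theta_p_profile : per-line galvo angles defining the curtain shape
    """
    for line_idx, (theta_c, theta_p) in enumerate(zip(theta_c_profile, theta_p_profile)):
        set_galvo_angles(theta_c, theta_p)   # galvos must settle before exposing
        line = capture_curtain_line()        # ambient-suppressed pixel intensities
        if np.any(line > threshold):         # an object intersects this curtain line
            raise_alert(line_idx, line)      # e.g., communicate the breach off-unit
```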
To those skilled in the art to which the invention relates, many modifications and adaptations of the invention will suggest themselves. The exemplary methods and apparatuses disclosed herein are not to be taken as limitations on the invention, but as an illustration thereof. The intended scope of the invention is defined by the claims which follow.
This application is a national phase filing under 35 U.S.C. § 371 claiming the benefit of and priority to International Patent Application No. PCT/US2019/021569, filed on Mar. 11, 2019, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/761,479, filed Mar. 23, 2018. This application is a Continuation-In-Part of U.S. patent application Ser. No. 15/545,391, which is a national phase filing under 35 U.S.C. § 371 claiming the benefit of and priority to International Patent Application No. PCT/US2016/017942, filed on Feb. 15, 2016, which claims the benefit of U.S. Provisional Patent Application Ser. No. 62/176,352, filed Feb. 13, 2015. The entire contents of these applications are incorporated herein by reference.
This invention was made with government support under CNS1446601 awarded by the NSF, N000141512358 awarded by the ONR, N000141612906 awarded by the ONR, DTRT13GUTC26 awarded by the DOT, and HR00111620021 awarded by DARPA. The government has certain rights in the invention.