Three dimensional (3D) imaging time-of-flight (TOF) cameras are active-type systems. In general, such systems are based on the phase-measurement technique: emitted intensity-modulated light is reflected by the scene and imaged onto a sensor. The photo-generated electrons are demodulated in the sensor, and, based on the phase information, the distance for each pixel is deduced.
A major problem of a TOF system is that the sensor has to handle high dynamic ranges. The modulated signal received by the camera drops with the square of the distance. Furthermore, the reflectivity of the targets might vary to a large degree.
Due to this high requirement on dynamic range, stray light originating from the strong signal adding to the weak signal is a dominant problem for numerous applications of the TOF technology.
Stray light in TOF systems can also come from the camera itself.
Again, the stray light problem is exacerbated by the presence of relatively bright (reflective) objects 10 and dark (absorbing) objects 12 within the same scene. Light 16 from the bright object 10 contributes to the response detected by the pixels that receive the light 18 from the dark object 12 as illustrated by the internal reflections that give rise to the non-ideal path 14.
Signal A from object A (10) on pixel A: strong signal=large amplitude
Signal B from object B (12) on pixel B: weak signal=small amplitude
S=Signal due to stray light from object A on pixel B
B′=Resulting (measured) signal on pixel B
The invention proposes the use of more than one modulation frequency in order to compensate for multi-path measurement errors, that is, phase measurement errors caused by stray light and/or multiple reflections in the scene.
In general, according to one aspect, the invention features a time of flight three dimensional imaging system. This system comprises an illumination module that generates modulated light that is intensity modulated at a first frequency and at a second frequency, a sensor that detects the modulated light at the first frequency and at the second frequency, and a controller that generates a three dimensional image from the detected modulated light and compensates for multipath error in the three dimensional image.
In an embodiment, the controller determines a vector associated with the multipath error, uses iterative approximations to determine the vector, and corrects a depth measured by the controller based on the vector.
In general, according to one aspect, the invention features a time of flight three dimensional imaging method. The method comprises generating modulated light that is intensity modulated at a first frequency and at a second frequency, detecting the modulated light at the first frequency and at the second frequency, generating a three dimensional image from the detected modulated light, and compensating the three dimensional image for multipath error.
The above and other features of the invention including various novel details of construction and combinations of parts, and other advantages, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method and device embodying the invention are shown by way of illustration and not as a limitation of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.
In the accompanying drawings, reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale; emphasis has instead been placed upon illustrating the principles of the invention. Of the drawings:
Intensity modulated illumination light ML1A at a first modulation frequency from an illumination module or light source IM is sent to the object OB of a scene. A fraction of the total optical power sent out is reflected to the camera 100 and detected by the 3D imaging sensor 200 as reflected light ML2A.
Each pixel 101 of the sensor 200 is capable of demodulating the impinging light signal ML2 as described above.
A controller C regulates the timing of the camera 100 so that the demodulation is synchronous with the modulation of light ML1A of the light source IM. The phase values of all pixels correspond to the particular distance information of the corresponding point in the scene. The two-dimensional gray scale image with the distance information is converted into a three-dimensional image by the controller C. This is displayed to a user via display M or used as a machine vision input.
The distance R for each pixel is calculated by
R=(c*TOF)/2,
with c as the speed of light and TOF corresponding to the time-of-flight. Continuously intensity-modulated light is sent out by the illumination module or light source IM, reflected by the object and detected by the sensor 200. With each pixel 101 of the sensor 200 being capable of demodulating the optical signal at the same time, the sensor is able to deliver 3D images in real-time, i.e., frame rates of up to 30 Hertz (Hz), or even more, are possible. Continuous sine modulation delivers the phase delay (P) between the emitted signal and the received signal, also corresponding directly to the distance R:
R=(P*c)/(4*pi*fmod),
where fmod is the modulation frequency of the optical signal ML1A generated by light source IM. Typical state-of-the-art modulation frequencies range from a few MHz up to a few hundreds of MHz or even GHz.
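For illustration only, the phase-to-distance relation above can be sketched in a few lines of Python (the function name and the example values are invented for this sketch and are not part of the disclosure):

    import math

    C = 299792458.0  # speed of light in m/s

    def phase_to_distance(phase_rad, fmod_hz):
        # R = (P*c)/(4*pi*fmod), with P the measured phase delay in radians
        return (phase_rad * C) / (4.0 * math.pi * fmod_hz)

    # Example: a phase delay of pi/2 at a 30 MHz modulation frequency
    print(phase_to_distance(math.pi / 2, 30e6))  # ~1.25 m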
The electronic timing circuit or controller C, employing for example a field programmable gate array (FPGA), generates the signals for the synchronous channel activation in the demodulation stage.
Using these four samples A0, A1, A2 and A3, the three modulation parameters amplitude A, offset B and phase shift P of the modulation signal can be extracted by the equations
A=sqrt[(A3−A1)^2+(A2−A0)^2]/2
B=[A0+A1+A2+A3]/4
P=arc tan [(A3−A1)/(A0−A2)]
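As an illustrative sketch (Python; not part of the disclosure), the three parameters can be computed from the four samples as follows. Using atan2 instead of a plain arc tangent keeps the phase in the correct quadrant, a common practical refinement of the equation above:

    import math

    def demodulate(a0, a1, a2, a3):
        # Amplitude A, offset B and phase P from the four samples A0..A3
        amplitude = math.sqrt((a3 - a1) ** 2 + (a2 - a0) ** 2) / 2.0
        offset = (a0 + a1 + a2 + a3) / 4.0
        phase = math.atan2(a3 - a1, a0 - a2)  # quadrant-safe arc tangent
        return amplitude, offset, phase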
The distance measurement scheme is based on the assumption that the modulated illumination travels directly from the illumination LEDs IM to the object and back to the sensor 200 of the camera, so that the total distance traveled by the light is twice the distance from the camera to the object. However, it is possible that objects may be arranged in the scene such that light takes a less direct path than this.
For example, the light ML1A from the illumination module IM may be reflected by a first object before being reflected by the measured object and finally return to the camera sensor 200. In this situation the light travels by the direct and also indirect paths. The apparent distance is then a weighted average of the path distances, weighted by the strength of signal returned via each path. The end result is that distance measurements are wrong.
Another common situation of multipath appears when measuring objects that have concave structures. A good example is measuring a scene with a corner between two walls, as illustrated in the drawings.
For purposes of illustration, the following description uses two modulation frequencies ML1A, ML1B for modulating the light source IM and detection by the sensor 200.
In some embodiments, the two modulation frequencies ML1A, ML1B are generated by the light source IM serially in time. In other embodiments, the two modulation frequencies ML1A, ML1B are generated by the light source IM simultaneously at two different wavelengths. In this latter example, the sensor 200 comprises a wavelength discriminating sensor that can separately detect the two different wavelengths. One example is a sensor 200 with two different sensor pixel arrays and two bandpass filters. One of the bandpass filters passes the wavelength of the first modulation frequency to the first sensor pixel array of the sensor 200; and the other of the bandpass filters passes the wavelength of the second modulation frequency to the second sensor pixel array of the sensor 200.
In the absence of any multi-paths, the measured phase of a target at a range of e.g. 2 meters (m) needs to be:
φ=(Rtarget/Rmax)*360°,
where Rtarget is the range of the target and Rmax corresponds to the non-ambiguity range, which is:
Rmax=c/(2*fmod),
with:
Rmax: non-ambiguity range
c: speed of light
fmod: modulation frequency
In the case of a camera modulating at 15 MHz, the non-ambiguity range becomes ~10 m; a modulation frequency of 30 MHz corresponds to a 5 m non-ambiguity range.
For any measurement of a target at a distance smaller than 5 m, the phase measured with the 30 MHz modulation has to be two times the phase measured with the 15 MHz modulation.
In 3D TOF systems, the phase is typically reconstructed based on four samples of the impinging sine, A0°, A90°, A180° and A270°. The following equation is used to calculate the phase:
φ=arc tan [(A270°−A90°)/(A0°−A180°)]
In case of the target being at 2 m, the phases are:
φ30 MHz=144°
φ15 MHz=72°
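These values follow directly from the two relations above and can be reproduced with a short calculation (an illustrative sketch; the function names are invented here):

    C = 299792458.0  # speed of light in m/s

    def non_ambiguity_range(fmod_hz):
        # Rmax = c/(2*fmod)
        return C / (2.0 * fmod_hz)

    def expected_phase_deg(r_target_m, fmod_hz):
        # phase = (Rtarget/Rmax)*360 degrees
        return (r_target_m / non_ambiguity_range(fmod_hz)) * 360.0

    print(non_ambiguity_range(15e6))      # ~10 m
    print(non_ambiguity_range(30e6))      # ~5 m
    print(expected_phase_deg(2.0, 30e6))  # ~144 degrees
    print(expected_phase_deg(2.0, 15e6))  # ~72 degrees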
Assuming now that a close object generates stray light, a disturbing stray-light vector has to be added to the vector generated by the target. The resulting vector therefore includes the error that is measured when no multi-path compensation is applied.
The error caused by the indirect measurement (multi-path) depends on its phase and its amplitude with respect to the phase and amplitude of the direct measurement.
In an analytical form, the measured vector can be described as:
V⃗measured=V⃗direct+V⃗indirect
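The effect of this vector addition on the measured phase can be demonstrated numerically (an illustrative sketch; the amplitudes and the stray-light phase are arbitrary example values, not taken from the disclosure):

    import cmath, math

    # Direct return from a target at 2 m measured at 30 MHz (phase 144 degrees)
    v_direct = 1.0 * cmath.exp(1j * math.radians(144.0))
    # Disturbing stray-light vector from a close, bright object
    v_indirect = 0.3 * cmath.exp(1j * math.radians(50.0))

    v_measured = v_direct + v_indirect
    print(math.degrees(cmath.phase(v_measured)))  # ~127 degrees instead of 144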
Furthermore, we know that looking at the direct measurement only,
φdirect,30=2*φdirect,15
In the case that φ30 is not within a certain phase noise interval around 2*φ15, the controller C assumes the presence of multi-path in the measurement. The phase noise interval can be determined by the estimated noise level on the measured range value.
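One possible form of this consistency check is sketched below (illustrative only; the 2-degree noise threshold is an arbitrary example value standing in for the estimated phase noise):

    def multipath_suspected(phi30_deg, phi15_deg, noise_deg=2.0):
        # Flag multi-path when phi30 deviates from 2*phi15 by more than
        # the phase noise interval, with wrap-around handled explicitly
        residual = (phi30_deg - 2.0 * phi15_deg) % 360.0
        if residual >= 180.0:
            residual -= 360.0
        return abs(residual) > noise_deg

    print(multipath_suspected(144.0, 72.0))  # False: consistent measurement
    print(multipath_suspected(127.0, 65.0))  # True: multi-path suspected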
In terms of the measured vector components and the indirect vector components, that means:
φdirect,30=arc tan 2(ymeas,30−yindirect,30; xmeas,30−xindirect,30)
φdirect,15=arc tan 2(ymeas,15−yindirect,15; xmeas,15−xindirect,15)
While xmeas and ymeas are known, xindirect and yindirect are derived from the indirect vector as:
xindirect,30=Aindirect,30*cos(φindirect,30)
yindirect,30=Aindirect,30*sin(φindirect,30)
xindirect,15=Aindirect,15*cos(φindirect,15)
yindirect,15=Aindirect,15*sin(φindirect,15)
Since the indirect path is the same for the 30 MHz and the 15 MHz measurements, it follows that:
φindirect,30=2*φindirect,15
This means we have the following remaining unknowns:
Aindirect,30
Aindirect,15
φindirect,15
Concerning the amplitudes, it can be further assumed that:
Aindirect,30=k*Aindirect,15
This assumption is appropriate since both amplitudes derive from measurements of the same objects. The ratio k of the direct amplitudes is generally known and constant for a 3D TOF system.
In this example, the controller C assumes that both amplitudes are the same, meaning the ratio k=1.
The result is the following equation:
arc tan 2(ymeas,30−Aindirect,30*sin(2*φindirect,15);xmeas,30−Aindirect,30*cos(2*φindirect,15))=2*arc tan 2(ymeas,15−Aindirect,30*sin(φindirect,15);xmeas,15−Aindirect,30*cos(φindirect,15))
The last two unknowns are:
Aindirect,30
φindirect,15
Based on iterative approximation methods, these two unknowns are found, or at least estimated, by the controller C. The indirect vector can therefore be determined, the measurement compensated by the controller C, and the compensated image displayed on the monitor M.
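The disclosure does not prescribe a particular approximation method. One possible illustrative realization is a brute-force grid search over the two remaining unknowns (all resolution parameters below are invented for this sketch):

    import math

    def solve_indirect(xm30, ym30, xm15, ym15,
                       a_max=1.0, a_steps=100, phi_steps=360):
        # Search the indirect amplitude A and phase phi15 that best satisfy
        # phi_direct,30 = 2*phi_direct,15 (amplitude ratio k = 1 assumed)
        best_err, best_a, best_phi = float('inf'), 0.0, 0.0
        for i in range(1, a_steps + 1):
            a = a_max * i / a_steps
            for j in range(phi_steps):
                phi15 = 2.0 * math.pi * j / phi_steps
                # Subtract the candidate indirect vector at both frequencies
                d30 = math.atan2(ym30 - a * math.sin(2 * phi15),
                                 xm30 - a * math.cos(2 * phi15))
                d15 = math.atan2(ym15 - a * math.sin(phi15),
                                 xm15 - a * math.cos(phi15))
                err = abs((d30 - 2 * d15 + math.pi) % (2 * math.pi) - math.pi)
                if err < best_err:
                    best_err, best_a, best_phi = err, a, phi15
        return best_a, best_phi  # estimated A_indirect and phi_indirect,15

With the indirect vector estimated, the controller can subtract it from the measured vector at each frequency and recompute the direct phase, and hence the corrected range.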
In another embodiment, the basic vector equation of the multi-path problem is recognized as:
V⃗measured=V⃗direct+V⃗indirect
The measurement is compensated by the controller C by optimizing the vectors in such a way as to best fulfill the following restrictions of the direct and the indirect path:
Direct path:
φdirect,30=2*φdirect,15
Indirect path:
φindirect,30=2*φindirect,15
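One way to set up this optimization, as an illustrative sketch (a squared-residual objective that any general-purpose minimizer could search; the parameterization and the ratio k are carried over from the discussion above):

    import math

    def residual(params, meas30, meas15, k=1.0):
        # params = (A_direct, phi_direct,15, A_indirect, phi_indirect,15);
        # the 30 MHz phases are constrained to twice the 15 MHz phases and
        # the 30 MHz amplitudes to k times the 15 MHz amplitudes
        a_dir, p_dir, a_ind, p_ind = params
        x30 = k * (a_dir * math.cos(2 * p_dir) + a_ind * math.cos(2 * p_ind))
        y30 = k * (a_dir * math.sin(2 * p_dir) + a_ind * math.sin(2 * p_ind))
        x15 = a_dir * math.cos(p_dir) + a_ind * math.cos(p_ind)
        y15 = a_dir * math.sin(p_dir) + a_ind * math.sin(p_ind)
        return ((x30 - meas30[0]) ** 2 + (y30 - meas30[1]) ** 2 +
                (x15 - meas15[0]) ** 2 + (y15 - meas15[1]) ** 2)

Minimizing this residual over the four parameters yields direct and indirect vectors that best explain the measurements at both frequencies while fulfilling both restrictions.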
While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This application claims the benefit under 35 USC 119(e) of U.S. Provisional Application No. 61/367,091, filed on Jul. 23, 2010, which is incorporated herein by reference in its entirety.