ACQUISITION OF DISTANCES FROM A SENSOR TO A SCENE

Information

  • Publication Number
    20240045061
  • Date Filed
    July 19, 2023
  • Date Published
    February 08, 2024
Abstract
The present description concerns a method of acquisition of distances from a sensor to a scene, comprising a number N of consecutive capture sub-phases Ci, with N an integer greater than or equal to 2 and i an integer index ranging from 1 to N, each sub-phase Ci comprising: supplying a laser beam having an optical frequency (f) linearly varying over a frequency range of width Bi for a time period Ti; delivering, from the laser beam, a reference beam and a useful beam; and illuminating the scene with the useful beam and illuminating at least one pixel row with a superposition of the reference beam and of a reflected beam. An absolute value of a ratio Bi/Ti is different for each capture sub-phase Ci.
Description
FIELD

The present disclosure generally concerns electronic circuits, and more particularly distance sensors, for example used to obtain a depth map of a scene, that is, for each pixel of the sensor, a distance from this pixel to a point in the scene corresponding to the pixel.


BACKGROUND

Sensors for obtaining a depth map of a scene, that is, a three-dimensional image of the scene, are known.


Among these known sensors, sensors operating according to the LIDAR (“Laser Imaging Detection and Ranging”) technique of FMCW (“Frequency Modulated Continuous Wave”) type can be distinguished.



FIG. 1 schematically illustrates a sensor 1 implementing the principle of the FMCW-type LIDAR technique. More detailed examples of sensors using the FMCW-type LIDAR technique may be found in the literature, for example, in patent application FR3106417.


Sensor 1 comprises a source 100 of a laser beam 102.


Sensor 1 comprises an optical device 104 configured to deliver, from laser beam 102, a useful laser beam 106 and a reference laser beam 108. Beam 106 for example corresponds to a portion of beam 102, beam 108 for example corresponding to the other portion of beam 102.


Useful beam 106 is emitted towards a scene 110 to be imaged. In other words, beam 106 is used to illuminate scene 110. The reflection of beam 106 by scene 110 results in a reflected beam 112 which propagates from scene 110 to sensor 1.


Sensor 1 comprises an optical device 114 configured to superpose, or combine, reference beam 108 with the reflected beam 112. Thus, device 114 receives the two beams 108 and 112.


A beam 116 resulting from the combination of beams 108 and 112 is supplied by device 114 to at least one pixel Pix of sensor 1. Due to the fact that beam 102 is a coherent light beam, beam 108 is used as an amplifier of the reflected beam. In FIG. 1, a single pixel Pix is shown, although sensor 1 may in practice comprise a large number thereof, for example, more than 100,000, or even more than 300,000.


Pixel Pix comprises a photodetector PD, for example, a photodiode. Pixel Pix is configured so that its photodetector PD supplies a heterodyne signal iPD, for example, a photocurrent, having its amplitude depending on the intensity of the received beam 116.


In LIDAR techniques of FMCW type, source 100 is controlled by sensor 1, for example, by a control circuit 118 of sensor 1, to modulate the optical frequency f of laser beam 102. More particularly, source 100 is controlled, or configured, so that the frequency f of beam 102 is modulated over a frequency range of width or excursion B for a time period T. In other words, source 100 is configured, during a phase of capture of scene 110, so that the optical frequency f of beam 102 varies linearly during time period T, from a first frequency to a second frequency separated from the first frequency by value B. Still in other words, T is the duration of the continuous modulation of the optical frequency f of beam 102, and B is the excursion or the amplitude of this modulation (also called chirp).



FIG. 2 schematically shows the principle of this frequency modulation.


More particularly, a straight line 200 shows the variation of the optical frequency f of beam 102 during time period T. The amplitude of the modulation of frequency f during time period T is B.


Since reference beam 108 originates from beam 102, its optical frequency is modulated like that of beam 102. Straight line 200 thus also represents the variation of the optical frequency of beam 108 during time period T.


Similarly, since beam 106 also originates from beam 102, its optical frequency is modulated like that of beam 102, whereby the reflected beam 112 also has its optical frequency modulated like that of beam 102. However, as compared with reference beam 108, beam 112 has traveled twice the distance z from sensor 1 to scene 110. Thus, when the frequency f of beam 108 has a given value, the beam 112 received by sensor 1 is at this given frequency f with a delay Δt determined by distance z, as shown by a straight line 202 of FIG. 2 (in dotted lines in FIG. 2).


The superposition, by component 114, of the reflected beam 112 with reference beam 108 results in interferences in beam 116, which generate beats at a frequency FR depending on delay Δt, and thus on distance z. These beats at frequency FR can be found in signal iPD. FIG. 3 illustrates the beats at frequency FR of heterodyne signal iPD.


More particularly, frequency FR is determined by the following formula: FR=(2*B*z)/(c*T), with * the multiply operator, B the excursion of the modulation of the optical frequency f of beam 102 during time period T, T the duration of the frequency modulation, c the speed of light, and z the distance from sensor 1 to the scene, and more particularly from the concerned pixel Pix to the scene. Thus, it is sufficient to measure the frequency FR of the heterodyne signal iPD of a pixel Pix of sensor 1 to know the distance z separating this pixel Pix from the point in the scene which is associated with this pixel Pix.
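
As an illustration of this relation, the short sketch below (not part of the original disclosure; the numerical values of B, T, and z are arbitrary assumptions) converts a distance z into the corresponding beat frequency FR and back:

```python
# Sketch of the FMCW beat-frequency relation FR = (2*B*z)/(c*T).
# The values of B, T and z below are arbitrary assumptions chosen for illustration.

C = 3e8  # speed of light, m/s

def beat_frequency(z, B, T):
    """Beat frequency FR (Hz) for a target at distance z (m), excursion B (Hz), duration T (s)."""
    return (2 * B * z) / (C * T)

def distance(FR, B, T):
    """Inverse relation: distance z (m) recovered from a measured beat frequency FR (Hz)."""
    return (FR * C * T) / (2 * B)

if __name__ == "__main__":
    B = 1.5e9    # 1.5 GHz excursion (assumption)
    T = 200e-6   # 200 us modulation duration (assumption)
    z = 2.0      # 2 m target (assumption)
    FR = beat_frequency(z, B, T)
    print(f"FR = {FR/1e3:.1f} kHz, z recovered = {distance(FR, B, T):.3f} m")
```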


This measurement of beat frequency FR may be performed by fast Fourier transform (FFT). The FFT measurement method is however not adapted to sensors comprising a large number of pixels, for example, more than 100,000 pixels, or even more than 300,000 pixels, where the measurement of frequency FR must be implemented simultaneously for all the pixels of the sensor in snapshot mode or for all the pixels of a row of the array of pixels in rolling mode, if a rate of acquisition of images of the scene of at least 30 images per second is targeted.


The measurement of beat frequency FR may also be performed by counting the number M of periods Te of the heterodyne signal over a given time period, for example, the duration T of the modulation of the frequency f of beam 102. In this case, it can be considered that frequency FR is equal to M/T, neglecting the uncertainty on the counted number M and neglecting the optical path traveled in sensor 1 by reference beam 108 with respect to beams 106, 112, and thus that z is equal to (M*c)/(2*B). The resolution for z, noted ∂z, is then equal to c/(2*B). This method of measurement of frequency FR by counting is simple to implement and enables to obtain a measurement of frequency FR faster than with the FFT method. However, it is desirable for the signal-to-noise ratio SNR to be as high as possible to avoid counting errors.


SUMMARY

There is a need to overcome all or part of the disadvantages of known methods of acquisition of the distances from a sensor to a scene, in particular of known methods based on the LIDAR technique of FMCW type.


An embodiment overcomes all or part of the disadvantages of known methods of acquisition of the distances from a sensor to a scene, in particular of known methods based on the LIDAR technique of FMCW type.


An embodiment provides a method of acquisition of distances from a sensor to a scene, the method comprising, during a phase of capture of the scene, a number N of consecutive capture sub-phases Ci, with N an integer greater than or equal to 2 and i an integer index ranging from 1 to N, each of the capture sub-phases Ci comprising:

    • the supplying of a laser beam having an optical frequency linearly varying over a frequency range of width Bi for a time period Ti;
    • the supplying based on said laser beam of a reference beam and of a useful beam; and
    • the illumination of the scene by the useful beam and the illumination of at least one row of pixels of the sensor by a beam corresponding to a superposition of the reference beam and of a reflected beam corresponding to the reflection of the useful beam by the scene,
    • wherein an absolute value of a ratio Bi/Ti is different for each capture sub-phase Ci,
    • wherein each capture sub-phase Ci corresponds to a range Dzi of measurement of distances from the sensor to the scene, range Dzi ranging from zmini to zmaxi with zmaxi greater than zmini, ratios Bi/Ti being determined so that for i ranging from 1 to N−1, zmini+1 is substantially equal to zmaxi without being greater than zmaxi.


According to an embodiment, ratios Bi/Ti are determined so that for i ranging from 1 to N−1, zmini+1 is equal to zmaxi.


According to an embodiment, for each measurement sub-phase Ci and for each pixel of the sensor, the illumination of the pixel by the beam corresponding to the superposition of the reference beam and of the reflected beam results in a signal oscillating at a beat frequency FRi belonging to a frequency range ΔFRi ranging from a frequency FRinfi to a frequency FRsupi if a point in the scene associated with said pixel is at a distance from the pixel within range Dzi.


According to an embodiment, for i ranging from 1 to N, FRsupi is equal to Ki times FRinfi, with Ki a coefficient, and frequency FRinfi is identical for all indexes i in the range from 1 to N.


According to an embodiment, Ki is identical for all indexes i in the range from 1 to N.


According to an embodiment, for each capture sub-phase Ci and each pixel of the sensor, if the beat frequency FRi is within frequency range ΔFRi, a distance z from the pixel to the point in the scene associated with the pixel is calculated based on the following formula: z=(c·Ti·FRi)/(2·Bi), with c the speed of light.


According to an embodiment, for each pixel and at each capture sub-phase Ci, a measurement of the frequency FRi of a pixel is obtained by counting, during the duration Ti of said sub-phase Ci, a number of periods of the oscillating signal of said pixel.


According to an embodiment, for each pixel and for each capture sub-phase Ci, the pixel is at a distance from the point in the scene associated with this pixel within measurement range Dzi if the number of periods counted during the duration Ti of sub-phase Ci belongs to a range of values ranging from a low value Mmini to a high value Mmaxi, the low value being equal to Ti*FRinfi and the high value being equal to Ti*FRsupi.


According to an embodiment, for i ranging from 1 to N, each range Dzi has a width equal to a targeted distance measurement resolution.


According to an embodiment, for i ranging from 1 to N, each range Dzi has a width equal to a targeted distance measurement resolution, and, for each pixel and for each capture sub-phase Ci, the pixel is at a distance from the point in the scene associated with this pixel within measurement range Dzi if the number of periods counted during the duration Ti of sub-phase Ci is equal to a number determined by this targeted resolution.


According to an embodiment, each range Dzi has a width equal to a targeted distance measurement resolution, and, for each pixel and for each capture sub-phase Ci, a determination that beat frequency FRi is within frequency range ΔFRi is performed by detecting a given frequency of range ΔFRi.


According to an embodiment, for i ranging from 1 to N, Ti is equal to T/N with T a duration of a phase of simultaneous acquisition by all the sensor pixels, or of a phase of acquisition by a single pixel row of a pixel array of the sensor.


According to an embodiment, for each capture sub-phase Ci, the optical frequency of the laser beam varies from fstarti to fendi, for i ranging from 1 to N−1, fendi is equal to fstarti+1 and a sign of coefficient Bi/Ti changes at each passage from a current capture sub-phase Ci to the next capture sub-phase Ci+1.


An embodiment provides a sensor configured to implement the above method, the sensor comprising:

    • an array of pixels,
    • a source of a laser beam,
    • an optical device configured to supply a reference beam and a useful beam intended to illuminate a scene to be captured,
    • an optical device configured to simultaneously supply at least one pixel row with a beam corresponding to a superposition of the reference beam and of a beam reflected by the scene when it is illuminated by the useful beam, and
    • a circuit for controlling the source, configured to modulate an optical frequency of the laser beam supplied by the source so that at each capture sub-phase Ci, the optical frequency of the beam varies linearly over the frequency range of width Bi during time period Ti.


An embodiment provides a sensor comprising:

    • an array of pixels,
    • a source of a laser beam,
    • an optical device configured to supply a reference beam and a useful beam intended to illuminate a scene to be captured,
    • an optical device configured to simultaneously supply all the pixels with a beam corresponding to a superposition of the reference beam and of a beam reflected by the scene when it is illuminated by the useful beam; and
    • a circuit for controlling the source, configured to modulate an optical frequency of the laser beam supplied by the source so that at each capture sub-phase Ci, the optical frequency of the beam varies linearly over the frequency range of width Bi during time period Ti;
    • the sensor being configured to implement the above-described method where each range Dzi has a width equal to a targeted distance measurement resolution, and, for each pixel and for each capture sub-phase Ci, a determination that the beat frequency FRi is within frequency range ΔFRi is performed by detecting a given frequency of range ΔFRi,
    • the sensor comprising an event management circuit, and each pixel comprising a circuit configured to detect the given frequency and a circuit configured to supply at least one event signal to the event management circuit if, during a sub-phase Ci, the given frequency is detected.


Another embodiment provides a sensor comprising:

    • an array of pixels;
    • a source of a laser beam;
    • an optical device configured to supply a reference beam and a useful beam intended to illuminate a scene to be captured;
    • an optical device configured to simultaneously supply all the pixels with a beam corresponding to a superposition of the reference beam and of a beam reflected by the scene when it is illuminated by the useful beam; and
    • a circuit for controlling the source, configured to modulate an optical frequency of the laser beam supplied by the source so that at each capture sub-phase Ci, the optical frequency of the beam varies linearly over the frequency range of width Bi during time period Ti;
    • the sensor being configured to implement the above-described method wherein for i ranging from 1 to N, each range Dzi has a width equal to a targeted distance measurement resolution, and, for each pixel and for each capture sub-phase Ci, the pixel is at a distance from the point in the scene associated with this pixel within measurement range Dzi if the number of periods counted during the duration Ti of sub-phase Ci is equal to a number determined by this targeted resolution, the sensor comprising an event management circuit, and
    • each pixel comprising a circuit configured to supply at least one event signal to the event management circuit if, during a sub-phase Ci, the number of periods counted during the duration Ti of sub-phase Ci is equal to the number determined by the targeted resolution.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features and advantages, as well as others, will be described in detail in the rest of the disclosure of specific embodiments given by way of illustration and not limitation with reference to the accompanying drawings, in which:



FIG. 1, previously described, schematically illustrates an example of a sensor using the LIDAR technology of FMCW type;



FIG. 2, previously described, illustrates the modulation of the optical frequency of a reference laser beam and of a reflected laser beam in the sensor of FIG. 1;



FIG. 3, previously described, illustrates beats of the heterodyne signal obtained by superposing the reference and reflected beams in the sensor of FIG. 1;



FIG. 4 illustrates with curves an embodiment of a method of acquisition of the distances from a sensor to a scene based on the FMCW LIDAR technique;



FIG. 5 illustrates with curves an alternative embodiment of a method of acquisition of the distances from a sensor to a scene based on the FMCW LIDAR technique;



FIG. 6 schematically shows an embodiment of a sensor implementing the method of FIG. 4 or of FIG. 5; and



FIG. 7 shows another embodiment of a sensor implementing the method of FIG. 4 or of FIG. 5.





DETAILED DESCRIPTION OF THE PRESENT EMBODIMENTS

Like features have been designated by like references in the various figures. In particular, the structural and/or functional features that are common among the various embodiments may have the same references and may have identical structural, dimensional and material properties.


For the sake of clarity, only the steps and elements that are useful for an understanding of the embodiments described herein have been illustrated and described in detail. In particular, known pixels of known sensors allowing the implementation of a method of acquisition of distances from a sensor to a scene have not been detailed, the described embodiments and variants being compatible with these known pixels and sensors.


Unless indicated otherwise, when reference is made to two elements connected together, this signifies a direct connection without any intermediate elements other than conductors, and when reference is made to two elements coupled together, this signifies that these two elements can be connected or they can be coupled via one or more other elements.


In the following disclosure, when reference is made to absolute positional qualifiers, such as the terms “front”, “back”, “top”, “bottom”, “left”, “right”, etc., or to relative positional qualifiers, such as the terms “above”, “below”, “upper”, “lower”, etc., or to qualifiers of orientation, such as “horizontal”, “vertical”, etc., reference is made, unless specified otherwise, to the orientation of the figures.


Unless specified otherwise, the expressions “around”, “approximately”, “substantially” and “in the order of” signify within 10%, and preferably within 5%.


There has been previously described a sensor 1 where, for each pixel Pix of sensor 1, the frequency FR of the heterodyne signal of pixel Pix is measured by counting the number M of periods Te of the signal for a given time period, for example, duration T of modulation of the optical frequency f of beam 102.


In a known sensor 1, the excursion B of the frequency modulation and the duration T of this modulation are fixed and constant. This implies that, to detect a distance z from a pixel Pix to an associated point in the scene which is between a minimum value zmin and a maximum value zmax, frequency FR has to be measurable over the entire extension of a range ΔFR from a minimum beat frequency FRmin determined by value zmin to a maximum beat frequency FRmax determined by value zmax. ΔFR thus is the bandwidth of the signal to be measured. Bandwidth ΔFR is equal to (2*(zmax-zmin)*B)/(c*T).
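
As an illustration, the sketch below (with arbitrary assumed values of B, T, zmin, and zmax) shows how bandwidth ΔFR grows with the distance range to be covered:

```python
# Sketch of the bandwidth relation DeltaFR = (2*(zmax - zmin)*B)/(c*T) for a single chirp.
# B, T, zmin and the sample values of zmax are illustrative assumptions.

C = 3e8  # speed of light, m/s

def bandwidth(zmin, zmax, B, T):
    """Width of the beat-frequency range to be measured for distances between zmin and zmax."""
    return 2 * (zmax - zmin) * B / (C * T)

for zmax in (1.0, 5.0, 10.0):
    print(f"zmax = {zmax:4.1f} m -> DeltaFR = {bandwidth(0.3, zmax, B=7.5e9, T=200e-6)/1e6:.2f} MHz")
```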


When the distance range to be measured or detected increases, bandwidth ΔFR also increases. The increase of ΔFR implies increasing the bandwidth of the circuit(s) of amplification of signal iPD accordingly, which increases the noise or the power consumption of this or these circuit(s). The increase of bandwidth ΔFR further implies an increase of the photonic noise of the DC (“Direct Current”) component of signal iPD. The increase of the DC component of signal iPD results in a decrease of the signal-to-noise ratio. Indeed, the ratio of the DC component iPDDC of signal iPD to the useful signal iPDAC of signal iPD may then exceed a factor of 20. Now, neglecting the photonic noise of the useful signal iPDAC, signal-to-noise ratio SNR is equal to:







SNR = sqrt(1/(q*ΔFR)) * iPDAC/sqrt(iPDDC),







with q the charge of an electron.


As already previously indicated, the method of determining frequency FR by counting is sensitive to the signal-to-noise ratio, and a decrease of this ratio may result in erroneous countings due to the noise, and thus to erroneous values of M.


To decrease bandwidth ΔFR so as to increase the signal-to-noise ratio, while keeping the same dynamic range Δz=zmax-zmin for the measurement of z, it is here provided, during a phase of capture of a scene, for example, by sensor 1, to divide the acquisition duration T into N consecutive time intervals Ti, with i an integer index ranging from 1 to N and N an integer greater than or equal to 2. Each capture interval or sub-phase Ci corresponds to the delivery of a beam 102 having its optical frequency f continuously and linearly modulated over a frequency range of width Bi for the duration Ti of this sub-phase. In other words, the N sub-phases Ci are consecutive and, during each sub-phase Ci, source 100 is controlled so that the optical frequency f of beam 102 is continuously and linearly modulated with a frequency excursion Bi during time period Ti. Further, it is provided for each sub-phase Ci to correspond to a ratio Bi/Ti having an absolute value different from that of the ratios Bi/Ti of the N−1 other sub-phases Ci.


Thereby, it can be provided for each sub-phase Ci to correspond to a bandwidth ΔFRi decreased with respect to bandwidth ΔFR and, further, for each sub-phase Ci to enable to detect, or measure, distances z comprised within a corresponding range Dzi ranging from a minimum value zmini to a maximum value zmaxi. The bandwidth ΔFRi of each of sub-phases Ci extends from a minimum frequency FRinfi to a maximum frequency FRsupi.


For example, for each sub-phase Ci, FRsupi is equal to Ki times FRinfi. Preferably, Ki has the same value for all sub-phases Ci. However, in other examples, the values Ki of at least two sub-phases Ci may be different.


As an example, frequency FRsupi has the same value for all sub-phases Ci or frequency FRinfi has the same value for all sub-phases Ci. Preferably, frequency FRsupi has the same value for all sub-phases Ci and frequency FRinfi has the same value for all sub-phases Ci, or, in other words, all sub-phases Ci have the same bandwidth ΔFRi, and thus the same coefficient Ki.


The range Dzi of each sub-phase Ci is different from that of the other sub-phases Ci, so that, by placing the N ranges Dzi end to end, sensor 1 is capable of detecting the distances z between zmin and zmax. According to an embodiment, ratios Bi/Ti are determined so that ranges Dzi can be placed end to end to obtain a dynamic range for z from zmin to zmax. In other words, ratios Bi/Ti are at least partly determined by the targeted measurement dynamic range zmax-zmin.


For example, according to an embodiment where sub-phases Ci are implemented by order of increasing index i, ratios Bi/Ti are determined so that, for i varying from 1 to N−1, value zmini+1 is equal to value zmaxi. In a variant, ranges Dzi may partially overlap, and, in this case, for i varying from 1 to N−1, zmini+1 is substantially equal to but not greater than value zmaxi. However, the embodiment where zmini+1 is equal to zmaxi has the advantage of not detecting or measuring a same distance value z in two different sub-phases Ci.


For a given sub-phase Ci and for a given pixel, a beat frequency FRi in the range from FRinfi to FRsupi can be observed if the point associated with the pixel is at a distance z from the pixel in the range from zmini to zmaxi. Further, distance z can then be calculated based on the following formula, illustrated by the short sketch that follows it:

    • z=(c·Ti·FRi)/(2·Bi), where FRi is the measured beat frequency of the heterodyne signal of the pixel and is equal to M/Ti with M the counted number of periods of the heterodyne signal during time period Ti.
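
A minimal numerical sketch of this per-sub-phase computation, assuming arbitrary values of Bi, Ti, and of the counted number M, is given below:

```python
# Sketch of the per-sub-phase distance computation z = (c*Ti*FRi)/(2*Bi), with FRi = M/Ti,
# where M is the number of heterodyne-signal periods counted during Ti.
# Bi, Ti and M are illustrative assumptions.

C = 3e8  # speed of light, m/s

def distance_from_count(M, Bi, Ti):
    """Distance z deduced from the number M of periods counted during sub-phase Ci."""
    FRi = M / Ti                       # measured beat frequency
    return (C * Ti * FRi) / (2 * Bi)   # simplifies to (M * C) / (2 * Bi)

if __name__ == "__main__":
    Bi, Ti = 1.5e9, 40e-6              # assumptions: 1.5 GHz excursion, 40 us sub-phase
    for M in (3, 4, 5, 6):
        print(f"M = {M} -> z = {distance_from_count(M, Bi, Ti):.2f} m")
```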


In sensor 1, when a phase of capture of a scene comprises N sub-phases Ci such as described in the present application, sensor 1 is, according to an embodiment, configured so that beam 116 simultaneously illuminates all the sensor pixels Pix. However, in alternative embodiments, when sensor 1 operates in rolling mode, the sensor may be configured so that beam 116 only illuminates the pixels Pix of the row for which the acquisition is going on.


An example of a method of calculation of ratios Bi/Ti will now be described.


In this example, it is considered that durations Ti are all identical and are for example equal to T/N. The frequency excursions Bi are thus different for each sub-phase Ci. Duration T corresponds, for example, to the duration T of continuous modulation of laser beam 102 over a frequency excursion B which would be necessary to measure distances z in the range from zmin to zmax.


In this example, it is further considered that, for i varying from 1 to N−1, zmaxi=zmini+1, to obtain a continuous range of measurable distances when ranges Dzi are placed end-to-end. In other words, (c·Ti·FRsupi)/(2·Bi)=(c·Ti+1·FRinfi+1)/(2·Bi+1). Since Ti is equal to Ti+1, as a result, FRsupi/FRinfi+1=Bi/Bi+1.


As an example, by making the choice for FRinfi to be identical for each sub-phase Ci, and knowing that FRsupi is equal to Ki times FRinfi, one thus obtains Bi/Bi+1=zmaxi/zmini=Ki.


It is then possible to calculate B1, then B2 equal to B1 divided by K1, then B3 equal to B2 divided by K2, and so on until BN and zmaxN are obtained, so that the total dynamic range of measurement of z, equal to zmax/zmin, is equal to zmaxN/zmin1. Value N is then, for example, at least partly determined by the selection of coefficients Ki.


As a more specific example, in addition to the choice of a frequency FRinfi identical for each sub-phase Ci, Ki is selected to be identical and equal to K for all sub-phases Ci. In this case, sub-phases Ci all have the same frequency FRinfi, the same frequency FRsupi, and the same bandwidth ΔFRi. As a result, zmax/zmin=zmaxN/zmin1=K^N. N then is, for example, calculated by applying the base-K logarithmic function to the zmax/zmin dynamic range, N for example being equal to the rounded integer above the value obtained by applying the base-K logarithm to zmax/zmin.


It is thus possible, in this more specific example and using the equations given hereabove, to determine the N coefficients Bi.


For example, knowing zmin and zmax and setting the value of K, the number N of sub-phases is obtained, and then, knowing measurement time T, the duration Ti of each sub-phase Ci is obtained. By then setting frequency FRinfi, it is possible to calculate B1, knowing that B1=(FRinf1·c·T1)/(2·zmin). As a variant, rather than setting frequency FRinfi, a minimum number Mmin of periods of the heterodyne signal to be detected in each sub-phase Ci is set so that the point associated with the pixel belongs to the measurement range Dzi of this sub-phase Ci, and it is then possible to calculate B1 knowing that B1=(Mmin·c)/(2·zmin). The other coefficients Bi are then, for example, calculated by means of the following equation: Bi=B1/K^(i−1).
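
The calculation just described can be summarized by the sketch below, a minimal illustration assuming identical durations Ti = T/N, identical coefficients Ki = K, and identical frequencies FRinfi; the function name and the rounding option are illustrative choices, not taken from the disclosure:

```python
# Sketch of the computation of N, Ti and the excursions Bi from zmin, zmax, K, T and
# FRinf (or, as a variant, from the minimum period count Mmin), as described above.
import math

C = 3e8  # speed of light, m/s

def chirp_parameters(zmin, zmax, K, T, FRinf=None, Mmin=None, round_up=False):
    """Return (N, Ti, [B1..BN]); either FRinf or Mmin must be given.

    round_up selects between the two rounding conventions mentioned in the description
    (integer just above or just below the base-K logarithm of zmax/zmin).
    """
    dyn = zmax / zmin                      # targeted dynamic range zmaxN/zmin1
    logK = math.log(dyn, K)
    N = math.ceil(logK) if round_up else math.floor(logK)
    Ti = T / N                             # identical sub-phase durations
    if FRinf is None:
        FRinf = Mmin / Ti                  # FRinf deduced from Mmin
    B1 = (FRinf * C * Ti) / (2 * zmin)     # B1 = (FRinf1*c*T1)/(2*zmin)
    B = [B1 / K**i for i in range(N)]      # Bi = B1 / K^(i-1)
    return N, Ti, B

if __name__ == "__main__":
    zmin, zmax, K = 0.3, 10.0, 2           # values of the numerical example given further below
    N, Ti, B = chirp_parameters(zmin, zmax, K, T=200e-6, FRinf=75e3)
    print(f"N = {N}, Ti = {Ti*1e6:.0f} us")
    for i, Bi in enumerate(B, start=1):
        print(f"B{i} = {Bi/1e6:7.1f} MHz, Dz{i} = {zmin*K**(i-1):.2f} m to {zmin*K**i:.2f} m")
```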



FIG. 4 illustrates an example of implementation in the case where N is equal to 4, Ti is identical for all sub-phases Ci, Ki is identical and equal to K for all sub-phases Ci, and ΔFRi is identical for all sub-phases Ci. In FIG. 4, the axis of abscissas represents time t, and the axis of ordinates represents the frequency f of laser beam 102. In other words, FIG. 4 illustrates a method of modulation of the optical frequency of the source 100 of sensor 1 for an acquisition of the distances from sensor 1 to the scene 110 to be imaged.


During the sub-phase C1 of duration T1 equal to T/4, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal to B1.


During the next sub-phase C2 of duration T2 equal to T/4, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal to B2, with B2=B1/K.


During the next sub-phase C3 of duration T3 equal to T/4, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal to B3, with B3=B1/K^2.


During the next sub-phase C4 of duration T4 equal to T/4, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal to B4, with B4=B1/K^3.


In the example of FIG. 4, at each sub-phase (or chirp) Ci, the optical frequency f of laser beam 102 is modulated from a same value fstart. This implies that, at the end of each sub-phase Ci and before the beginning of the next sub-phase Ci+1, frequency f has to be instantaneously brought back to frequency fstart, which places strong demands on the response of source 100 and of its control circuit 118.


It is possible to implement sub-phases Ci while avoiding fast returns of frequency f to frequency fstart.


For this, it is sufficient for the optical frequency fendi of beam 102 at the end of each sub-phase Ci to be equal to the frequency fstarti+1 of beam 102 at the beginning of the next sub-phase Ci+1.


However, this may result in the optical frequency f of laser beam 102 going through a very large frequency range, which is not desirable, or even in source 100 not being capable of modulating frequency f over the entire desired range. However, in each sub-phase Ci, the frequency FRi measured for a distance z within range Dzi actually depends on the absolute value of ratio Bi/Ti. Advantageously, it is then possible, in addition to providing for the frequency fendi at the end of each sub-phase Ci to be equal to frequency fstarti+1 at the beginning of the next sub-phase Ci+1, to provide for the sign, or the polarity, of coefficient Bi to change at each beginning of a sub-phase Ci or, in other words, at each change of sub-phase Ci. In other words, considering that Bi is a frequency excursion, and thus always positive, this amounts to providing for this frequency excursion to be run through in one direction or in the other, by alternating the variation direction for each sub-phase Ci.



FIG. 5 illustrates the variation of the optical frequency f of beam 102 in an example of implementation where N is equal to 4, Ti is identical for all sub-phases Ci, Ki is identical and equal to K for all sub-phases Ci, and ΔFRi is identical for all sub-phases Ci. In FIG. 5, the frequency fendi at the end of each sub-phase Ci is equal to the frequency fstarti+1 at the beginning of the next sub-phase Ci+1, and the sign of coefficient Bi, or, in other words, the direction in which frequency excursion Bi is run through changes for each new sub-phase Ci.


During the sub-phase C1 of duration T1, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal (in absolute value) to B1, and, more particularly, so that f varies linearly from fstart1=fstart to fend1. B1 is, in this example, positive or, in other words, the frequency excursion B1 is run through in the direction of increasing frequencies.


During the next sub-phase C2 of duration T2, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal (in absolute value) to B2, and, more particularly, so that f varies linearly from fstart2=fend1 to fend2. B2 is, in this example, negative, or, in other words, the frequency excursion B2 is run through in the direction of decreasing frequencies.


During the next sub-phase C3 of duration T3, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal (in absolute value) to B3, and, more particularly, so that frequency f varies linearly from fstart3=fend2 to fend3. B3 is, in this example, positive or, in other words, frequency excursion B3 is run through in the direction of increasing frequencies.


During the next sub-phase C4 of duration T4, frequency f is continuously and linearly modulated so that the excursion of the modulation is equal (in absolute value) to B4, and, more particularly, so that frequency f varies linearly from fstart4=fend3 to fend4. B4 is, in this example, negative, or, in other words, the frequency excursion B4 is run through in the direction of decreasing frequencies.


More generally, according to an embodiment, for i odd, fstarti=fstarti−1−Bi−1, and, for i even, fstarti=fstarti−1+Bi−1.
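
A minimal sketch of this alternating ramp sequence, assuming that B1 is run through in the direction of increasing frequencies as in FIG. 5 and expressing frequencies as offsets from fstart, is given below:

```python
# Sketch of the zig-zag modulation of FIG. 5: fendi of sub-phase Ci equals fstarti+1 of
# sub-phase Ci+1, and the direction of the excursion alternates at each sub-phase.
# The excursions Bi are illustrative assumptions; frequencies are offsets from fstart.

def ramp_sequence(fstart, excursions):
    """Return the list of (fstart_i, fend_i) pairs, alternating ramp direction, up first."""
    segments = []
    f = fstart
    for i, Bi in enumerate(excursions, start=1):
        direction = +1 if i % 2 == 1 else -1   # odd sub-phases ramp up, even ones ramp down
        fend = f + direction * Bi
        segments.append((f, fend))
        f = fend                               # fstart of the next sub-phase = fend of this one
    return segments

if __name__ == "__main__":
    B = [1.5e9, 750e6, 375e6, 187.5e6]         # assumed excursions B1..B4
    for i, (fs, fe) in enumerate(ramp_sequence(0.0, B), start=1):
        print(f"C{i}: fstart = {fs/1e9:+.3f} GHz, fend = {fe/1e9:+.3f} GHz")
```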


As an alternative example, not illustrated, coefficient B1 may be negative or, in other words, frequency excursion B1 may be run through in the direction of decreasing frequencies.


A specific numerical example will now be described. In this example, there is considered a case where:

    • duration T is equal to 200 μs,
    • the minimum distance zmin (equal to zmin1) to be detected is equal to 0.3 m,
    • the maximum distance zmax (equal to zmaxN) to be detected is equal to 10 m,
    • durations Ti are all identical,
    • coefficients Ki are all identical and equal to K=2, and
    • the frequencies FRinfi of sub-phases Ci are all equal to 75 KHz.


As a result:

    • dynamic range zmaxN/zmin1 is equal to 33.33,
    • N is equal to 5,
    • each duration Ti is equal to 200 μs divided by N, that is, 40 μs,
    • frequencies FRsupi are all equal to 150 KHz,
    • bandwidths ΔFRi are all equal to 75 KHz,
    • B1 is equal to 1.5*10^9 Hz (B1=(FRinf1*c*T1)/(2*zmin1)),
    • sub-phase C1 enables to detect the distances z in the range from zmin1=0.30 m to zmax1=0.60 m,
    • B2 is equal to 750*10^6 Hz (B2=B1/K),
    • sub-phase C2 enables to detect the distances z in the range from zmin2=0.60 m to zmax2=1.20 m,
    • B3 is equal to 375*10^6 Hz (B3=B1/K^2),
    • sub-phase C3 enables to detect the distances z in the range from zmin3=1.20 m to zmax3=2.40 m,
    • B4 is equal to 187.5*10^6 Hz (B4=B1/K^3),
    • sub-phase C4 enables to detect the distances z in the range from zmin4=2.40 m to zmax4=4.80 m,
    • B5 is equal to 93.750*10^6 Hz (B5=B1/K^4), and
    • sub-phase C5 enables to detect the distances z in the range from zmin5=4.80 m to zmax5=9.60 m.


In the above example, the placing end to end of measurement ranges Dz1 to Dz5 does not exactly span the entire targeted measurement range from zmin to zmax since value N has been selected to be equal to the integer value just below the base-K logarithm of zmax/zmin. However, in another example where value N is selected to be equal to the integer value just above the base-K logarithm of zmax/zmin, the placing end to end of ranges Dz1 to DzN spans the entire targeted measurement range from zmin to zmax, and even more.


If it had been desired to obtain the same z measurement range from zmin to zmax with a single continuous phase of modulation of the optical frequency f of laser 102 during time period T and with a minimum beat frequency FRmin equal to 75 KHz, this would have implied selecting a coefficient B equal to 7.5*10^9 Hz (N times greater than coefficient B1). Such a value of coefficient B would have resulted in providing a maximum beat frequency FRmax equal to 2.5 MHz, which would have resulted in a bandwidth ΔFR=2.43 MHz, and thus in a signal-to-noise ratio approximately 5.69 times smaller than in the case of the previous example.


For each sub-phase Ci and each pixel of the sensor, the point associated with the pixel is at a distance z within the range Dzi of sub-phase Ci if the measured frequency FRi is between FRinfi and FRsupi, and thus if the number M of periods Te of the heterodyne signal of the pixel counted during duration Ti is in a range of values ranging from Mmini to Mmaxi, with Mmini=Ti*FRinfi and Mmaxi=Ti*FRsupi. When frequencies FRinfi are identical for all sub-phases Ci, FRsupi are identical for all sub-phases Ci, and durations Ti are identical for all sub-phases Ci, numbers Mmini and Mmaxi are identical for all sub-phases Ci and respectively equal to Mmin and Mmax. With the above specific numerical example, Mmin=3 and Mmax=6.
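
The counting window Mmin, Mmax and the single-chirp comparison of the two preceding paragraphs can be checked with the short sketch below; it only replays the formulas already given, using the SNR dependence in 1/sqrt(ΔFR) implied by the expression recalled in the background section:

```python
# Check of the counting window and of the single-chirp comparison for the example above:
# T = 200 us, zmin = 0.3 m, zmax = 10 m, K = 2, FRinf = 75 kHz, N = 5, Ti = 40 us.
import math

C = 3e8
zmin, zmax, K, T, N = 0.3, 10.0, 2, 200e-6, 5
Ti = T / N                                     # 40 us
FRinf, FRsup = 75e3, 2 * 75e3                  # FRsup = K * FRinf
Mmin, Mmax = Ti * FRinf, Ti * FRsup            # 3 and 6 counted periods
print(f"Mmin = {Mmin:.0f}, Mmax = {Mmax:.0f}")

# Single chirp covering the same zmin..zmax with the same FRmin over the full duration T.
B_single = FRinf * C * T / (2 * zmin)          # 7.5e9 Hz
FRmax = 2 * B_single * zmax / (C * T)          # 2.5 MHz
dFR_single = FRmax - FRinf                     # about 2.43 MHz
dFR_multi = FRsup - FRinf                      # 75 kHz per sub-phase
print(f"single-chirp bandwidth = {dFR_single/1e6:.2f} MHz, "
      f"SNR penalty about x{math.sqrt(dFR_single / dFR_multi):.2f}")
```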


In the above example, it has been chosen to set value FRinfi rather than the minimum number Mmin of periods to be detected in each sub-phase Ci, although it would also have been possible to set value Mmin equal to 3 rather than value FRinfi. Taking the above example, and setting Mmin equal to 3, this implies that FRinfi=Mmin/Ti=3/(40*10^-6)=75 KHz and the same results are thus obtained.


In each sub-phase Ci, the number M of periods of the heterodyne signal may be obtained by means of a counter which accumulates, or counts, the number of periods of the heterodyne signal for the duration Ti of sub-phase Ci. In this case, number M is an integer, and the uncertainty or error on number M is equal to more or less 1. As a result, the distance measurement resolution in each sub-range Ci is equal to ∂zi=c/(2*Bi).


Knowing that, in each sub-phase Ci, zmaxi=Ki*zmini, with Ki equal to K in all sub-ranges Ci, if it is desired for the extension of the measurement range Dzi of each sub-phase Ci to be equal to the resolution ∂zi of this sub-range, then zmini=c/(2*Bi*(K−1))=∂zi/(K−1), and thus ∂zi=zmini*(K−1). Now, zmini=(c*Mmin)/(2*Bi), whereby Mmin=1/(K−1) and Mmax=K/(K−1). It is then possible to select a resolution and to deduce therefrom the corresponding value K, and then the values Mmin and Mmax corresponding to this value of K.


For example, if a resolution ∂zi is targeted in each range Ci which is equal to 1% of the minimum value zmini of this range Ci, this implies that K−1=0.01, and thus that K=1.01, Mmin =1/(K−1)=100 and Mmax=K/(K−1)=101.


In the above example, the number M of periods is obtained by means of an integer counter, whereby the error on the value of number M is equal to plus or minus 1, and thus the resolution for z, ∂zi, is equal to c/(2*Bi). In other examples, number M may be obtained by means of a counter with a double time base enabling to measure a fractional portion of number M, which enables to decrease the error on the value of M, and thus to increase the resolution.


More generally, for an error E on the determination of number M by counting, the resolution for z, ∂zi, is equal to (E*c)/(2*Bi). By choosing for the extension of the measurement range Dzi of each sub-phase Ci to be equal to the resolution ∂zi of this sub-range (that is, ∂zi=(K-1)*zmini), then Mmin=E/(K−1) and Mmax=(E*K)/(K−1). Thus, as previously, by setting resolution ∂zi, and knowing error E, it is possible to deduce therefrom the corresponding value K, and then the values Mmin and Mmax corresponding to this value of K.
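
A minimal sketch of these relations, assuming that the width of each range Dzi is set equal to the resolution ∂zi and using arbitrary sample resolutions, follows:

```python
# Sketch of the choice of K, Mmin and Mmax from a targeted per-sub-phase resolution,
# when the width of each range Dzi is made equal to the resolution dzi = (K-1)*zmini.
# E is the counting error (1 for a plain integer counter, smaller with a double time base).

def counting_window(resolution_fraction, E=1.0):
    """Return (K, Mmin, Mmax) for a resolution equal to resolution_fraction * zmini."""
    K = 1 + resolution_fraction        # dzi = (K - 1) * zmini
    Mmin = E / (K - 1)
    Mmax = E * K / (K - 1)
    return K, Mmin, Mmax

if __name__ == "__main__":
    for frac in (1.0, 0.1, 0.01):      # 100%, 10%, 1% of zmini (illustrative values)
        K, Mmin, Mmax = counting_window(frac)
        print(f"dzi = {frac:>5.0%} of zmini -> K = {K:.2f}, Mmin = {Mmin:.0f}, Mmax = {Mmax:.0f}")
```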


The above examples show that the smaller the resolution ∂zi in each sub-range Ci, expressed as a percentage of the minimum value zmini detectable in this sub-range, the larger the number N of sub-phases for a given measurement dynamic range zmax−zmin. Thus, small values of resolution ∂zi may result in a number N of sub-phases which is not compatible with an operation in rolling mode and a rate of acquisition of the scene compatible with a video application, that is, a rate of acquisition of at least 30 images of the scene per second. However, small resolution values ∂zi and the numbers N of sub-phases to which they correspond may remain compatible with an operation in snapshot mode. As a specific numerical example, there is considered a case where:

    • duration T is equal to 33 ms, so that the sensor can acquire 30.3 frames per second, which is compatible with a video application,
    • the minimum distance zmin (equal to zmin1) to be detected is equal to 0.3 m,
    • the maximum distance zmax (equal to zmaxN) to be detected is equal to 10 m,
    • durations Ti are all identical,
    • coefficients Ki are all identical and equal to K,
    • the frequencies FRinfi of sub-phases Ci are all equal, and
    • in each sub-range Ci, ∂zi is equal to 1% of zmini.


As a result:

    • dynamic range zmaxN/zmin1 is equal to 33.3,
    • K is equal to 1.01, which implies that Mmin=100 and Mmax=101,
    • N is equal to 352,
    • each duration Ti is equal to 93.75 μs,
    • frequencies FRinfi are all equal to Mmin/Ti=1.07 MHz,
    • frequencies FRsupi are all equal to Mmax/Ti=1.08 MHz,
    • bandwidths ΔFRi are all equal to 10.10 KHz,
    • B1 is equal to 50*10^9 Hz (B1=(FRinf1*c*T1)/(2*zmin1)), the other coefficients Bi being equal to B1/K^(i−1),
    • sub-phase C1 enables to detect the distances z in the range from zmin1=0.30000 m to zmax1=0.30300 m,
    • sub-phase C2 enables to detect the distances z in the range from zmin2=0.30300 m to zmax2=0.30603 m,
    • sub-phase C3 enables to detect the distances z in the range from zmin3=0.30603 m to zmax3=0.30909 m,
    • sub-phase C351 enables to detect the distances z in the range from zmin351=9.76342 m to zmax351=9.86106 m, and
    • sub-phase C352 enables to detect the distances z in the range from zmin352=9.86106 m to zmax352 =9.95967 m.


If it had been desired to obtain the same z measurement range from zmin to zmax with a single continuous phase of modulation of the optical frequency f of laser 102 during time period T and with a minimum beat frequency FRmin equal to 1.07 MHz, this would have implied selecting a coefficient B equal to 17.6*10^12 Hz (N times greater than the coefficient B1 of the above example). Such a value of coefficient B would have resulted in providing a maximum beat frequency FRmax equal to 35.56 MHz, which would have resulted in a bandwidth ΔFR=34.5 MHz, and thus in a signal-to-noise ratio approximately 56 times smaller than in the case of the previous example.


In the above examples where, in each sub-phase Ci, considering any pixel of the sensor, the range of values Dzi measurable by this pixel during sub-phase Ci is equal to resolution ∂zi, the beat frequency FRi to be measured in each sub-range Ci for a point in the scene associated with the pixel to be at a distance within range Dzi is almost constant, since the bandwidth is equal to the desired accuracy. It is thus sufficient to count Mmin periods of the heterodyne signal or to detect, by filtering of the heterodyne signal, a beat frequency between FRinfi and FRsupi, for example a frequency equal to (FRsupi+FRinfi)/2, to determine at what distance the object is located.


In examples where, for each sub-phase Ci, the determination that a point in the scene is at a distance z within range Dzi is implemented by detecting a single frequency between FRinfi and FRsupi, it is possible that, for a given pixel, this frequency is detected for at least two different sub-phases Ci, for example due to the noise present in the heterodyne signal, even filtered at the detection frequency. In this case, the signal level can enable to determine which of the sub-phases Ci corresponds to the range Dzi comprising the distance z between the pixel and its associated point, this sub-phase then being that for which the signal level is the highest.


Implementations where the range of values Dzi measurable during each sub-phase Ci is equal to resolution ∂zi are for example well adapted to an operation in snapshot mode of the sensor. Further, these implementations are for example well adapted to sensors with an architecture called “event-based”, where each pixel sends an event signal only when it has counted M=Mmin for the current sub-phase Ci, or only when it has detected for the current sub-phase Ci a given frequency between FRinfi and FRsupi in the heterodyne signal filtered at this given frequency.


Embodiments where durations Ti are all identical have been described hereabove.


In alternative embodiments, the excursions Bi are all identical and the durations Ti are different for each sub-phase Ci. The implementation of such variants is within the abilities of those skilled in the art by adapting the previously described calculations.


In still other alternative embodiments, durations Ti are fixed and excursions Bi are variable for some of sub-phases Ci, and durations Ti are variable and excursions Bi are fixed for the other sub-phases Ci. Here again, the implementation of these variants is within the abilities of those skilled in the art by adapting the previously described calculations.


The acquisition of the distances from sensor 1 to the scene to be imaged by the implementation of a plurality of sub-ranges Ci having different coefficients Bi/Ti may for example be implemented after a first acquisition of the scene to be imaged performed with a single B/T ratio, as described in relation with FIGS. 1, 2, and 3. Thus, during the first acquisition of the scene performed with the B/T ratio, a circuit of sensor 1, for example, a calculation and/or processing circuit, determines an adapted measurement dynamic range zmax-zmin and calculates coefficients Bi/Ti by taking into account the determined adapted dynamic range. Sensor 1 then implements a second acquisition of the scene comprising a plurality of sub-phases Ci determined by the coefficients Bi/Ti calculated by taking into account the adapted dynamic range.


More generally, coefficients Bi/Ti may be calculated during a design phase and recorded in the sensor to be used therein at each acquisition of a scene, or the sensor may comprise a calculation circuit configured to recalculate coefficients Bi/Ti at each modification of a parameter such as the targeted dynamic range zmax-zmin, the frequency FRinfi of sub-ranges Ci, number Mmin, etc.



FIG. 6 schematically shows an embodiment of a sensor 2 implementing the method of FIG. 4 or of FIG. 5.


Although this is not shown in FIG. 6, sensor 2 comprises, like the sensor 1 of FIG. 1, a source 100 of a laser beam 102, a circuit 118 for controlling source 100, that is, the optical frequency f of laser beam 102, and the optical devices 104 and 114 enabling to supply beams 106, 108, and 116 from beam 102 and the reflected beam 112.


Further, in FIG. 6, a single pixel Pix of sensor 2 is shown although, in practice, sensor 2 for example comprises a large number of pixels Pix, for example, at least 100,000 pixels Pix, pixels Pix then being arranged in an array comprising rows of pixels Pix and columns of pixels Pix.


According to an embodiment, during a phase of capture of a scene, sensor 2 is configured so that, at each sub-phase Ci, beam 116 simultaneously illuminates all the pixels Pix of sensor 2.


In the embodiment of FIG. 6, the architecture of pixel Pix is for example adapted to an operation of the sensor in rolling mode.


Pixel Pix comprises a photodetector PD configured to receive the portion of beam 116 (FIG. 1) corresponding to the point in the scene imaged by this pixel Pix, that is, the point in the scene associated with pixel Pix.


Photodetector PD is configured to supply heterodyne signal iPD.


According to an embodiment, pixel Pix comprises a circuit 600 (block AF in FIG. 6) configured to filter and amplify signal iPD, the bandwidth of circuit 600 then being greater than or equal to, preferably equal to, the largest of bandwidths ΔFRi, for example to any of bandwidths ΔFRi, when they are all identical. Circuit 600 receives signal iPD and supplies a signal IPD corresponding to signal iPD filtered and amplified.


According to an embodiment, pixel Pix further comprises a comparator COMP configured to supply a binary signal COMPout at ‘1’ when signal IPD is greater than a value, and at ‘0’ otherwise. Thus, when analog signal IPD exhibits oscillations, binary signal COMPout oscillates at the same frequency.


Pixel Pix further comprises a row selection switch SEL. When switch SEL is on, in practice simultaneously for all the pixels Pix of a same row, the output signal of pixel Pix is supplied to a conductive line 602 common to all the pixels Pix of a same column. When switch SEL is off (row of pixels Pix deselected), conductive line 602 receives the output signal of a pixel of the same column but of another pixel row, that is, the selected row of pixels Pix.


In the embodiment of FIG. 6 where each pixel Pix comprises circuit 600 and circuit COMP, the output signal of pixel Pix is signal COMPout.


In each column, conductive line 602 is connected to a corresponding readout circuit 604, for example, arranged at the foot of the column. This circuit 604 receives the output signal of the pixel Pix of the column which has its switch SEL on. Circuit 604 is configured, at each sub-phase Ci, for example, at each time period Ti, to count the number M of periods of signal iPD of the pixel Pix coupled to line 602 by its switch SEL.


In the embodiment of FIG. 6 where each pixel Pix comprises a circuit 600 and a circuit COMP, according to an embodiment where the sensor operates in rolling mode, circuit 604 is configured to count, at each sub-phase Ci, for example, for each time period Ti, the number M of periods of signal COMPout that it receives.


As an example, circuit 604 then comprises a counter 606 (block “COUNTER” in FIG. 6), which receives the output signal of the selected pixel Pix. Circuit 606 is configured to increment at each pulse of the output signal during the duration Ti of each phase Ci. Counter 606 is further configured to reset at the beginning of each phase Ci.
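
Purely as an illustration of the counting principle implemented by comparator COMP and counter 606 (and not of the actual hardware), a behavioral sketch is given below; the sampled sine wave standing in for signal IPD and the numerical values are assumptions:

```python
# Behavioral sketch of the column readout of FIG. 6: the comparator turns the filtered
# heterodyne signal into a square wave COMPout, and the counter counts its periods
# (rising edges) during each sub-phase Ci, being reset at the start of the sub-phase.
import math

def count_periods(samples, threshold=0.0):
    """Count rising edges of the comparator output over one sub-phase."""
    count, previous = 0, samples[0] > threshold
    for s in samples[1:]:
        current = s > threshold          # binary signal COMPout
        if current and not previous:     # rising edge -> one more period counted
            count += 1
        previous = current
    return count

if __name__ == "__main__":
    Ti, FRi, fs = 40e-6, 100e3, 10e6     # sub-phase duration, beat frequency, sample rate (assumptions)
    n = int(Ti * fs)
    ipd = [math.sin(2 * math.pi * FRi * t / fs) for t in range(n)]
    print(f"M = {count_periods(ipd)} periods counted during Ti (expected about {Ti*FRi:.0f})")
```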


Optionally, circuit 604 may further comprise a circuit 608 (block “REG” in FIG. 6) configured to store, at the end of each sub-phase Ci, the number M counted during this sub-phase Ci. As an example, circuit 608 is a register, for example, a shift register. Thereby, the numbers M counted by all the circuits 604 of the sensor during a sub-phase Ci and stored at the end of this sub-phase Ci may be read, for example sequentially, during the next sub-phase Ci+1.


In an alternative embodiment, pixels Pix are deprived of circuits COMP but comprise circuits 600. In this case, the output signals of pixels Pix are signals IPD. Each circuit 604 then receives an output signal IPD of the pixel Pix selected from the column of circuit 604. Each circuit 604 then comprises a circuit COMP receiving the output signal IPD of pixel Pix and supplying the corresponding signal COMPout used by circuit 604, for example, by its counter 606, to count number M at each sub-phase Ci.


In still another alternative embodiment, pixels Pix are deprived of circuits COMP and of circuits 600. In this case, the output signals of pixels Pix are signals IPD. Each circuit 604 then receives the output signal iPD of the pixel Pix selected in the column of circuit 604. Each circuit 604 then comprises a circuit 600 receiving the output signal iPD of pixel Pix and supplying the corresponding signal IPD. Each circuit 604 further comprises a circuit COMP receiving the signal IPD supplied by the circuit 600 of circuit 604 and supplying the signal COMPout used by circuit 604, for example, by its counter 606, to count number M at each sub-phase Ci.


Although this is not illustrated in FIG. 6, sensor 2 may comprise one or a plurality of circuits configured to deactivate or turn off pixels Pix of sensor 2 which are not used, that is, which are not measured or, in other words, which are not implementing a measurement of distance z. For example, the pixels Pix of the non-selected rows may be deactivated or turned off to decrease the power consumption. As an alternative or complementary example, when at a sub-phase Ci, the numbers M counted for pixels Pix of the selected row indicate that these pixels are at distances z from their associated points which belong to the corresponding range Dzi, then these pixels Pix may be deactivated or turned off for the next sub-phases Ci.



FIG. 7 schematically shows another embodiment of a sensor 3 implementing the method of FIG. 4 or of FIG. 5.


Although this is not shown in FIG. 7, sensor 3 comprises, like the sensor 1 of FIG. 1 and the sensor 2 of FIG. 6, a source 100 of a laser beam 102, a circuit 118 for controlling source 100, that is, the optical frequency f of laser beam 102, and optical devices 104 and 114 enabling to supply beams 106, 108, and 116 from beam 102 and the reflected beam 112.


In FIG. 7, as in FIG. 6, a single pixel Pix of sensor 3 is shown although, in practice, sensor 3 for example comprises a large number of pixels Pix, for example, at least 100,000 pixels Pix, pixels Pix then being arranged in an array comprising rows of pixels Pix and columns of pixels Pix.


According to an embodiment, during a phase of capture of a scene, sensor 3 is configured so that, at each sub-phase Ci, beam 116 simultaneously illuminates all the pixels Pix of sensor 3.


Pixel Pix comprises a photodetector PD configured to receive the portion of beam 116 (FIG. 1) corresponding to the point in the scene imaged by this pixel Pix, that is, the point associated with pixel Pix.


Photodetector PD is configured to supply heterodyne signal iPD.


In the embodiment of FIG. 7, the architecture of pixel Pix is for example adapted to an operation of the sensor in snapshot mode. Further, in the example of FIG. 7, coefficients Bi/Ti have been calculated so that, for each sub-phase Ci, Dzi=∂zi.


According to an embodiment, pixel Pix comprises circuit 700 (block AF in FIG. 7) configured to filter and amplify signal iPD, the bandwidth of circuit 700 then being greater than or equal to, preferably equal to, the largest of bandwidths ΔFRi, for example, to any of bandwidths ΔFRi, when they are all identical. Circuit 700 receives signal iPD and supplies a signal IPD corresponding to signal iPD filtered and amplified.


According to an embodiment, pixel Pix further comprises a comparator COMP configured to supply a binary signal COMPout at ‘1’ when signal IPD is greater than a value, and at ‘0’ otherwise. Thus, when analog signal IPD exhibits oscillations, binary signal COMPout oscillates at the same frequency.


In the embodiment of FIG. 7 where each pixel Pix comprises circuit 700 and circuit COMP, the output signal of pixel Pix is signal COMPout.


Sensor 3 further comprises, for each pixel Pix, a readout circuit 704 associated with this pixel Pix. Thus, sensor 3 comprises as many circuits 704 as pixels Pix.


According to an embodiment, the array of pixels Pix of sensor 3 is implemented inside and on top of a first semiconductor layer, for example inside and on top of a first semiconductor substrate, and circuits 704 are implemented, for example in array form, inside and on top of a second semiconductor layer, for example, a semiconductor-on-insulator layer. The two semiconductor layers are each coated with a back-end-of-line (BEOL) interconnection structure, the two interconnection structures being assembled to each other, for example, by molecular bonding HB as illustrated in FIG. 7, to couple, for example, connect, each pixel Pix to its circuit 704. According to another embodiment, the pixels Pix and their readout circuits 704 are all implemented inside and on top of a same semiconductor layer.


Each circuit 704 receives the output signal of pixel Pix which is associated therewith. Each circuit 704 is configured, at each sub-phase Ci, for example for each time period Ti, to count the number M of periods of signal iPD of the pixel Pix with which it is associated, for example, by counting the number of periods of the output signal of pixel Pix. In this example where, at each phase Ci and for each pixel Pix, it is desired to determine whether the number M of periods of the heterodyne signal of pixel Pix is equal to Mmin, each circuit 704 is configured to detect, at each sub-phase Ci, whether the counted number M is equal to Mmin.


In the embodiment of FIG. 7 where each pixel Pix comprises a circuit 700 and a circuit COMP, each circuit 704 is configured to count, at each phase Ci, for example, at each duration Ti, the number M of periods of signal COMPout that it receives from its associated pixel Pix.


As an example, circuit 704 comprises a counter 706 (block “COUNTER M” in FIG. 7). Circuit 706 is configured to count the number M of pulses of signal COMPout during the duration Ti of each sub-phase Ci, and to supply an output signal Det indicating when number M is equal to Mmin. For this purpose, circuit 704, and more particularly its circuit 706, for example comprises an input configured to receive the value of Mmin. Counter 706 is further configured to reset at the beginning of each sub-phase Ci.


To enable the reading of pixels Pix according to an event-based logic, each circuit 704 further comprises a circuit 708 (block “LOGIC” in FIG. 7). Circuit 708 is configured to receive signal Det, and to supply at least one event signal to a processing circuit of sensor 3 if, for the current phase Ci, the number M counted for the pixel Pix which is associated therewith is equal to Mmin. As an example, this event signal indicates to the processing circuit of sensor 3, also called event management circuit, the row and the column of the array to which pixel Pix belongs, that is, the position of pixel Pix.


As an example, at each sub-phase Ci, each circuit 708 is configured, when number M is equal to (or reaches) Mmin, to supply the event signal ReqC indicating the column to which the pixel Pix associated with circuit 708 belongs, and an event signal ReqL indicating the row to which the pixel Pix associated with this circuit 708 belongs. These signals are supplied to the event management circuit of sensor 3. For example, the event management circuit comprises a column event management circuit receiving signal ReqC, and a row event management circuit receiving signal ReqL.


As an example, the event management circuit is configured to send at least one acknowledgement signal to circuit 708 to indicate thereto that it has effectively received signals ReqC and ReqL. For example, the event management circuit is configured to send an acknowledgement signal AckC to circuit 708 to indicate thereto that it has effectively received signal ReqC, and to send acknowledgement signal AckL to circuit 708 to indicate thereto that it has effectively received signal ReqL. As an example, signal AckC is supplied by the column event management circuit, and signal AckL is supplied by the row event management circuit.


As a more specific example, for each pixel Pix, when pixel Pix detects that M=Mmin, the sequence of request and acknowledgement signals is the following (a simplified software sketch of this handshake is given after the list):

    • sending of signal ReqC,
    • reception of the corresponding signal AckC,
    • sending of signal ReqL, and
    • reception of the corresponding signal AckL.
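The simplified software model below only illustrates the order of the request and acknowledgement signals; the EventManager class and the function names are hypothetical and do not describe an actual implementation of the event management circuit.

```python
# Toy model of the ReqC/AckC/ReqL/AckL handshake between circuit 708 and the
# event management circuit; classes and method names are hypothetical.
class EventManager:
    """Records (row, column) events and immediately acknowledges requests."""
    def __init__(self):
        self.events = []
        self._pending_column = None

    def send_req_c(self, column):          # reception of ReqC -> returns AckC
        self._pending_column = column
        return True

    def send_req_l(self, row):             # reception of ReqL -> returns AckL
        self.events.append((row, self._pending_column))
        return True


def signal_event(manager, row, column):
    """Sequence run by circuit 708 of a pixel when M = Mmin."""
    ack_c = manager.send_req_c(column)     # sending of ReqC, wait for AckC
    if not ack_c:
        return False
    ack_l = manager.send_req_l(row)        # sending of ReqL, wait for AckL
    return ack_l                           # both acknowledgements received
```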


According to an embodiment, when a pixel Pix has received the two acknowledgement signals AckL and AckC, it may switch to a standby state that it will only leave at the beginning of the next capture phase. As an example, a pixel Pix in the standby state deactivates at least its circuit 708, or even all its circuits 700, COMP, and 704.


In an alternative embodiment, pixels Pix are deprived of circuits COMP but comprise circuits 700. In this case, the output signals of pixels Pix are signals IPD. Each circuit 704 then receives the output signal IPD of the corresponding pixel Pix. Each circuit 704 then comprises a circuit COMP receiving the output signal IPD of pixel Pix and supplying the corresponding signal COMPout used by circuit 704, for example, by its counter 706, to count number M at each phase Ci.


In still another alternative embodiment, pixels Pix are deprived of circuits COMP and of circuits 700. In this case, the output signals of pixels Pix are signals iPD. Each circuit 704 then receives the output signal iPD of the corresponding pixel Pix. Each circuit 704 then comprises a circuit 700 receiving the output signal iPD of pixel Pix and supplying the corresponding signal IPD. Each circuit 704 further comprises a circuit COMP receiving the signal IPD supplied by the circuit 700 of circuit 704 and supplying the signal COMPout used by circuit 704, for example, by its counter 706, to count number M at each sub-phase Ci.


As an example, an event-based reading of the pixels implies a classification of the pixels in increasing (or decreasing) order according to the detected distance. In the mentioned examples, short distances are explored first, to end with long distances (the inverse is also possible). In such an example, the addition of a counter which counts the number of pixels read after each sub-phase Ci, and of a circuit, for example a register or a memory, storing the number of pixels counted at each sub-phase Ci, makes it possible to obtain a histogram of distances in real time. Indeed, obtaining this histogram does not require supplying the address of each pixel, and the reading of the pixels can thus be performed more rapidly. The histogram thus obtained may be used, for example, to readjust the ramp sequence (that is, ratios Bi/Ti) to better target a distance range when it is observed that the complete dynamic range is not used.
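As a purely illustrative example, this real-time histogram may be modeled as follows, assuming that the added counter provides, at the end of each sub-phase Ci, the number of pixels read during this sub-phase; the function name and the numerical values are illustrative assumptions.

```python
# One histogram bin per sub-phase Ci, i.e. per distance range Dzi.
def distance_histogram(pixels_read_per_subphase):
    return {i + 1: count for i, count in enumerate(pixels_read_per_subphase)}

# Example with N = 5 sub-phases: most pixels fall in the first ranges, which
# may indicate that the ramp sequence (ratios Bi/Ti) could be readjusted to
# concentrate the sub-phases on short distances.
hist = distance_histogram([1200, 800, 150, 30, 5])
```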


Similarly, once the N sub-phases Ci have been implemented, the ramp sequence may be adapted to target the measurement on a specific distance, that is, by performing a new capture with only a single sub-phase Ci corresponding to this distance.
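As an illustration, the ratio |Bi/Ti| of such a single targeted sub-phase may be derived from the relation z=(c·Ti·FRi)/(2·Bi) between distance and beat frequency; the target beat frequency used below is an illustrative assumption.

```python
# Sketch of computing |Bi/Ti| so that a given target distance produces a
# chosen beat frequency, using z = (c * Ti * FRi) / (2 * Bi).
C = 299_792_458.0  # speed of light in m/s

def ramp_ratio_for_distance(z_target_m, beat_frequency_hz):
    """Return |Bi/Ti| (in Hz per second) placing z_target_m at beat_frequency_hz."""
    return C * beat_frequency_hz / (2.0 * z_target_m)

# Example: target a distance of 10 m with a 1 MHz beat frequency.
ratio = ramp_ratio_for_distance(10.0, 1e6)
```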


Various embodiments and variants have been described. Those skilled in the art will understand that certain features of these various embodiments and variants may be combined, and other variants will occur to those skilled in the art. In particular, although in most previously-described embodiments and variants the counted number M of periods of the heterodyne signal of a pixel Pix is an integer, those skilled in the art are capable of providing more accurate counters, for example with a double time base, enabling counting not only of a whole number of periods of the heterodyne signal over a given duration, but also of the fractional portion of the number of periods of the heterodyne signal during this duration.
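As an illustration of such a fractional count, the sketch below time-stamps the rising edges of the heterodyne signal with a second, faster time base and estimates the fractional portion from the residual times at the boundaries of the counting window; the function is a hypothetical model, not the described counter.

```python
# Hypothetical model of a fractional period count over a window [t_start, t_end],
# given rising-edge time stamps provided by a faster second time base.
def fractional_period_count(edge_times, t_start, t_end):
    edges = [t for t in edge_times if t_start <= t <= t_end]
    if len(edges) < 2:
        return float(len(edges))               # not enough edges to estimate a period
    whole = len(edges) - 1                     # number of entire periods
    mean_period = (edges[-1] - edges[0]) / whole
    # residual fractions of a period before the first and after the last edge
    frac = ((edges[0] - t_start) + (t_end - edges[-1])) / mean_period
    return whole + frac
```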


Finally, the practical implementation of the described embodiments and variations is within the abilities of those skilled in the art based on the functional indications given hereabove.

Claims
  • 1. Method of acquisition of distances from a sensor to a scene, the method comprising, during a phase of capture of the scene, a number N of consecutive capture sub-phases Ci, with N an integer greater than or equal to 2 and i an integer index ranging from 1 to N, each of the capture sub-phases Ci comprising: the supplying of a laser beam having an optical frequency linearly varying over a frequency range of width Bi for a time period Ti; the supplying from said laser beam of a reference beam and of a useful beam; and the illumination of the scene by the useful beam and the illumination of at least one row of pixels of the sensor by a beam corresponding to a superposition of the reference beam and of a reflected beam corresponding to the reflection of the useful beam by the scene, wherein an absolute value of a ratio Bi/Ti is different for each capture sub-phase Ci, wherein each capture sub-phase Ci corresponds to a range Dzi of measurement of distances from the sensor to the scene, range Dzi ranging from zmini to zmaxi with zmaxi greater than zmini, ratios Bi/Ti being determined so that for i varying from 1 to N−1, zmini+1 is substantially equal to zmaxi without being greater than zmaxi.
  • 2. Method according to claim 1, wherein ratios Bi/Ti are determined so that, for i ranging from 1 to N−1, zmini+1 is equal to zmaxi.
  • 3. Method according to claim 1, wherein, for each measurement sub-phase Ci and for each pixel of the sensor, the illumination of the pixel by the beam corresponding to the superposition of the reference beam and of the reflected beam results in a signal oscillating at a beat frequency FRi belonging to a range ΔFRi of frequencies ranging from a frequency FRinfi to a frequency FRsupi if a point in the scene associated with said pixel is at a distance from the pixel within range Dzi.
  • 4. Method according to claim 3, wherein, for i ranging from 1 to N, FRsupi is equal to Ki times FRinfi, with Ki a coefficient, and frequency FRinfi is identical for all indexes i in the range from 1 to N.
  • 5. Method according to claim 4, wherein Ki is identical for all indexes i in the range from 1 to N.
  • 6. Method according to claim 3, wherein for each capture sub-phase Ci and each pixel of the sensor, if the beat frequency FRi is within frequency range ΔFRi, a distance z from the pixel to the point in the scene associated with the pixel is calculated based on the following formula: z=(c·Ti·FRi)/(2·Bi), with c the speed of light.
  • 7. Method according to claim 3, wherein for each pixel and at each capture sub-phase Ci, a measurement of the frequency FRi of a pixel is obtained by counting, during the duration Ti of said sub-phase Ci, a number of periods of the oscillating signal of said pixel.
  • 8. Method according to claim 7, wherein, for each pixel and for each capture sub-phase Ci, the pixel is at a distance from the point in the scene associated with this pixel within measurement range Dzi if the number of periods counted during the duration Ti of sub-phase Ci belongs to a range of values ranging from a low value Mmini to a high value Mmaxi, the low value being equal to Ti*FRinfi and the high value being equal to Ti*FRsupi.
  • 9. Method according to claim 2, wherein, for i ranging from 1 to N, each range Dzi has a width equal to a targeted distance measurement resolution.
  • 10. Method according to claim 8, wherein, for i ranging from 1 to N, each range Dzi has a width equal to a targeted distance measurement resolution, and, for each pixel and for each capture sub-phase Ci, the pixel is at a distance from the point in the scene associated with this pixel within measurement range Dzi if the number of periods counted during the duration Ti of sub-phase Ci is equal to a number determined by this targeted resolution.
  • 11. Method according to claim 6, wherein each range Dzi has a width equal to a targeted distance measurement resolution, and, for each pixel and for each capture sub-phase Ci, a determination that the beat frequency FRi is within frequency range ΔFRi is performed by detecting a given frequency of range ΔFRi.
  • 12. Method according to claim 1, wherein, for i ranging from 1 to N, Ti is equal to T/N with T a duration of a phase of simultaneous acquisition by all the sensor pixels, or of a phase of acquisition by a single pixel row of a pixel array of the sensor.
  • 13. Method according to claim 1, wherein, for each capture sub-phase Ci, the optical frequency of the laser beam varies from fstarti to fendi, for i ranging from 1 to N−1, fendi being equal to fstarti+1, and a sign of ratio Bi/Ti changes at each passage from a current capture sub-phase Ci to a next capture sub-phase Ci.
  • 14. Sensor configured to implement the method according to claim 1, the sensor comprising: an array of pixels, a source of a laser beam, an optical device configured to supply a reference beam and a useful beam intended to illuminate a scene to be captured, an optical device configured to simultaneously supply at least one pixel row with a beam corresponding to a superposition of the reference beam and of a beam reflected by the scene when it is illuminated by the useful beam, and a circuit for controlling the source, configured to modulate an optical frequency of the laser beam supplied by the source so that at each capture sub-phase Ci, the optical frequency of the beam varies linearly over the frequency range of width Bi during time period Ti.
  • 15. Sensor comprising: an array of pixels; a source of a laser beam; an optical device configured to supply a reference beam and a useful beam intended to illuminate a scene to be captured; an optical device configured to simultaneously supply all the pixels with a beam corresponding to a superposition of the reference beam and of a beam reflected by the scene when it is illuminated by the useful beam; and a circuit for controlling the source, configured to modulate an optical frequency of the laser beam supplied by the source so that at each capture sub-phase Ci, the optical frequency of the beam varies linearly over the frequency range of width Bi during time period Ti; the sensor being configured to implement the method according to claim 11 and comprising an event management circuit, and each pixel comprising a circuit configured to detect the given frequency and a circuit configured to deliver at least one event signal to the event management circuit if, during a sub-phase Ci, the given frequency is detected.
  • 16. Sensor comprising: an array of pixels; a source of a laser beam; an optical device configured to supply a reference beam and a useful beam intended to illuminate a scene to be captured; an optical device configured to simultaneously supply all the pixels with a beam corresponding to a superposition of the reference beam and of a beam reflected by the scene when it is illuminated by the useful beam; and a circuit for controlling the source, configured to modulate an optical frequency of the laser beam supplied by the source so that at each capture sub-phase Ci, the optical frequency of the beam varies linearly over the frequency range of width Bi during time period Ti; the sensor being configured to implement the method according to claim 10 and comprising an event management circuit, and each pixel comprising a circuit configured to supply at least one event signal to the event management circuit if, during a sub-phase Ci, the number of periods counted during the duration Ti of sub-phase Ci is equal to the number determined by the targeted resolution.
Priority Claims (1)
Number Date Country Kind
2207829 Jul 2022 FR national