Device and Method for Detecting Objects in a Monitored Zone

Information

  • Patent Application
    20240027581
  • Publication Number
    20240027581
  • Date Filed
    July 19, 2023
  • Date Published
    January 25, 2024
Abstract
A device and a method for safeguarding a monitored zone by at least one FMCW LiDAR sensor for transmitting transmitted light beams into the monitored zone are provided. The FMCW LiDAR sensor scans a plurality of measurement points in the monitored zone and generates measurement data from transmitted light remitted or reflected by the measurement points. A control and evaluation unit evaluates the measurement data and generates a safety relevant signal based on the evaluation. The measurement data comprise radial speeds of the measurement points and polarization dependent intensities of the transmitted light remitted or reflected by the measurement points. The control and evaluation unit is configured to segment the measurement points using the radial speeds and the polarization dependent intensities and to combine them into objects and/or object segments.
Description

The invention relates to a device and to a method for detecting objects in a monitored zone.


Optoelectronic sensors such as laser scanners or 3D cameras are frequently used for detecting objects, for example for technical safety monitoring. Sensors used in safety technology have to work particularly reliably and must therefore satisfy high safety demands, for example those of the standard EN 13849 for safety of machinery and the machinery standard IEC 61496 or EN 61496 for electrosensitive protective equipment (ESPE). To satisfy these safety standards, a series of measures have to be taken such as a secure electronic evaluation by redundant, diverse electronics, functional monitoring, or special monitoring of the contamination of optical components.


A machine is safeguarded in DE 10 2007 007 576 A1 in that a plurality of laser scanners record a three-dimensional image of their working space and compare this actual state with a desired state. The laser scanners are positioned at different heights on tripods at the margin of the working space. 3D cameras can also be used instead of laser scanners.


A method and a device for detecting the movements of process units during a production process in a predefined evaluation zone are known from DE 198 43 602 A1. At least two cameras arranged at fixed positions in the evaluation zone are used. Spatial coordinates of each process unit are continuously detected and a translation vector describing the movement of the respective process unit is determined for each spatial coordinate.


U.S. Pat. No. 9,804,576 B2 discloses a recognition-based industrial automation control that is configured to recognize movements of persons, to deduce them for the future, and to compare them with planned automation commands to optionally deduce further safety relevant actions (alarms or changed control commands), with 3D cameras being used to recognize the movements of persons.


DE 10 2006 048 163 B4 describes a camera-based monitoring of moving machines and/or of movable machine elements for collision prevention, with image data of the machine and/or of the movable machine elements being acquired with the aid of an image capturing device. The image capturing system can in particular be a multiocular camera system; LiDAR sensors, radar sensors, or ultrasound sensors are furthermore named as possible image capturing systems.


A further application field of object detection by means of optoelectronic sensors is in the area of traffic infrastructure, in particular infrastructure-to-vehicle (I2V) communication. To provide meaningful data here, the data detected by the optoelectronic sensors can be segmented in different manners. "Semantic segmentation" is here understood as the association of the measurement points detected by the optoelectronic sensors with individual semantic classes. Semantic classes can be so-called "things" (objects having a clearly defined shape such as an automobile or a person) or so-called "stuff" (amorphous background regions, for example a street or the sky). "Instance segmentation" is understood as the association of measurement points with object instances from a predefined set of object classes (car 1, car 2, pedestrian 1, etc.). However, only "things" are considered in instance segmentation; "stuff" is not classified as part of instance segmentation. This limitation of instance segmentation is removed in panoptic segmentation, in which both "things" and "stuff" are divided into instances and classes.


The image capturing systems typically used for object detection and/or person detection in the prior art, in particular laser scanners and camera systems, have disadvantages that will be looked at in more detail in the following.


Laser scanners or LiDAR (light detection and ranging) sensors are typically based on a direct time of flight measurement of light. In this respect, a light pulse is emitted by the sensor, is reflected at an object, and is detected by the sensor again. The time of flight of the light pulse is determined by the sensor and the distance between the sensor and the object is estimated via the speed of light in the propagation medium (air as a rule). Since the phase of the electromagnetic wave is not taken into account here, one speaks of an incoherent measurement principle. In an incoherent measurement, pulses have to be built up from a plurality of photons so that the reflected pulse can be received with a sufficient signal-to-noise ratio. The number of photons within a pulse is upwardly limited as a rule by eye protection in an industrial environment. As a consequence, trade-offs result between maximum range, minimal remission of the object, integration time, and the demands on the signal-to-noise ratio of the sensor system. Incoherent radiation at the same wavelength (environmental light) additionally has a direct effect on the dynamic range of the light receiver. Examples of incoherent radiation at the same wavelength are the sun, similar sensor systems, or the same sensor system via multipath propagation, that is unwanted reflections.


Camera systems known from the prior art are based on measurement principles such as stereoscopy or indirect time of flight measurement. In indirect time of flight measurement, the phase difference of an AMCW (amplitude modulated continuous wave) transmission signal and its time delayed copy after reflection by an object is determined. The phase difference corresponds to the time of flight and can be converted into a distance value via the speed of light in the propagation medium. Both stereoscopy and indirect time of flight measurement are likewise incoherent measurement processes with the above-named disadvantages.
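
For illustration, the underlying relation can be stated explicitly; this is the standard AMCW result, not a formula taken from the application itself. For a modulation frequency $f_{\text{mod}}$ and a measured phase difference $\Delta\varphi$, the distance follows as

$$d = \frac{c}{4\pi f_{\text{mod}}}\,\Delta\varphi,$$

and is unambiguous only up to the range $c/(2 f_{\text{mod}})$, which is one of the practical limitations of indirect time of flight measurement.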


Millimeter wavelength radar sensors are based on a frequency modulated continuous wave (FMCW) measurement principle and can also determine radial speeds of a detected object using the Doppler effect. The greatest disadvantage of millimeter wavelength radar sensors in comparison with optical technologies is the considerably greater wavelength and the thus lower spatial resolution. Regulatory specifications furthermore limit the radial resolution by limiting the bandwidth and, in a MIMO (multiple input multiple output) radar system, the angular resolution by the number of available virtual antennas (the product of the number of transmission and reception antennas). Geometrical physical features are therefore hardly usable, in comparison with optical technologies, in safety relevant object detection and/or person detection.


A device for detecting objects in a monitored zone is proposed in EP 4 030 188 A1 of the applicant that has an FMCW LiDAR sensor as the optoelectronic sensor and that takes into account the radial speed of a measurement point relative to the sensor, as determined by the FMCW LiDAR sensor, as a further parameter in the segmentation of the measurement points in addition to the usual parameters of location and intensity of a measurement point. An improved segmentation can thereby in particular take place with objects having different radial speeds. If, however, a plurality of objects move at the same or at a similar radial speed, this parameter can only contribute to a limited extent to the segmentation of the measurement points.


It is therefore the object of the invention to improve a device for detecting objects in a monitored zone using an FMCW LiDAR sensor.


This object is satisfied by a device and a method for detecting objects in a monitored zone in accordance with the respective independent claim.


The device in accordance with the invention has at least one optoelectronic sensor that is configured as a frequency modulated continuous wave (FMCW) LiDAR sensor and that can, for example, be arranged at a machine, at a vehicle, or at a fixed position. The principles of FMCW LiDAR technology are described, for example, in the scientific publication "Linear FMCW Laser Radar for Precision Range and Vector Velocity Measurements" (Pierrottet, D., Amzajerdian, F., Petway, L., Barnes, B., Lockard, G., & Rubio, M. (2008). MRS Proceedings, 1076, 1076-K04-06. doi:10.1557/PROC-1076-K04-06) or in the doctoral thesis "Realization of Integrated Coherent LiDAR" (T. Kim, University of California, Berkeley, 2019. https://escholarship.org/uc/item/1d67v62p).


Unlike a LiDAR sensor based on a time of flight measurement of laser pulses, an FMCW LiDAR sensor does not transmit pulsed transmitted light beams into the monitored zone, but rather continuous transmitted light beams that have a predetermined frequency modulation, that is a time variation of the wavelength of the transmitted light during a measurement. The measurement frequency is here typically in the range from 10 to 30 Hz. The frequency modulation can be formed, for example, as a periodic up and down modulation. Transmitted light reflected or remitted by measurement points in the monitored zone has, in comparison with the irradiated transmitted light, a time delay corresponding to the time of flight that depends on the distance of the measurement point from the sensor and that is accompanied by a frequency shift due to the frequency modulation. Irradiated and reflected transmitted light are coherently superposed in the FMCW LiDAR sensor, with the distance of the measurement point from the sensor being determinable from the superposition signal. The measurement principle of coherent superposition inter alia has the advantage, in comparison with pulsed or amplitude modulated incoherent LiDAR measurement principles, of an increased immunity with respect to extraneous light from, for example, other optical sensors/sensor systems or the sun. The spatial resolution is improved with respect to radar sensors having wavelengths in the range of millimeters, whereby geometrical properties of a person become measurable.


If a measurement point moves toward the sensor or away from the sensor at a radial speed, the reflected transmitted light additionally has a Doppler shift. An FMCW LiDAR sensor can determine this change of the transmitted light frequency and can determine the distance and the radial speed of a measurement point from it in a single measurement, that is in a single scan of a measurement point, while at least two measurements, that is two time spaced scans of the same measurement point are required for a determination of the radial speed with a LiDAR sensor based on a time of flight measurement of laser pulses.
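
For illustration, the textbook relations for a symmetric triangular chirp with bandwidth $B$, chirp duration $T$, and wavelength $\lambda$ can be stated; these are standard FMCW results, not formulas taken from the application itself. With the beat frequencies $f_{\text{up}}$ and $f_{\text{down}}$ measured during the up and down chirp for an approaching target, distance and radial speed follow from a single measurement:

$$R = \frac{cT}{4B}\left(f_{\text{up}} + f_{\text{down}}\right), \qquad v_r = \frac{\lambda}{4}\left(f_{\text{down}} - f_{\text{up}}\right).$$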


The FMCW LiDAR sensor is furthermore configured to detect polarization dependent intensities of the transmitted light reflected or remitted by the measurement points. For this purpose, the FMCW LiDAR sensor has a decoupling unit that is configured to decouple at least some of the transmitted light reflected or remitted by the measurement points in the monitored zone, also called received light in the following, and to conduct it to a polarization analyzer, for example by a beam splitter having a predefined splitting ratio.


The polarization analyzer is configured to measure the polarization dependent intensities of the received light, for example by polarization dependent splitting of the received light by a polarizing beam splitter cube or a metasurface, and measuring the intensities of the split received light by suitable detectors.
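
One conceivable per-point quantity derived from the two measured intensities is a normalized polarization ratio that characterizes how strongly a surface preserves the transmitted polarization. The following is a minimal sketch under that assumption; the function and variable names are illustrative and not from the application:

```python
import numpy as np

def polarization_ratio(i_par: np.ndarray, i_perp: np.ndarray) -> np.ndarray:
    """Normalized polarization ratio in [0, 1] per measurement point.

    i_par:  intensities of the parallel polarized component (I_par)
    i_perp: intensities of the perpendicular component (I_perp)
    Values near 1 indicate polarization-preserving (e.g. smooth) surfaces,
    values near 0.5 indicate depolarizing (e.g. rough) surfaces.
    """
    total = i_par + i_perp
    # Points without a usable return signal default to 0.5 (no information).
    return np.divide(i_par, total,
                     out=np.full_like(total, 0.5, dtype=float),
                     where=total > 0)
```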


On a time-discrete and spatially discrete scan of a three-dimensional monitored zone, an FMCW LiDAR sensor in accordance with the invention can thus detect the following measurement data:








$$M_{j,k,l}=\begin{pmatrix} r_{j,k,l} \\ v^{r}_{j,k,l} \\ I_{\perp,j,k,l} \\ I_{\parallel,j,k,l} \end{pmatrix},$$

where rj,k,l is the radial distance, vrj,k,l the radial speed, and I⊥,j,k,l and I∥,j,k,l the polarization dependent intensities of each spatially discrete measurement point j, k with a two-dimensional position (φj, θk) specified by an azimuth angle φ and a polar angle θ for every time-discrete scan l. For better legibility, the index n is used in the following for a single, time-discrete scanning of a spatially discrete, two-dimensional measurement point (φj, θk) in the three-dimensional monitored zone.
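
A single scan of this form could, for example, be held in a structured array. The following is a minimal sketch; the field names and grid dimensions are illustrative, not from the application:

```python
import numpy as np

# One time-discrete scan l over a J x K grid of measurement points (phi_j, theta_k).
J, K = 128, 32  # illustrative azimuth and polar resolution
scan_dtype = np.dtype([
    ("r",      np.float32),  # radial distance r_j,k,l
    ("v_r",    np.float32),  # radial speed vr_j,k,l
    ("i_perp", np.float32),  # polarization dependent intensity I_perp,j,k,l
    ("i_par",  np.float32),  # polarization dependent intensity I_par,j,k,l
])
scan = np.zeros((J, K), dtype=scan_dtype)  # M_j,k,l for one fixed l
```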


To evaluate the measurement data detected by the FMCW LiDAR sensor, the device in accordance with the invention has a control and evaluation unit that is configured to segment the measurement points using the spatially resolved radial speeds of the measurement points and the polarization dependent intensities of the transmitted light reflected or remitted by the measurement points and to combine them into objects and/or object segments. Individually movable parts of an object comprising a plurality of parts are to be understood as object segments here, for example the members of a human body, the components of a robot arm, or the wheels of a vehicle.


The invention has the advantage that an improved segmentation of the measurement data is possible by the use of the spatially resolved radial speeds and the polarization dependent intensities of the transmitted light reflected or remitted by the measurement points as additional parameters. This in particular also applies to known segmentation processes of digital image processing or of machine vision.


The control and evaluation unit can furthermore be configured to determine radial speeds of the objects and/or of the object segments and to extract features of the objects and/or object segments that are based on these radial speeds. The extracted features can, for example, be statistical measures such as a mean value, a standard deviation, higher-order moments, or histograms of the radial speeds of the object and/or object segment that can be characteristic of an object movement and/or an object segment movement.
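
As a minimal sketch of such a feature extraction for one segmented object or object segment, assuming its per-point radial speeds have already been collected into an array (names and the histogram range are illustrative):

```python
import numpy as np

def radial_speed_features(v_r: np.ndarray, n_bins: int = 16) -> dict:
    """Statistical features of the radial speeds of one object/object segment."""
    std = v_r.std() + 1e-9  # guard against a degenerate segment
    z = (v_r - v_r.mean()) / std
    hist, _ = np.histogram(v_r, bins=n_bins, range=(-10.0, 10.0), density=True)
    return {
        "mean": float(v_r.mean()),
        "std": float(v_r.std()),
        "skewness": float(np.mean(z ** 3)),  # higher-order moments
        "kurtosis": float(np.mean(z ** 4)),
        "histogram": hist,
    }
```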


The control and evaluation unit can advantageously be configured to use the features based on the radial speeds of the objects and/or the object segments for a classification of the objects and/or the object segments. An improved classification of the objects and/or of the object segments is possible by these additional features.


In an embodiment, the control and evaluation unit can be configured to filter the measurement data using the radial speeds of the measurement points. The processing effort can thus already be reduced by data reduction before a segmentation of the measurement points. A filtering can take place, for example, in that measurement points having a radial speed that is smaller than, greater than, or equal to a predefined threshold value are discarded and are not supplied to any further evaluation. Objects and/or object segments that move with the sensor (vr=0) or that move away from the sensor (vr>0) can, for example, be discarded in the event of an anticollision function.
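
A minimal sketch of such a pre-segmentation filter for an anticollision function, using the sign convention above (vr > 0 for points moving away) and keeping only approaching points; names are illustrative:

```python
import numpy as np

def keep_approaching(points: np.ndarray, v_r: np.ndarray,
                     threshold: float = 0.0) -> np.ndarray:
    """Discard measurement points that are static (v_r == 0) or moving
    away (v_r > 0) so that only approaching points are evaluated further."""
    return points[v_r < threshold]
```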


The FMCW LiDAR sensor can be arranged in a stationary manner and can scan a predefined monitored zone. At least one further FMCW LiDAR sensor can preferably be provided that scans a further monitored zone, with the monitored zones being able to overlap. Shading or blind angles in which no object detection is possible can thereby be avoided. If two or more FMCW LiDAR sensors are arranged with respect to one another such that mutually orthogonal measurement beams are generated, a speed vector of an object scanned by these measurement beams in the plane spanned by the mutually orthogonal measurement beams can be determined by combining the measurements of these beam pairs.
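
Since two orthogonal measurement beams provide an orthonormal basis of the plane they span, the two measured radial speeds are directly the coordinates of the in-plane speed vector. A minimal sketch under that assumption (names are illustrative):

```python
import numpy as np

def in_plane_velocity(d1: np.ndarray, d2: np.ndarray,
                      v_r1: float, v_r2: float) -> np.ndarray:
    """Speed vector of an object in the plane spanned by two mutually
    orthogonal measurement beams.

    d1, d2: unit direction vectors of the beams (d1 . d2 == 0)
    v_r1, v_r2: radial speeds measured along d1 and d2
    """
    return v_r1 * d1 + v_r2 * d2
```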


The FMCW LiDAR sensor can be arranged at a machine, in particular at an automated guided vehicle (AGV) or at a robot. The robot can be entirely in motion (mobile robot) or can carry out movements by means of different axes and joints. The sensor can then co-perform movements of the machine and scan a varying monitored zone.


The sensor can preferably be safe in the sense of the standards named in the introduction or comparable standards. The control and evaluation unit can be integrated in the sensor or can be connected thereto, for instance in the form of a safety controller or of a higher-level controller that also communicates with the machine control. At least some of the functions can also be implemented in a remote system or in a cloud.


The sensor can preferably be attached to or in the vicinity of a hazardous machine part such as a tool tip. If the machine is, for example, a robot having a number of axes, their interaction is not relevant to the sensor since the sensor simply tracks the resulting movement at the hazard location.


In a further development of the invention, a plurality of optoelectronic sensors can be attached to the machine to determine the movement of movable parts of the machine. Complex machines can thus also be monitored in which a punctiform determination of the movement is not sufficient. An example is a robot having a plurality of robot arms and possibly joints. At least one stationary sensor, that is an optoelectronic sensor not moved together with the machine, can additionally observe the machine.


In an embodiment of the invention, the device can be configured for traffic monitoring, with the control and evaluation device being able to be configured to associate the measurement points with vehicle categories using the measurement data, in particular the radial speeds and the polarization dependent intensities, and to evaluate them in a vehicle category specific manner, for example for monitoring vehicle category specific speed restrictions (e.g. 80 k.p.h. for a truck and 120 k.p.h. for a passenger vehicle).


In an embodiment of the invention, the device can be configured for measuring speeding at low speeds, in particular at speeds below 30 k.p.h.


In an embodiment of the invention, the device can be configured for license plate recognition of vehicles, with the control and evaluation device, given a sufficient spatial resolution of the FMCW LiDAR sensor, being able to be configured to detect a license plate of a vehicle by the segmentation of the measurement points without a further camera based sensor system. In an alternative embodiment for the license plate recognition of vehicles, the control and evaluation device can be configured, in the case of an insufficient spatial resolution of the FMCW LiDAR sensor, to trigger a camera and to set its integration time in accordance with the vehicle speed to generate an optimum camera image.


In a further embodiment, the device can be configured for a measurement of a traffic flow, with the control and evaluation unit being configured to determine a measure for the traffic flow by segmentation of the measurement points into dynamic and static objects, for example by defining a static region such as a road as a 2D or 3D region of interest (ROI) in the monitored zone of the sensor and calculating a mean radial speed of all the measurement points within this ROI.
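
A minimal sketch of this ROI evaluation for the 2D case; the rectangular ROI and all names are illustrative assumptions:

```python
import numpy as np

def mean_roi_radial_speed(xy: np.ndarray, v_r: np.ndarray,
                          roi_min: np.ndarray, roi_max: np.ndarray) -> float:
    """Mean radial speed of all measurement points inside a rectangular
    2D ROI (e.g. a road section) as a simple measure of the traffic flow."""
    inside = np.all((xy >= roi_min) & (xy <= roi_max), axis=1)
    return float(v_r[inside].mean()) if inside.any() else 0.0
```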


The traffic flow measurement can be coupled with a classification of the segmented objects to thus separately determine the traffic flow for different classes of road users.


In a further embodiment, the device can be configured for the tracking and trajectory prediction of road users, with the control and evaluation unit being configured to associate measurement points with road users, to calculate trajectory predictions of the road users using the radial speeds of the measurement points associated with the road users, and to forward the trajectory predictions to autonomous vehicles for driving decision making. If 3D speed vectors of the road users are known, either by combining measured radial speeds of orthogonal FMCW LiDAR measurement beam pairs or by merging data of further sensor modalities such as radar, camera, or LiDAR sensors, they can additionally improve the trajectory prediction.


The method in accordance with the invention can be further developed in a similar manner and shows similar advantages in so doing. Such advantageous features are described in an exemplary, but not exclusive manner in the subordinate claims dependent on the independent claims.





The invention will be explained in more detail in the following, also with respect to further features and advantages, by way of example with reference to embodiments and to the enclosed drawing. The Figures of the drawing show:



FIG. 1 an example for a radial speed measurement using an FMCW LiDAR sensor;



FIG. 2 a schematic representation of a device in accordance with the invention for monitoring a robot;



FIG. 3 a schematic representation of a device in accordance with the invention for traffic monitoring;



FIG. 4 a flowchart for an exemplary processing of measurement data of an FMCW LiDAR sensor;



FIG. 5 an exemplary flowchart for monitoring a movement of a robot using a method in accordance with the invention; and



FIG. 6 an exemplary flowchart for avoiding a collision of two vehicles in an I2V environment using a method in accordance with the invention.





The concept of the radial speed measurement using an FMCW LiDAR sensor 12 is shown for a three-dimensional example in FIG. 1. If an object 38 moves along a direction of movement 40 relative to the FMCW LiDAR sensor 12, the FMCW LiDAR sensor 12 can determine, in addition to the radial distance r of a measurement point 20 scanned once by a transmitted light beam 14 in a time-discrete manner at an azimuth angle φ and a polar angle θ, the radial speed vr of the measurement point 20 of the object 38 in the direction of the FMCW LiDAR sensor 12. The FMCW LiDAR sensor 12 additionally has a polarization analyzer (not shown) that is configured to measure polarization dependent intensities I⊥, I∥ of the transmitted light beam 14 remitted or reflected by the measurement point 20.


This information (radial distance r, radial speed vr, polarization dependent intensities I⊥, I∥) is directly available with a single measurement, that is a time discrete scanning of the measurement point 20. Unlike measurement processes that only deliver spatially resolved radial distances, that is three-dimensional positions, the necessity of a second measurement, and in particular the necessity of first locating, in the measurement data of the second measurement, the measurement points that correspond to the measurement points of the first measurement, is thus dispensed with for the identification of moving objects.


In the case of a static FMCW LiDAR sensor, every measurement point having a radial speed of zero is as a rule associated with a static object, provided that the latter does not move tangentially to the measurement beam of the sensor. Due to the finite object extent and the high spatial resolution of the FMCW LiDAR sensor, practically every moving object will have at least one measurement point 20 having a radial speed vrn different from zero with respect to the FMCW LiDAR sensor 12. Static and moving objects, or objects moving away and approaching in mobile applications, can therefore already be distinguished by one measurement of the FMCW LiDAR sensor 12. With an anti-collision monitoring, for example, measurement points or objects moving away can thus be discarded. The processing effort in the further evaluation of the measurement data is reduced by a corresponding data reduction.



FIG. 2 shows a schematic representation of a device 10 in accordance with the invention for monitoring a robot 24. An FMCW LiDAR sensor 12 transmits transmitted light beams 14.1, . . . , 14.n into a three-dimensional monitored zone 16 and generates measurement data Mn 18 from transmitted light reflected or remitted back to the FMCW LiDAR sensor 12 by measurement points 20.1, . . . , 20.n in the monitored zone 16. A limited number of exemplary transmitted light beams 14.1, . . . , 14.n and measurement points 20.1, . . . , 20.n is shown; the actual number results from the size of the monitored zone 16 and the spatial resolution of the scan. The measurement points 20.1, . . . , 20.n can represent persons 22, robots 24, or also boundaries of the monitored zone 16 such as floors 30 or walls.


The measurement data Mn 18 of the FMCW LiDAR sensor 12 received by the control and evaluation unit 32 comprise the radial distances rn of the measurement points 20.1, . . . , 20.n from the FMCW LiDAR sensor 12, the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 20.1, . . . , 20.n, and the radial speeds vrn of the measurement points 20.1, . . . , 20.n for every time discrete scan, where the radial speed vrn is the speed component of a measurement point 20.1, . . . , 20.n at which the measurement point 20.1, . . . , 20.n moves toward the FMCW LiDAR sensor 12 or away from the FMCW LiDAR sensor 12.


The measurement data Mn 18 are evaluated by a control and evaluation unit 32, with the control and evaluation unit 32 being configured to segment the measurement points 20.1, . . . , 20.n using the radial speeds vrn of the measurement points 20.1, . . . , 20.n and the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 20.1, . . . , 20.n and to combine them into object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3 and/or objects. Based on this detection, the control and evaluation unit 32 can generate a safety relevant signal for triggering a safety relevant action. The safety relevant action can, for example, be the activation of a warning light 34 or the stopping of the robot 24. In the embodiment, the control and evaluation unit 32 is directly connected to the warning light 34 and to the robot 24, that is it triggers the safety relevant action itself. Alternatively, the control and evaluation unit 32 can forward a safety relevant signal to a higher-level safety controller (not shown) via an interface 36, or the control and evaluation unit 32 can itself be part of a safety controller.



FIG. 3 shows a schematic representation of a device 90 in accordance with the invention for traffic monitoring. An FMCW LiDAR sensor 92 is arranged at a so-called toll gantry or traffic sign gantry 94 to detect vehicles, in this case a truck 96 and a car 98 on a road 100. The FMCW LiDAR sensor 92 transmits transmitted light beams 102.1, . . . , 102.n into a three-dimensional monitored zone 106 of the sensor 92 and generates measurement data from transmitted light reflected or remitted back to the sensor 92 by measurement points 104.1, . . . , 104.n in the monitored zone 106. The FMCW LiDAR sensor 92 is arranged above or laterally to the road 100 to be monitored such that both vehicles 96, 98 are detected simultaneously by the FMCW LiDAR sensor 92, that is during a time discrete scanning of the monitored zone 106 by the FMCW LiDAR sensor 92.


The FMCW LiDAR sensor 92 is configured to detect polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 104.1, . . . , 104.n so that the measurement data generated by the FMCW LiDAR sensor 92 comprise radial distances rn and radial speeds vrn of the measurement points 104.1, . . . , 104.n and the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 104.1, . . . , 104.n, where the radial speed vrn is the speed component of a measurement point 104.1, . . . , 104.n at which the measurement point 104.1, . . . , 104.n moves toward the FMCW LiDAR sensor 92 or away from the FMCW LiDAR sensor 92.


The measurement data are evaluated by a control and evaluation unit 32 (not shown), with the control and evaluation unit 32 being configured to segment the measurement points 104.1, . . . , 104.n using the radial speeds vrn of the measurement points 104.1, . . . , 104.n and the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 104.1, . . . , 104.n and to combine them into object segments, in this case vehicle parts 96.1, 96.2, 96.3, 98.1, 98.2 and/or objects, in this case vehicles 96, 98.


With vehicles 96, 98 driving next to one another at the same speed, the measured radial speeds vrn of the measurement points 104.1, . . . , 104.n will not differ or will only differ insubstantially. The segmentation of the measurement points 104.1, . . . , 104.n can be improved by the use of the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 104.1, . . . , 104.n since the polarization dependent intensities I⊥n, I∥n as a rule differ due to different surface properties of the object segments 96.1, 96.2, 96.3, 98.1, 98.2 and/or objects 96, 98.



FIG. 4 shows an exemplary processing in accordance with the invention of the measurement data detected by the FMCW LiDAR sensor by the control and evaluation unit in a flowchart 42. After the reception 44 of the measurement data, the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n are segmented in a segmentation step 46 and combined into objects 22, 24, 96, 98, 100 and/or object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2, with the spatially resolved radial speeds vrn of the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n and the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n being considered in addition to the spatial coordinates of the measurement points typically used for the segmentation 46. Object segments can, for example, be individual movable components 24.1, 24.2, 24.3 of a robot 24, body parts 22.1, 22.2, 22.3 of a person 22, or vehicle parts 96.1, 96.2, 96.3, 98.1, 98.2.


The segmentation 46 can take place in accordance with known processes of digital image processing or of machine vision such as

    • pixel oriented processes in a gray scale image by means of threshold processes;
    • edge oriented processes such as the Sobel or Laplace operator and a gradient search;
    • region oriented processes such as “region growing”, “region splitting”, “pyramid linking”, or “split and merge”;
    • model based processes such as the Hough transformation; or
    • texture oriented processes.


Special processes for segmenting three-dimensional datasets are furthermore known under the term "range segmentation". Range segmentation is described, for example, in the following scientific publications:

    • "Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation" (Bogoslavskyi et al., 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, DOI: 10.1109/IROS.2016.7759050)
    • “Laser-based segment classification using a mixture of bag-of-words”. (Behley et al., 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, DOI: 10.1109/IROS.2013.6696957)
    • “On the segmentation of 3d lidar point clouds” (Douillard et al., 2011 IEEE International Conference on Robotics and Automation, DOI: 10.1109/ICRA.2011.5979818)


The segmentation 46 of the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n can take place more efficiently and accurately using the above-named processes by the use of the radial speed vrn in addition to the radial distance rn and the intensity In of the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n. Measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n having radial speeds vrn smaller than, greater than, or equal to a predefined threshold value can be discarded and not supplied to any further evaluation. In the case of an anticollision function, for example, measurement points of an object and/or object segment that move with the sensor (vr=0) or that move away from the sensor (vr>0) can be discarded. If an object and/or object segment is scanned by a plurality of spatially discrete measurement points and if the associated radial speeds are distinguishable, static and dynamic objects and/or object segments can be distinguished, and stationary objects and/or object segments such as floors 30, lanes 100, or walls can thus already be discarded before or during the segmentation 46 of the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n, reducing the processing effort by data reduction. Measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n having similar or identical radial speeds vrn can moreover be segmented more accurately by the use of the polarization dependent intensities I⊥n, I∥n of the transmitted light reflected or remitted by the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n.
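
A sketch of how a region-growing segmentation could take the radial speed and the polarization ratio into account in its similarity criterion, assuming Cartesian point coordinates and a k-d tree neighborhood search; thresholds and names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_segments(xyz: np.ndarray, v_r: np.ndarray, pol_ratio: np.ndarray,
                  radius: float = 0.3, dv_max: float = 0.2,
                  dpol_max: float = 0.1) -> np.ndarray:
    """Region growing over the point cloud: a neighbor joins a segment only
    if both its radial speed and its polarization ratio are similar."""
    tree = cKDTree(xyz)
    labels = np.full(len(xyz), -1, dtype=int)
    current = 0
    for seed in range(len(xyz)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(xyz[i], r=radius):
                if (labels[j] == -1
                        and abs(v_r[j] - v_r[i]) <= dv_max
                        and abs(pol_ratio[j] - pol_ratio[i]) <= dpol_max):
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```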


In the next step, a feature extraction 48 of the objects 22, 24, 30, 96, 98 and/or object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2 defined during the segmentation 46 takes place. Typical features that can be extracted in the processing of the measurement data from the objects 22, 24, 30, 96, 98 and/or object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2 are, for example, the width, the number of measurement points, or the length of the periphery of the objects and/or object segments, or further features such as are described, for example, in the scientific publication "A Layered Approach to People Detection in 3D Range Data" (Spinello et al., Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010). In accordance with the invention, these features can be expanded by features that are based on the radial speeds of the objects 22, 24, 30, 96, 98 and/or object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2. For this purpose, radial speeds of the objects and/or object segments are first determined, for example by the application of trigonometric functions to the radial speeds of the measurement points representing the respective object and/or object segment. Statistical measures of the radial speeds of the objects and/or object segments such as the mean value, standard deviation, higher-order moments, or histograms that are characteristic of movements of a robot and/or person can then be used as additional object features or object segment features, for example.


After the feature extraction 48, a classification 50 of the objects 22, 24, 96 and/or object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2 takes place using known classification processes such as Bayes classifiers, support vector machines, or artificial neural networks. The feature space is searched for groups of features that define an object as part of the classification. In this respect, the above-listed statistical measures of the radial speeds of individual objects 22, 24, 30, 96, 98, 100 and/or object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2 can be used in combination with a priori information to define feature spaces that can, for example, classify persons 22 or vehicles 96 based on their radial speed and can thus distinguish them.
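
As a sketch, such a classification could be realized with a support vector machine operating on the feature vectors from the feature extraction 48; the use of scikit-learn and the availability of labeled training scans are assumptions made for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_object_classifier(X: np.ndarray, y: np.ndarray):
    """Train a classifier on per-object feature vectors (e.g. mean and
    standard deviation of the radial speeds plus histogram bins); y holds
    class labels such as "person", "robot", or "vehicle"."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    return clf
```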


In a further step 52, the determination of a movement pattern of at least one of the object segments 22.1, 22.2, 22.3, 24.1, 24.2, 24.3, 96.1, 96.2, 96.3, 98.1, 98.2 now takes place using the radial speeds of the measurement points 20.1, . . . , 20.n, 104.1, . . . , 104.n associated with the at least one object segment.


The result of the determination of the movement pattern 52 can, after the output 54, be further processed by the control and evaluation unit 32, for example to generate a safety relevant signal or to recognize a state of an object segment, or can be forwarded to a higher-level controller (not shown) via the interface 36.



FIG. 5 shows an exemplary flowchart 54 for monitoring a movement of a robot using a method in accordance with the invention. As described above, the steps of segmentation 46 of the measurement data Mn, feature extraction 48, and classification 50 take place after reception 44 of the measurement data Mn. A determination 56 of representative parameters such as radial distances, intensities, and radial speeds of the segments 24.1, 24.2, 24.3 takes place for segments 24.1, 24.2, 24.3 of the robot arm identified in the classification 50.


A recognition of a movement pattern 58 takes place based on the measured radial speeds of the measurement points associated with previously classified segments 24.1, 24.2, 24.3. Unlike the typical determination of a “rigid scene flow” based on 3D position data such as described, for example, in

    • Dewan, Ayush, et al. “Rigid scene flow for 3d lidar scans.” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016.


      or
    • Liu, Xingyu, Charles R. Qi, and Leonidas J. Guibas. “Flownet3d: Learning scene flow in 3d point clouds.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019,


      the measured radial speed values can be used directly for recognizing a movement pattern so that in particular two scans Mn,l and Mn,l-1 of the monitored zone consecutive in time are not absolutely necessary. In a comparison step 60, a comparison of the movement pattern 58 with a priori information on expected desired movements of the segments 24.1, 24.2, 24.3 of the robot arm takes place. On a negative result of the comparison 60 (for example a movement deviation above a specified degree of tolerance), a safety relevant action 62 is initiated, for example a switching off of the robot 24.
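
A minimal sketch of the comparison step 60, assuming the expected desired movement is available as a reference radial speed profile per robot arm segment (names and the tolerance are illustrative):

```python
import numpy as np

def movement_deviates(measured_v_r: np.ndarray, expected_v_r: np.ndarray,
                      tolerance: float = 0.1) -> bool:
    """True if the measured radial speed profile of a segment deviates from
    the expected profile by more than the tolerance, in which case a safety
    relevant action (e.g. switching off the robot) is to be initiated."""
    return bool(np.max(np.abs(measured_v_r - expected_v_r)) > tolerance)
```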



FIG. 6 shows an exemplary flowchart 66 for avoiding a collision of two vehicles 96, 98 in an I2V environment using a method in accordance with the invention. After reception 44 of the measurement data Mn by the FMCW LiDAR sensor 92, the steps described above of segmentation 46 of the measurement data, feature extraction 48, and classification 50 take place to identify vehicle parts 96.1, 96.2, 96.3, 98.1, 98.2 and/or the vehicles 96, 98 themselves. In the following step, a movement forecast 68 of the vehicle parts 96.1, 96.2, 96.3, 98.1, 98.2 and/or of the vehicles 96, 98 takes place by means of a Kalman filter. In comparison with implementations of a Kalman filter known from radar technology, for example, the higher spatial resolution of an FMCW LiDAR sensor improves its performance. A time to collision (TTC) can be determined 70 from the forecast movements as a quantitative measure of the risk of collision and, on a critical comparison result, for example an imminent risk of collision, a warning signal can be transmitted to the vehicles 96, 98 as part of the infrastructure to vehicle (I2V) communication.
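
A sketch of a simple TTC determination under a constant-velocity assumption, with forecast positions and velocity vectors of the two vehicles as inputs (a simplification of the Kalman filter based forecast; names are illustrative):

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2) -> float:
    """TTC as range divided by the closing speed along the line of sight;
    returns inf if the two objects are not approaching each other."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    dist = np.linalg.norm(dp)
    if dist == 0.0:
        return 0.0  # already colliding
    closing_rate = -np.dot(dp, dv) / dist  # > 0 if approaching
    if closing_rate <= 0.0:
        return float("inf")
    return float(dist / closing_rate)
```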

Claims
  • 1. A device for detecting objects in a monitored zone comprising at least one FMCW LiDAR sensor for transmitting transmitted light beams into the monitored zone for scanning a plurality of measurement points and for generating measurement data from transmitted light remitted or reflected by the measurement points, with the measurement data comprising radial speeds of the measurement points and polarization dependent intensities of the transmitted light remitted or reflected by the measurement points; and a control and evaluation unit for evaluating the measurement data, wherein the control and evaluation unit is configured to segment the measurement points using the radial speeds of the measurement points and the polarization dependent intensities of the transmitted light remitted or reflected by the measurement points and to combine them into objects and/or object segments.
  • 2. The device in accordance with claim 1, wherein the control and evaluation unit is configured to filter the measurement points using the polarization dependent intensities of the transmitted light remitted or reflected by the measurement points and/or the radial speed of the measurement points.
  • 3. The device in accordance with claim 1, wherein the control and evaluation unit is configured to determine radial speeds of the objects and/or object segments and to extract features of the objects and/or object segments using the radial speeds of the objects and/or object segments.
  • 4. The device in accordance with claim 3, wherein the control and evaluation unit is configured to classify the objects and/or object segments using the radial speeds of the objects and/or object segments.
  • 5. The device in accordance with claim 1, wherein the control and evaluation unit is configured to discard measurement points having a radial speed under a predefined threshold value for the evaluation.
  • 6. The device in accordance with claim 1, wherein the FMCW LiDAR sensor is stationary.
  • 7. The device in accordance with claim 6, wherein the device has at least one further FMCW LiDAR sensor having a further monitored zone and the monitored zone at least partly overlaps the further monitored zone.
  • 8. The device in accordance with claim 1, wherein the FMCW LiDAR sensor is movable.
  • 9. The device in accordance with claim 8, wherein the FMCW LiDAR sensor is fastened to a robot arm.
  • 10. The device in accordance with claim 9, wherein the FMCW LiDAR sensor is fastened to a driverless transport vehicle.
  • 11. A method of detecting objects in a monitored zone, said method comprising the steps: transmitting transmitted light beams into the monitored zone by at least one FMCW LiDAR sensor; scanning a plurality of measurement points in the monitored zone; generating measurement data from transmitted light remitted or reflected by the measurement points, with the measurement data comprising radial speeds of the measurement points and polarization dependent intensities of the transmitted light remitted or reflected by the measurement points; and evaluating the measurement data, with the measurement points being segmented using the radial speeds of the measurement points and polarization dependent intensities of the transmitted light remitted or reflected by the measurement points, and being combined into objects and/or object segments.
  • 12. The method in accordance with claim 11, comprising the further steps: determining radial speeds of the objects and/or object segments; and extracting features of the objects and/or object segments using the radial speeds of the objects and/or object segments.
  • 13. The method in accordance with claim 12, comprising the further step: classifying the objects and/or object segments using the radial speeds of the objects and/or object segments.
  • 14. The method in accordance with claim 11, comprising the further step: filtering the measurement points using the radial speed of the measurement points and/or of the polarization dependent intensities of the transmitted light remitted or reflected by the measurement points.
  • 15. The method in accordance with claim 14, wherein measurement points having a radial speed under a predefined threshold value are discarded for the evaluation.
Priority Claims (1)
  Number       Date      Country   Kind
  22186057.0   Jul 2022  EP        regional