The invention relates to a sensor for controlling automatic doors.
The sensor for controlling an automatic door comprises a laser scanner for detecting the presence of an object within a predefined detection area of its laser curtain, and a presence detection output port to which a presence detection signal is fed. This allows safe operation of the door. The laser scanner derives points of reflection by a distance measurement using time-of-flight technology.
Such door sensors are optimized with regard to their behavior towards passing objects, which are usually persons.
In this respect, Nishida, Daiki, et al., "Development of Intelligent Automatic Door System", 2014 IEEE International Conference on Robotics and Automation, 31 May 2014, pages 6368-6374, discloses an intelligent door sensor that includes the evaluation of speed and direction in its control decisions.
In addition to door control sensors, sensors for detecting persons are known, which evaluate whether or not a detected object is a human being.
Akamatsu, Shun-Ichi, et al., "Development of a person counting system using 3D laser scanner", 2014 IEEE International Conference on Robotics and Biomimetics, IEEE, 5 Dec. 2014, pages 1983-1988, discloses a laser scanner for counting persons only. The application is restricted to counting persons and does not provide any presence detection for controlling an automatic door.
WO 2012/042043 A1 also discloses a person detection unit that implements an access control system based on a laser scanner. The system, for example, forbids access if two or more persons attempt to enter a door at the same time.
It is the object of the invention to improve the control possibilities of a sensor to allow a more specific behavior.
In a known manner, a door sensor for detecting the presence of an object within a predefined detection area of the scanning field comprises a laser scanner with at least one laser curtain and a distance data acquisition unit that is embodied to acquire the distances of the points of reflection of the laser beam reflected from an object by evaluating the time of flight.
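By way of illustration only, the following minimal Python sketch shows the time-of-flight relation that such a distance data acquisition unit relies on; the function name and the example value are illustrative and not taken from the claimed implementation.

```python
# Illustrative sketch: distance of a point of reflection derived from the
# round-trip time of flight of the laser pulse.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_time_of_flight(tof_seconds: float) -> float:
    """Return the distance in metres for a measured round-trip time of flight."""
    return SPEED_OF_LIGHT * tof_seconds / 2.0

# Example: a round trip of 20 ns corresponds to roughly 3 m.
print(distance_from_time_of_flight(20e-9))  # ~2.998 m
```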
The sensor further comprises a presence detection unit to which the distance data acquisition unit forwards the distance data. The presence detection unit evaluates whether an object is detected within the predefined detection area by analysing the distance data. The presence detection unit is embodied to create presence detection information that is fed to at least one sensor output port. Usually this signal is used by door controllers for safety purposes.
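A minimal sketch of such a presence check is given below, assuming a simple rectangular detection area in the plane of the laser curtain; the class and parameter names are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch: presence detection by testing whether any point of
# reflection lies inside a predefined rectangular detection area.
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class DetectionArea:
    w_min: float  # lateral limits of the detection area [m]
    w_max: float
    z_min: float  # height limits of the detection area [m]
    z_max: float

    def contains(self, w: float, z: float) -> bool:
        return self.w_min <= w <= self.w_max and self.z_min <= z <= self.z_max

def presence_detected(points: Iterable[Tuple[float, float]], area: DetectionArea) -> bool:
    """points: (width, height) positions of the acquired points of reflection."""
    return any(area.contains(w, z) for w, z in points)
```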
According to the invention, the sensor further comprises an object information unit comprising a human body identification unit, where the object information unit receives the distance data and the human body identification unit uses the distance data to determine whether the detected object is a human body. The object information unit creates object information that is fed to the at least one output port.
Preferably, the signal of the presence detection unit is processed in real time, whereas the result of the human body identification unit is based on an accumulation of distance data. For example, the presence detection signal may have a response time of less than 90 ms. The sensor is able to detect objects smaller than 10 cm.
Based on the additional information gathered, a door controller may act differently when detecting the presence of a human body than when detecting the presence of a non-human object.
According to a further aspect of the invention, the object information unit may comprise a counting unit to count the number of human bodies detected by the sensor, so that the counting information can be provided on an output port.
In addition to the basic information that is essential for controlling and/or safeguarding an automatic door, further information such as the counting information can be used for controlling a door, e.g. keeping it closed after a certain number of human bodies has entered. The additional information can also be derived for statistical purposes.
Furthermore, the laser scanner may generate multiple laser curtains, and the object information unit may comprise a motion detection unit for detecting motion and, in particular, for identifying the moving direction of an object.
This object information can be used to control the automatic door, for example to trigger the opening of the door once an approaching object is detected. The object information unit can, therefore, provide some sort of approach signal on the at least one output port, independent of the object type.
Besides this, by deriving the information whether an object is a human body together with the information about its moving direction, a more precise counting can take place. According to this option, a net count in a certain direction can be defined.
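The following small sketch illustrates such a direction-dependent net count, assuming that the human-body decision and a signed direction are already available; the function is an illustrative example only.

```python
# Illustrative sketch: net count per direction, ignoring non-human objects.
def update_net_count(net_count: int, is_human_body: bool, direction: int) -> int:
    """direction: +1 for the counting direction, -1 for the opposite direction."""
    return net_count + direction if is_human_body else net_count

# Example: two persons enter, one person leaves, one non-human object passes.
count = 0
for is_human, direction in [(True, +1), (True, +1), (True, -1), (False, +1)]:
    count = update_net_count(count, is_human, direction)
print(count)  # 1
```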
According to an embodiment of the invention, the sensor comprises one output port, where the presence detection information and the object information are fed to the same at least one output port. A CAN bus or a LON bus may be a suitable output port for supporting both types of information.
In a further aspect of the invention, the sensor comprises at least two separate output ports, where a first output port is dedicated to the presence information and a second output port is dedicated to the object information. The first output port may comprise a relay output, whereas the second output port could, for example, be based on an Ethernet protocol.
The human body identification unit may be embodied as a computer-implemented method on a processing unit, e.g. a microprocessor that runs a computer-implemented procedure, and may contain further program parts that form further units.
The method for determining whether a detected object is a human body, based on the distances of the measured points of reflection, is described in detail below.
The human body identification unit comprises an evaluation unit that combines the distance information of the points of reflection with the direction of the pulse to retrieve a position within the monitored area. The evaluation unit combines the points of reflection belonging to a detected object in an evaluation plane having a Z-axis related to the height and an axis perpendicular to the Z-axis related to the width, i.e. the direction of the lateral extension of the laser curtain.
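As an illustration, the following sketch converts a single measurement, i.e. a distance and a pulse angle, into a (width, height) position in the evaluation plane; it assumes an overhead-mounted scanner whose pulse angle is measured from the vertical, which is an assumption made for this example and not a limitation of the invention.

```python
# Illustrative sketch: combining distance and pulse direction into a position
# in the evaluation plane (width axis W, height axis Z).
import math

def to_evaluation_plane(distance_m: float, pulse_angle_rad: float,
                        mounting_height_m: float) -> tuple[float, float]:
    """Return (w, z): lateral position and height above the floor.
    pulse_angle_rad is measured from the vertical (straight down)."""
    w = distance_m * math.sin(pulse_angle_rad)
    z = mounting_height_m - distance_m * math.cos(pulse_angle_rad)
    return w, z
```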
According to the invention, the evaluation plane is evaluated based on the density distribution over the Z-axis, and the evaluation result is compared to anthropometric parameters by the evaluation unit.
The monitored area is defined by the laser curtain and has a vertical height direction and two lateral directions, a depth and a width, where all are perpendicular to one another. In case of a single vertical laser curtain the depth of the monitored area equals the depth of the laser curtain.
The evaluation plane may have a Z-axis that matches the vertical axis of the vertical plane and/or an evaluation width extension that matches the width of the monitored area. Nevertheless, the Z-axis e.g. may be defined along a laser curtain inclined to the vertical direction, but the width may still correspond to the width of the laser curtain.
Anthropometric parameters according to the invention are human body measures and/or human body proportions.
Anthropometric parameters are, in particular, parameters that relate to the height, width, shoulder width, shoulder height, head width, and total height of a human body.
Based on the density distribution in the evaluation plane, the evaluation unit decides whether or not the density distribution corresponds to that of a human body.
To determine whether a detected object is a human body, the density distribution along the Z-axis is evaluated, where the Z-axis represents the height of the detected object. The density distribution corresponding to a human body comprises two peaks, where one peak is approximately at the top of the head and the second peak is approximately at the top of the shoulders.
The determination is preferably done by determining the ratio of the height of the head to the height of the shoulder. As the ratio of head height to shoulder height is an anthropometric parameter that is essentially equal for all human beings and, above all, does not depend on absolute height, a reliable distinction of human beings is possible based on the evaluation of the density distribution.
In addition to the density distribution, the evaluation unit may evaluate the width of an object in a further step. To this end, it analyses the points of reflection in the evaluation plane belonging to an object at the positions of the peaks of the density distribution and determines the effective widths of the head and the shoulders of the human body.
By integrating this information, the evaluation can be performed more precisely. A valid head-to-shoulder width ratio can be predefined to check whether it matches the widths derived from the measurement, and this result can be compared to the result of the density distribution evaluation. If both evaluations are positive, it is quite likely that the detected object is a human body.
Furthermore, the evaluation unit may count the number of points of reflection within the peak zones of the density distribution evaluation. If the number is below a predefined number, the measurement will be disregarded.
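A minimal sketch of this density-distribution check is given below; the bin size, the accepted shoulder-to-head height ratio and the minimum number of points per peak are illustrative assumptions, and in practice the histogram would typically be smoothed or evaluated with prominence thresholds.

```python
# Illustrative sketch: density distribution along the Z-axis, detection of the
# head and shoulder peaks and comparison of the height ratio with an
# anthropometric range.
import numpy as np
from scipy.signal import find_peaks

def looks_like_human_body(points_wz: np.ndarray,
                          bin_size: float = 0.05,
                          ratio_range: tuple = (0.75, 0.90),
                          min_points_per_peak: int = 5) -> bool:
    """points_wz: (N, 2) array of (width, height) coordinates in metres of the
    points of reflection of one evaluation object."""
    z = points_wz[:, 1]
    bins = np.arange(0.0, z.max() + bin_size, bin_size)
    density, edges = np.histogram(z, bins=bins)
    peak_idx, _ = find_peaks(density)
    if len(peak_idx) < 2:
        return False                 # only one peak: measurement is discarded
    # the two highest-lying peaks are taken as head (H1) and shoulder (H2)
    top = sorted(peak_idx, key=lambda i: edges[i], reverse=True)[:2]
    h1, h2 = float(edges[top[0]]), float(edges[top[1]])
    if density[top[0]] < min_points_per_peak or density[top[1]] < min_points_per_peak:
        return False                 # too few points of reflection in a peak zone
    if h1 <= 0.0:
        return False
    return ratio_range[0] <= h2 / h1 <= ratio_range[1]
```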
The movement of the human body takes place in a moving direction, where the moving direction basically is a vector of width and depth. Especially in door applications the moving direction is perpendicular to the width direction and therefore, the orientation of the shoulders of a human body is usually aligned with the width direction.
According to the invention, single evaluation objects can be identified out of all points of reflection of the evaluation plane and a subset of points of reflection is created for each evaluation object, which is then subjected to density distribution analysis.
In this way, a decision can be made for each present evaluation object as to whether or not it corresponds to a human body. As a consequence, detection sensors can base their decisions on controlling doors or lights on the information of whether a detected object is a human body or not.
The determination of single evaluation objects is done by the evaluation unit, where the evaluation plane, containing all points of reflection, is parsed by a neighbor zone from the top to the bottom of the plane. Once a point or points of reflection are newly present in the neighbor zone, all points of reflection within the neighbor zone are taken into account and the newly present point of reflection is assigned to an evaluation object. It is assigned to a new evaluation object if there is no other point atop the newly present point within the neighbor zone; otherwise it is assigned to the existing evaluation object whose mathematical center of gravity has the smallest distance to the newly present point.
According to this procedure, all points of reflection are grouped into subsets of points of reflection, each belonging to an evaluation object.
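The following sketch illustrates this grouping step; the size of the neighbor zone is an illustrative assumption.

```python
# Illustrative sketch: parsing the points of reflection from top to bottom and
# assigning each point either to a new evaluation object or to the existing
# object with the closest mathematical centre of gravity.
import numpy as np

def group_into_evaluation_objects(points_wz, zone_width=0.4, zone_height=0.3):
    """points_wz: (N, 2) array of (width, height). Returns a list of evaluation
    objects, each being a list of (w, z) tuples."""
    pts = np.asarray(points_wz, dtype=float)
    order = np.argsort(pts[:, 1])[::-1]   # parse from the top to the bottom
    objects = []
    for idx in order:
        w, z = pts[idx]
        # existing objects that already have a point atop the new point
        # within the neighbour zone
        candidates = [obj for obj in objects
                      if any(abs(pw - w) <= zone_width and 0.0 <= pz - z <= zone_height
                             for pw, pz in obj)]
        if not candidates:
            objects.append([(w, z)])      # no point atop: start a new evaluation object
        else:
            def distance_to_cog(obj):
                cog = np.mean(obj, axis=0)
                return float(np.hypot(cog[0] - w, cog[1] - z))
            min(candidates, key=distance_to_cog).append((w, z))
    return objects
```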
According to this evaluation even two or more people walking in parallel through the laser curtain can be distinguished.
According to a further improvement of the invention the points of reflection can be time integrated on the evaluation plane. This leads to a higher density of points of reflection and, therefore, evaluation objects can be better distinguished and detected objects can be classified in a more reliable way.
The time integration can be done based on a fixed time interval after a first detection of a detected object occurred.
According to a further improvement of the invention, the time integration is done in such a way that the subset of points of reflection is assigned to a time object by projecting the points of reflection into a width-time plane, where the height of the points of reflection is ignored. The extension of the time axis depends on a predefined accumulation/integration time.
The points of reflection projected into the time-width plane are clustered into subsets assigned to time objects. Each time object is the main set of points of reflection used to generate the evaluation plane, where the time component of the points of reflection is neglected but the height is taken into account.
According to this procedure a more precise decision on the delimitation of time objects is possible.
Therefore, the acquired information is more accurate with regard to the number of human beings passing successively.
The clustering of the time objects is preferably done using the DBSCAN algorithm.
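A minimal sketch of this clustering step, using the publicly available scikit-learn implementation of DBSCAN, is given below; the eps and min_samples values are illustrative, and in practice the time axis may need rescaling so that a single eps value is meaningful for both axes.

```python
# Illustrative sketch: clustering the points of reflection projected into the
# width-time plane into time objects with DBSCAN; noise points are dropped.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_time_objects(points_wt, eps=0.15, min_samples=5):
    """points_wt: (N, 2) array of (width [m], time [s]) projections accumulated
    over the integration period. Returns a dict mapping each time-object label
    to its subset of points of reflection."""
    pts = np.asarray(points_wt, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return {int(lab): pts[labels == lab] for lab in set(labels) if lab != -1}
```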
Preferably, the scanner generates multiple laser curtains that are tilted with respect to each other. Due to several laser curtains a more precise picture can be taken and the motion direction of the object can be taken into account.
The scanner preferably evaluates and/or generates multiple laser curtains subsequently.
By taking into account at least two curtains that are tilted relative to each other, two depth positions perpendicular to the width of the scanning plane can be evaluated. As the two planes are scanned successively, the movement direction of a human being can be detected, because the center of gravity in scanning time shifts in the time-width diagram along the moving direction of the detected object.
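One possible reading of this centre-of-gravity comparison is sketched below: the object crosses the two depth-separated curtains at different times, so comparing the mean scanning time per curtain yields the direction. The curtain naming and the time-based comparison are assumptions made for this example.

```python
# Illustrative sketch: moving direction derived from the shift of the centre of
# gravity (in scanning time) between two depth-separated laser curtains.
import numpy as np

def moving_direction(points_first_curtain, points_second_curtain) -> int:
    """Each argument: (N, 2) array of (width, time) points of reflection of one
    detected object accumulated over a short period (e.g. 500 ms).
    Returns +1 if the object crosses the first curtain earlier, -1 otherwise."""
    t_first = float(np.mean(np.asarray(points_first_curtain)[:, 1]))
    t_second = float(np.mean(np.asarray(points_second_curtain)[:, 1]))
    return +1 if t_first < t_second else -1
```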
When multiple laser curtains are used, the predefined accumulation time for time integration is longer than or equal to the time necessary for scanning all laser curtains of the sensor.
The evaluation unit may reject points of reflection that clearly result from background effects. Thereby, background noise can be reduced at this stage.
The invention further refers to a human recognition sensor for analysing an object in a monitored area and deciding whether or not the object is a human body, comprising a laser scanner and an evaluation unit that is enabled to execute a method as described above.
A further aspect refers to a sensor that generates at least one laser curtain that is tilted less than 45° relative to the vertical axis. This allows an overhead scanning so that human bodies may be recognized when passing below the sensor.
The human recognition sensor may comprise a computational unit, preferably a microprocessor, microcontroller or FPGA on which the evaluation unit is implemented as software program, executing the above described method.
Further advantages, features and potential applications of the present invention may be gathered from the description which follows, in conjunction with the embodiments illustrated in the drawings.
Throughout the description, the claims and the drawings, those terms and associated reference signs will be used as are notable from the enclosed list of reference signs. In the drawings is shown
The laser scanner of the embodiment according to
According to this setup, the evaluation unit 16 receives the data of the points of reflection relative to the laser scanner.
The evaluation unit 16 then analyses the points of reflection according to the invention, as will be further described with reference to the following figures, and as a result outputs a signal containing the information whether or not a detected object is a human body.
In contrast to the example of
A further difference is shown in
The way in which the distance data are forwarded is independent of whether a common output port or separate output ports are used. Therefore, these aspects can be combined as required.
The evaluation unit of the sensor 20 is set up in such a way that it evaluates an evaluation plane EP that matches the laser curtain 22. Therefore, the evaluation plane EP has a Z-axis in the vertical direction and the same width axis W as the laser curtain 22.
According to the invention, the evaluation unit 16 now computes a density distribution along the Z-axis of the evaluation plane EP, in which two peaks are expected to be derivable.
If there is e.g. only one peak, the measurement is discarded and the evaluation object is not identified as a human body.
If there are two peaks 24, 26, as would be the case when detecting a human body, the positions H1, H2 of the peaks on the Z-axis are taken. The first peak 24 is assumed to provide the overall height H1 of the object, corresponding to the head of the human body, and the second peak 26 is assumed to be at the shoulder height H2 of the person. The ratio of the overall height H1 and the shoulder height H2 is compared to a range of predefined human body proportions. Furthermore, the head height (the distance between shoulder height and overall height, H1-H2) may be taken into account as well, as human body proportions change with the age of the human being.
According to this, it is not necessary to limit the measurement to a minimum height that might exclude children from detection, as children can also be identified by the above-described evaluation.
Within the evaluation plane EP, the width W2 of the shoulders at the position H2 of the second density peak 26 can be determined. In the area of the first peak 24, the width W1 of the head can be determined. Due to these further parameters, a more precise evaluation of the object with regard to human body recognition can be achieved.
The laser scanner of the human recognition sensor 30 derives the position of the points of reflection of the detected object relative to the laser scanner, where the evaluation unit projects them into the evaluation plane EP as evaluation objects.
The persons P, when moving through the laser curtains 32, 34, produce points of reflection during an acquisition period.
As described in
In this time-width plane, the present points of reflection are clustered into time objects TO_1, TO_2, TO_3. This is done using the DBSCAN algorithm.
The four detected objects passing the laser curtain during the acquisition period in this case lead to the definition of three time objects TO_1, TO_2, TO_3.
An enlarged view of the time object TO_2 shows that there could be more than one detected object within the time object TO_2.
The evaluation unit is further configured to take the points of reflection of each time object and project them into the evaluation plane EP, as shown in
In a next separation step the evaluation unit assigns the points of reflection of each time object TO_1, TO_2, TO_3 to objects.
This is done by analyzing the evaluation plane EP from the top to the bottom and assigning each point to an evaluation object.
The determination of single evaluation objects O1 is done by the evaluation unit, where the evaluation plane EP contains all points of reflection of the time-object TO_2. The evaluation plane EP is parsed by a neighbor zone 40 from the top to the bottom of the evaluation plane EP. Once a point or points of reflection are newly present in the neighbor zone 40, all the points of reflection within the neighbor zone 40 are taken into account and the newly present point of reflection is assigned to an evaluation object; e.g. see
As a result
Each object in this evaluation plane as shown in
According to a further improvement of the invention, the evaluation unit may be enabled to analyse the moving direction of objects. This enables the human recognition sensor to provide direction information together with the object information. For example, this allows counting how many people entered or left a building, or performing the counting internally and providing just the net count on the output port.
The moving direction is analyzed by comparing the accumulated points of reflection of the two curtains 32, 34 over a short period of time, e.g. 500 ms. The points of reflection are projected into a time-width plane, in which the mathematical center of gravity of the present points of reflection is determined for each curtain.
According to the shift of the center of gravity, indicated by the cross in
Number | Date | Country | Kind |
---|---|---|---|
17165848 | Apr 2017 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/059187 | Apr. 10, 2018 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO 2018/189192 | Oct. 18, 2018 | WO | A
Number | Name | Date | Kind |
---|---|---|---|
7042492 | Spinelli | May 2006 | B2 |
7274438 | Doemens | Sep 2007 | B2 |
7495556 | Eubelen et al. | Feb 2009 | B2
7701557 | Doemens et al. | Apr 2010 | B2 |
7940300 | Spinelli | May 2011 | B2 |
8523667 | Clavin | Sep 2013 | B2 |
8955253 | Kanki et al. | Feb 2015 | B2 |
9100367 | Akashika | Aug 2015 | B2 |
9497840 | Wehrens | Nov 2016 | B2 |
9799044 | Takahashi | Oct 2017 | B2 |
20010030689 | Spinelli | Oct 2001 | A1 |
20050078297 | Doemens et al. | Apr 2005 | A1 |
20060139453 | Spinelli | Jun 2006 | A1 |
20060169876 | Zambon | Aug 2006 | A1 |
20060187037 | Eubelen | Aug 2006 | A1 |
20070181786 | Doemens et al. | Aug 2007 | A1 |
20100076651 | Nakakura | Mar 2010 | A1 |
20100191369 | Kim | Jul 2010 | A1 |
20100265049 | Koike | Oct 2010 | A1 |
20110176000 | Budge | Jul 2011 | A1 |
20110237324 | Clavin | Sep 2011 | A1 |
20110249263 | Beck | Oct 2011 | A1 |
20110260848 | Rodriguez Barros | Oct 2011 | A1 |
20120042043 | Akashika et al. | Feb 2012 | A1 |
20130094705 | Tyagi | Apr 2013 | A1 |
20130135438 | Lee et al. | May 2013 | A1 |
20130201291 | Liu | Aug 2013 | A1 |
20130255154 | Kanki et al. | Oct 2013 | A1 |
20150083936 | Wehrens | Mar 2015 | A1 |
20150143459 | Molnar | May 2015 | A1 |
20150186903 | Takahashi et al. | Jul 2015 | A1 |
20150259967 | Kamisawa | Sep 2015 | A1 |
20150261304 | Kamisawa | Sep 2015 | A1 |
20160012297 | Kanga | Jan 2016 | A1 |
20160298809 | Lutz | Oct 2016 | A1 |
20160309065 | Karafin | Oct 2016 | A1 |
20160349835 | Shapira | Dec 2016 | A1 |
20170243373 | Bevensee | Aug 2017 | A1 |
20170300757 | Wolf | Oct 2017 | A1 |
20180089501 | Terekhov | Mar 2018 | A1 |
20190180124 | Schindler | Jun 2019 | A1 |
20190218847 | Agam | Jul 2019 | A1 |
20200386605 | Oren | Dec 2020 | A1 |
Number | Date | Country |
---|---|---|
101293529 | Oct 2008 | CN |
1022214309 | Oct 2011 | CN |
102747919 | Oct 2012 | CN |
203102401 | Jul 2013 | CN |
103632146 | Mar 2014 | CN |
104234575 | Jun 2016 | CN |
205608459 | Sep 2016 | CN |
102014113572 | Mar 2016 | DE |
102015200518 | Jul 2016 | DE |
2395368 | Nov 2010 | EP |
2212498 | Jan 2014 | EP |
3135846 | Mar 2017 | EP |
2004295798 | Oct 2004 | JP |
2010133200 | Jun 2010 | JP |
4907732 | Jan 2012 | JP |
2012215555 | Nov 2012 | JP |
2013061273 | Apr 2013 | JP |
2014142288 | Aug 2014 | JP |
2017014801 | Jan 2017 | JP |
2006285484 | Oct 2019 | JP |
491232 | Jun 2014 | TW |
2012042043 | Apr 2012 | WO |
Entry |
---|
The International Bureau of WIPO, International Preliminary Report On Patentability, Oct. 24, 2019, pp. 1-12, International Application No. PCT/EP2018/059187. |
Japan Patent Office, Office Action, Notice of Reasons for Rejection, Jan. 5, 2022, pp. 1-6, Patent Application No. 2020-504444. |
Japan Patent Office Action, English Translation of the Office Action, English Translation of the Notice of Reasons for Rejection, Jan. 5, 2022, pp. 1-5, Patent Application No. 2020-504444. |
Akamatsu et al., Development of a Person Counting System Using a 3D Laser Scanner, Proceedings of The 2014 IEEE International Conference On Robotics and Biomimetics, Dec. 5-10, 2014, pp. 1983-1988, Bali, Indonesia. |
Nishida et al., Development of Intelligent Automatic Door System, 2014 IEEE International Conference On Robotics & Automation (ICRA), May 31-Jun. 7, 2014, pp. 6368-6374. |
European Patent Office, International Search Report, Jul. 31, 2018, pp. 1-4. |
International Searching Authority, European Patent Office, Written Opinion of the International Searching Authority, International Application No. PCT/EP2018/059187, Jul. 31, 2018, pp. 1-8. |
European Patent Office, Extended European Search Report, Application No. EP 17165848, Oct. 24, 2017, pp. 1-9. |
Chinese Office Action Dated Mar. 23, 2023 With English Translation Appended Thereto, Application Serial No. 201880038289.2; Applicant is Bea Sa, Title Sensor for Controlling an Automatic Door. |
Chinese Office Action Dated Aug. 15, 2023 With English Translation Appended Thereto, Application Serial No. 201880038289.2; Applicant is Bea Sa, Title Sensor for Controlling an Automatic Door. |
Number | Date | Country | |
---|---|---|---|
20210011160 A1 | Jan 2021 | US |