The present disclosure relates to elevator systems and, in particular, to an elevator pit safety net system of an elevator system.
In an elevator system, a hoistway is built into a building and an elevator car travels up and down along the hoistway to arrive at landing doors of different floors of the building. The movement of the elevator is driven by a machine that is controlled by a controller according to instructions received from users of the elevator system. An elevator pit is the space between the hoistway's lowest landing door and the ground at the bottom of the hoistway. The elevator pit typically includes a concrete base slab and certain mechanisms of the elevator system and is typically bordered by four walls. The elevator pit can be accessed by authorized personnel (e.g., a service technician) via a pit ladder. The elevator car should generally be removed from the elevator pit and the elevator system should be non-operative while anyone is accessing the elevator pit, although there are some maintenance procedures requiring the elevator car to be moved while a mechanic is in the elevator pit.
According to an aspect of the disclosure, a safety net system is provided for an elevator system that includes an elevator pit. The safety net system includes a sensor and a processor. The sensor is arranged in a plane along a bottom of the elevator pit and is configured to perform sensing to sense an object disposed along the plane and to generate data corresponding to results of the sensing. The processor is operably coupled to the sensor and is configured to analyze the data and to determine whether the data is indicative of a person in the elevator pit based on analysis results.
In accordance with additional or alternative embodiments, the sensor is a LiDAR sensor.
In accordance with additional or alternative embodiments, the sensor is a millimeter wave RADAR sensor.
In accordance with additional or alternative embodiments, the sensor is an RGBD camera.
In accordance with additional or alternative embodiments, the sensor is one of a LiDAR sensor, a RADAR sensor or a camera.
In accordance with additional or alternative embodiments, the sensor is disposed in a corner of the elevator pit and is configured to sense a two-dimensional (2D) plane extending away from the corner along the bottom of the elevator pit.
In accordance with additional or alternative embodiments, one or more additional sensors are arranged in the plane and are configured to perform sensing to sense the object and to generate additional data corresponding to results of the sensing. The one or more additional sensors are disposed in one or more other corners of the elevator pit and are oriented transversely with respect to the sensor. The processor is operably coupled to the sensor and the one or more additional sensors and is configured to analyze the data generated by the sensor and the additional data generated by the one or more additional sensors and to determine whether the data and the additional data are indicative of a person in the elevator pit based on analysis results.
In accordance with additional or alternative embodiments, at least one of the one or more additional sensors is non-coplanar with respect to the sensor.
In accordance with additional or alternative embodiments, the sensor is configured to generate point cloud data from one or more sensing operations and the processor is configured to analyze the point cloud data from the one or more sensing operations and to determine whether the point cloud data from the one or more sensing operations is indicative of the person in the elevator pit.
According to an aspect of the disclosure, an elevator system is provided and includes an elevator pit and a safety net system. The safety net system includes a sensor and a processor. The sensor is arranged in a plane along a bottom of the elevator pit and is configured to perform sensing to sense an object disposed along the plane and to generate data corresponding to results of the sensing. The processor is operably coupled to the sensor and is configured to analyze the data and to determine whether the data is indicative of a person in the elevator pit based on analysis results.
In accordance with additional or alternative embodiments, the sensor is a LiDAR sensor.
In accordance with additional or alternative embodiments, the sensor is a millimeter wave RADAR sensor.
In accordance with additional or alternative embodiments, the sensor is an RGBD camera.
In accordance with additional or alternative embodiments, the sensor is one of a LiDAR sensor, a RADAR sensor or a camera.
In accordance with additional or alternative embodiments, the sensor is disposed in a corner of the elevator pit and is configured to sense a two-dimensional (2D) plane extending away from the corner along the bottom of the elevator pit.
In accordance with additional or alternative embodiments, one or more additional sensors are arranged in the plane and are configured to perform sensing to sense the object and to generate additional data corresponding to results of the sensing. The one or more additional sensors are disposed in one or more other corners of the elevator pit and are oriented transversely with respect to the sensor. The processor is operably coupled to the sensor and the one or more additional sensors and is configured to analyze the data generated by the sensor and the additional data generated by the one or more additional sensors and to determine whether the data and the additional data are indicative of a person in the elevator pit based on analysis results.
In accordance with additional or alternative embodiments, at least one of the one or more additional sensors is non-coplanar with respect to the sensor.
In accordance with additional or alternative embodiments, the sensor is configured to generate point cloud data from one or more sensing operations and the processor is configured to analyze the point cloud data from the one or more sensing operations and to determine whether the point cloud data from the one or more sensing operations is indicative of the person in the elevator pit.
According to an aspect of the disclosure, a method of operating a safety net system of an elevator system is provided. The method includes sensing in at least one direction along a plane defined along a bottom of an elevator pit for an object disposed along the plane, generating data corresponding to results of the sensing, analyzing the data and determining whether the data is indicative of a person standing in the bottom of the elevator pit based on results of the analyzing.
In accordance with additional or alternative embodiments, the determining includes an execution of a machine-learning algorithm that improves an accuracy of the determining over time.
Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed technical concept. For a better understanding of the disclosure with the advantages and the features, refer to the description and to the drawings.
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts:
In the elevator industry, multiple monitors and sensors are provided to monitor various parts and components of an elevator system. In particular, critical areas to monitor are the elevator pit, which service technicians and mechanics enter to perform maintenance and service tasks, and the pit ladder, which service technicians and mechanics use to access the elevator pit and to stand on during some operations. A cost-effective way of detecting a person, such as a service technician or a mechanic, standing in the elevator pit or on the pit ladder of an elevator system is therefore needed. Such a detection system needs to be easy to install and adjust and needs to require minimal service and maintenance. The detection system must also have high detection performance with low false positive and false negative outcomes. In addition, when a detection system is installed, it is important that there be a verification process in place to ensure the detection system is operating properly and can be trusted to detect service technicians and mechanics in hazardous locations in the elevator pit and on the pit ladder. This verification process should be simple to initiate and use and effective enough to provide installation personnel with adequate data to allow them to confidently turn over the detection system.
As will be described below, a safety net system is provided for use with an elevator system. The safety net system includes a sensor, such as a single LiDAR sensor, which is located on a back side of the pit ladder to monitor a single plane behind the pit ladder using two-dimensional (2D) sensing. The space behind the pit ladder is mandated to be free of obstructions to allow the service technician's or mechanic's foot to have adequate space. The sensed plane spans an entire length of the pit ladder approximately 50-100 mm behind the ladder rungs and across the full width of the ladder. The toe of the service technician's or mechanic's boots would be easily captured in the point cloud of the sensor (i.e., the LiDAR sensor) and data processing of the point cloud could identify the points and trigger the detection condition indicating that someone is standing on the pit ladder. Additionally or alternatively, the safety net system can include a sensor, such as a single LiDAR sensor, which is located in a corner of the elevator pit to monitor the elevator pit using a 90-degree field of view in a single 2D plane about 18-24″ above the floor. As above, a service technician's or mechanic's body would be easily captured in the point cloud of the sensor (i.e., the LiDAR sensor) and data processing of the point cloud could identify the points and trigger the detection condition indicating that someone is standing in the elevator pit. Multiple sensors for each case can be used.
In an operation of the safety net system, a learned profile is generated by analyzing statistical variations and trends in range vs. angle data of the sensor results. After this learning phase, the LiDAR sensor scans the region at an update rate (e.g., 10 scans/second) and compares its current data with the learned background data. Hit points that differ from the learned background data are deemed as potential indicators of persons. This type of algorithm is referred to as a 2D classifying approach and will trigger human detection actions based on the number of observed hit points.
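By way of illustrative, non-limiting example, the 2D classifying approach described above may be sketched as follows, where the function names, the three-sigma tolerance band and the hit-point threshold are hypothetical choices for illustration rather than features mandated by the disclosure:

```python
import numpy as np

def learn_background(scans):
    """Learn a background range profile (per angle) from a stack of
    learning-phase scans.

    scans: array of shape (n_scans, n_angles) of range readings.
    Returns the mean range per angle and a per-angle tolerance derived
    from the observed statistical variation.
    """
    scans = np.asarray(scans, dtype=float)
    mean_range = scans.mean(axis=0)
    # Tolerance band: three standard deviations, with a small floor
    # (in meters) so a noiseless learning phase still tolerates jitter
    tolerance = np.maximum(3.0 * scans.std(axis=0), 0.05)
    return mean_range, tolerance

def count_hit_points(scan, mean_range, tolerance):
    """Count angles whose current range differs from the learned
    background by more than the learned tolerance."""
    deviation = np.abs(np.asarray(scan, dtype=float) - mean_range)
    return int(np.count_nonzero(deviation > tolerance))

def detect_person(scan, mean_range, tolerance, min_hits=5):
    """Trigger the human detection action when the number of observed
    hit points reaches a threshold."""
    return count_hit_points(scan, mean_range, tolerance) >= min_hits
```

In such a sketch, the learned background absorbs fixed pit features, so only readings that deviate from it beyond the learned tolerance are counted toward the detection condition at each scan of the update rate.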
With reference to
The tension member 107 engages the machine 111, which is part of an overhead structure of the elevator system 101. The machine 111 is configured to control movement between the elevator car 103 and the counterweight 105. The position reference system 113 may be mounted on a fixed part at the top of the elevator shaft 117, such as on a support or guide rail, and may be configured to provide position signals related to a position of the elevator car 103 within the elevator shaft 117. In other embodiments, the position reference system 113 may be directly mounted to a moving component of the machine 111, or may be located in other positions and/or configurations as known in the art. The position reference system 113 can be any device or mechanism for monitoring a position of an elevator car and/or counterweight, as known in the art. For example, without limitation, the position reference system 113 can be an encoder, sensor, or other system and can include velocity sensing, absolute position sensing, etc., as will be appreciated by those of skill in the art.
The controller 115 may be located, as shown, in a controller room 121 of the elevator shaft 117 and is configured to control the operation of the elevator system 101, and particularly the elevator car 103. It is to be appreciated that the controller 115 need not be in the controller room 121 but may be in the hoistway or other location in the elevator system. For example, the controller 115 may provide drive signals to the machine 111 to control the acceleration, deceleration, leveling, stopping, etc. of the elevator car 103. The controller 115 may also be configured to receive position signals from the position reference system 113 or any other desired position reference device. When moving up or down within the elevator shaft 117 along guide rail 109, the elevator car 103 may stop at one or more landings 125 as controlled by the controller 115. Although shown in a controller room 121, those of skill in the art will appreciate that the controller 115 can be located and/or configured in other locations or positions within the elevator system 101. In one embodiment, the controller 115 may be located remotely or in a distributed computing network (e.g., cloud computing architecture). The controller 115 may be implemented using a processor-based machine, such as a personal computer, server, distributed computing network, etc.
The machine 111 may include a motor or similar driving mechanism. In accordance with embodiments of the disclosure, the machine 111 is configured to include an electrically driven motor. The power supply for the motor may be any power source, including a power grid, which, in combination with other components, is supplied to the motor. The machine 111 may include a traction sheave that imparts force to tension member 107 to move the elevator car 103 within elevator shaft 117.
The elevator system 101 also includes one or more elevator doors 104. The elevator door 104 may be integrally attached to the elevator car 103 or the elevator door 104 may be located on a landing 125 of the elevator system 101, or both. Embodiments disclosed herein may be applicable to both an elevator door 104 integrally attached to the elevator car 103 or an elevator door 104 located on a landing 125 of the elevator system 101, or both. The elevator door 104 opens to allow passengers to enter and exit the elevator car 103.
With continued reference to
With continued reference to
The processor 320 includes a processing unit, a memory and an input/output (I/O) unit by which the processor 320 is communicative with the sensor 310 and at least the controller 115 (see
In accordance with embodiments, the sensor 310 can include or be provided as one or more of a light detection and ranging or a laser imaging, detection, and ranging (LiDAR) sensor, a radio detection and ranging (RADAR) sensor and/or a camera. In accordance with further embodiments, the sensor 310 can be provided as one or more of a 2D LiDAR sensor, a millimeter wave RADAR sensor and/or a red, green, blue, depth (RGBD) camera. In accordance with still further embodiments, the sensor 310 can be provided as plural sensors including a combination of one or more sensor types listed herein.
In the exemplary case of the sensor 310 being a 2D LiDAR sensor, the sensor 310 is configured to sense the plane P as a 2D plane along an entire length L1 (see
That is, where the elevator pit ladder 205 includes rungs 2053, the object being sensed or detected can be a toe of a shoe of a person standing on one of the rungs 2053, the point cloud data 401 can include hit points 402 at which different parts of the toe of the shoe intersect the plane P, additional points 403 at which no portion of any object intersects the plane P and false points 404 at which portions of foreign objects or debris (e.g., a feather or dust floating into the plane P) intersect the plane P. The processor 320 analyzes each of the hit points 402, the additional points 403 and the false points 404. The processor 320 identifies the hit points 402 as hit points 402 from their characteristic shape and their grouping, the processor 320 identifies the additional points 403 as additional points 403 from their signal match to a baseline data set taken when the elevator pit 201 is known to be empty or, more generally, to have certain physical characteristics, and the processor 320 identifies the false points 404 as false points 404 from their characteristic shapes or lack thereof and their grouping or lack thereof. The processor 320 then distinguishes the hit points 402 from the additional points 403 and the false points 404 and determines that, when the hit points 402 of the point cloud data 401 are identified and distinguished, the hit points 402 are indicative of the toe of the shoe intersecting the plane P and thus that a person is likely to be standing on one of the rungs 2053 of the elevator pit ladder 205. The processor 320 can then communicate that finding to at least the controller 115 of the elevator system 101 so that the controller 115 can act, such as by preventing the elevator car 103 from entering the elevator pit 201.
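As an illustrative, non-limiting sketch of the point classification described above, readings that deviate from the empty-pit baseline may be grouped by angular adjacency, so that a contiguous group of deviating points (such as a shoe toe spanning several adjacent beams) is treated as hit points while an isolated deviation (such as a floating feather clipping a single beam) is treated as a false point; the tolerance and minimum group size below are hypothetical:

```python
import numpy as np

def classify_points(ranges, baseline, range_tol=0.05, min_group=3):
    """Classify per-angle range readings into 'hit', 'additional' and
    'false' points.

    Points matching the baseline data set are 'additional' points;
    deviating points are grouped by angular adjacency — runs of at
    least `min_group` contiguous deviating beams are 'hit' points,
    while isolated deviations are 'false' points.
    """
    ranges = np.asarray(ranges, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    deviating = np.abs(ranges - baseline) > range_tol
    labels = np.full(ranges.shape, "additional", dtype=object)
    idx = np.flatnonzero(deviating)
    if idx.size:
        # Split the deviating indices into runs of contiguous angles
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        for run in runs:
            labels[run] = "hit" if run.size >= min_group else "false"
    return labels
```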
Since the processor 320 can identify and distinguish the hit points 402 from the additional points 403, an incidence of false negative determinations of the safety net system 301 is reduced. Likewise, since the processor 320 can identify and distinguish the hit points 402 from the false points 404, an incidence of false positive determinations of the safety net system 301 is also reduced. When the executable instructions stored on the memory unit of the processor 320 include a machine-learning algorithm, the ability of the processor 320 to identify and distinguish the hit points 402 from the additional points 403 and the false points 404 can improve over time and the incidence of the false negative and false positive determinations of the safety net system 301 can be continually reduced over time in a corresponding manner.
With reference to
While the image processing described above relates to a single frame of points in a single scan point cloud, the processor 320 can also process successive scans to help classify points as hit points 402 versus additional points 403 or false points 404 by determining how persistent the points are and if they are moving together as one would expect in valid hit points associated with mechanics. As such, the generating of the data of block 602 could include generating data of multiple scans of point clouds, where the term “data” can relate to a continuously or semi-continuously updated set of point cloud scans. In these or other cases, the analyzing of block 603 and the determining of block 604 can include image processing and video processing.
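The persistence-based classification over successive scans may be sketched, in a non-limiting manner, as a simple filter over a rolling window of per-scan hit masks; the window size and persistence threshold below are illustrative assumptions:

```python
from collections import deque
import numpy as np

class PersistenceFilter:
    """Confirm hit points only when they persist across successive
    scans, as would be expected of valid hit points associated with a
    person, rather than transient debris."""

    def __init__(self, window=5, min_persist=4):
        self.history = deque(maxlen=window)
        self.min_persist = min_persist

    def update(self, hit_mask):
        """hit_mask: boolean array (n_angles,) of per-scan hits.
        Returns a boolean array of hits confirmed over the window."""
        self.history.append(np.asarray(hit_mask, dtype=bool))
        counts = np.sum(np.stack(list(self.history)), axis=0)
        return counts >= self.min_persist
```

A persistent cluster of hit points that also translates coherently from scan to scan would further support classification as a valid detection.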
With reference back to
The processor 720 includes a processing unit, a memory and an input/output (I/O) unit by which the processor 720 is communicative with the sensor 710 and at least the controller 115 (see
In accordance with embodiments, the sensor 710 can include or be provided as one or more of a light detection and ranging or a laser imaging, detection, and ranging (LiDAR) sensor, a radio detection and ranging (RADAR) sensor and/or a camera. In accordance with further embodiments, the sensor 710 can be provided as one or more of a 2D LiDAR sensor, a millimeter wave RADAR sensor and/or a red, green, blue, depth (RGBD) camera. In accordance with still further embodiments, the sensor 710 can be provided as plural sensors including a combination of one or more sensor types listed herein. A description of plural sensors will be provided below.
In the exemplary case of the sensor 710 being a 2D LiDAR sensor, the sensor 710 is disposed in a corner 2011 of the elevator pit 201 and is configured to sense the plane P′ as a 2D plane extending away from the corner 2011 along a substantial portion of the area of the bottom of the elevator pit 201. The plane P′ can be about 18-24″ above the base 202. In these or other cases, the sensor 710 is configured to generate the data as point cloud data 730 using a single scan for image processing, multiple scans for image processing and/or multiple successive or continuous scans for video processing and the processor 720 is configured to analyze the point cloud data 730 and to determine whether the point cloud data 730 is indicative of the person in the elevator pit 201.
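By way of a non-limiting example, a corner-mounted 2D scan over a 90-degree field of view may be converted into Cartesian points, with the sensor at the corner as the origin and the field of view spanning the two adjacent walls, for subsequent point cloud processing; the uniform beam indexing below is an assumption:

```python
import math

def scan_to_points(ranges, fov_deg=90.0):
    """Convert a corner-mounted 2D scan (one range per beam) into
    Cartesian (x, y) points in the plane of the scan, with beam 0
    along one wall and the last beam along the adjacent wall."""
    n = len(ranges)
    points = []
    for i, r in enumerate(ranges):
        theta = math.radians(fov_deg * i / (n - 1))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```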
That is, the object being sensed or detected can be a person in the elevator pit 201 and the point cloud data 730 can include hit points 731 at which different parts of the person intersect the plane P′, additional points 732 at which no portion of the person or other object intersects the plane P′ and false points 733 at which portions of foreign objects or debris (e.g., a feather or dust floating into the plane P′) intersect the plane P′. The processor 720 analyzes each of the hit points 731, the additional points 732 and the false points 733. The processor 720 identifies the hit points 731 as hit points 731 from their characteristic shape and their grouping, the processor 720 identifies the additional points 732 as additional points 732 from their signal match to a baseline data set taken when the elevator pit 201 is known to be empty or, more generally, to have certain physical characteristics, and the processor 720 identifies the false points 733 as false points 733 from their characteristic shapes or lack thereof and their grouping or lack thereof. The processor 720 then distinguishes the hit points 731 from the additional points 732 and the false points 733 and determines that, when the hit points 731 of the point cloud data 730 are identified and distinguished, the hit points 731 are indicative of the portion of the person intersecting the plane P′ and thus that a person is likely to be standing in the elevator pit 201. The processor 720 can then communicate that finding to at least the controller 115 of the elevator system 101 so that the controller 115 can act, such as by preventing the elevator car 103 from entering the elevator pit 201, to avoid an unsafe condition.
Since the processor 720 can identify and distinguish the hit points 731 from the additional points 732, an incidence of false negative determinations of the safety net system 701 is reduced. Likewise, since the processor 720 can identify and distinguish the hit points 731 from the false points 733, an incidence of false positive determinations of the safety net system 701 is also reduced. When the executable instructions stored on the memory unit of the processor 720 include a machine-learning algorithm, the ability of the processor 720 to identify and distinguish the hit points 731 from the additional points 732 and the false points 733 can improve over time and the incidence of the false negative and false positive determinations of the safety net system 701 can be continually reduced over time in a corresponding manner.
With reference to
With reference to
While the image processing described above relates to a single frame of points in a single scan point cloud, the processor 720 can also process successive scans to help classify points as hit points 731 versus additional points 732 or false points 733 by determining how persistent the points are and if they are moving together as one would expect in valid hit points associated with mechanics. As such, the generating of the data of block 1002 could include generating data of multiple scans of point clouds, where the term “data” can relate to a continuously or semi-continuously updated set of point cloud scans. In these or other cases, the analyzing of block 1003 and the determining of block 1004 can include image processing and video processing.
While the embodiments of
With reference to
After setup, the sensor 710 learns an ambient background in the elevator pit 201 by scanning for a predefined time (e.g., for about 30 seconds) and with various elevator car positions. A learned profile is then generated by the processor 720 through an analysis of statistical variations and trends in range vs. angle data as shown in
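The learning phase over various elevator car positions may be sketched, in a non-limiting manner, as building one background profile per scanned car position; the dictionary keying, tolerance band and function names are illustrative assumptions:

```python
import numpy as np

def learn_backgrounds(scans_by_car_position):
    """Learn an ambient-background profile (range vs. angle) for each
    elevator car position scanned during the learning phase.

    scans_by_car_position: mapping of car position label to an array
    of shape (n_scans, n_angles) of range readings.
    """
    profiles = {}
    for position, scans in scans_by_car_position.items():
        scans = np.asarray(scans, dtype=float)
        profiles[position] = {
            "mean_range": scans.mean(axis=0),
            # Three-sigma tolerance with a small floor (meters)
            "tolerance": np.maximum(3.0 * scans.std(axis=0), 0.05),
        }
    return profiles

def deviating_points(scan, profile):
    """Indices of points in a live scan that differ from the learned
    background and are thus deemed potential indicators of a person."""
    dev = np.abs(np.asarray(scan, dtype=float) - profile["mean_range"])
    return np.flatnonzero(dev > profile["tolerance"])
```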
The 2D classifying approach can be re-executed periodically or in response to an external event. The periodic re-executions allow for changes in the elevator system 101 over time to be accounted for (e.g., degradation of or damage to components, changes in components, etc.). The re-executions in response to an external event can be executed as needed, such as when the sensor 710 is bumped or moved and needs to be recalibrated.
With continued reference to
With reference to
The variance of multiple collected point clouds for a learning phase (for example, at one vertical car position) could generate a range of acceptance criteria. Examples include: a magnitude of the average variation across all angles in the field of view, a worst-case magnitude variation observed at any angle within the field of view, a drift or variation in point cloud range values at any angle that trends over the scanned learning phase of observed range values or a variation in point cloud signatures that could be traced to rotational variations of the sensor 710 during the learning phase.
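The example acceptance criteria above may be computed from the learning-phase point clouds as in the following non-limiting sketch for one vertical car position, where the metric names are hypothetical:

```python
import numpy as np

def learning_acceptance_metrics(scans):
    """Compute example acceptance metrics from the point clouds
    collected during a learning phase at one car position.

    scans: array of shape (n_scans, n_angles) of range readings.
    Returns the average variation across all angles, the worst-case
    variation at any angle, and a drift estimate (the slope of the
    per-scan mean range over the learning phase).
    """
    scans = np.asarray(scans, dtype=float)
    per_angle_std = scans.std(axis=0)
    per_scan_mean = scans.mean(axis=1)
    # Drift: linear trend of the per-scan mean range over time
    drift = np.polyfit(np.arange(len(scans)), per_scan_mean, 1)[0]
    return {
        "avg_variation": float(per_angle_std.mean()),
        "worst_variation": float(per_angle_std.max()),
        "drift_per_scan": float(drift),
    }
```

Each metric could then be compared against its own acceptance limit to decide whether the learning phase was successful.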
As used herein, the term “variance” can be a discriminator for successful learning where there can be two types of data metrics useful for determining whether the learning phase was successful. These include a difference or error between learned results and a pre-determined idea of what is expected, such as an area of a learned background or noted items/objects in the sensor's field of view, and an observed variation in collected data as seen in successive scans which are not linked to any pre-determined idea of what was expected.
The operational methods associated with the graphs of
With reference to
In accordance with embodiments, the executing of the learning phase of block 1402 can be commanded via a display unit, which is communicatively coupled with the sensor, and the verifying of the successful installation of the sensor of block 1405 can include displaying an indication on the display unit.
The verifying of the successful installation of the sensor of block 1405 includes determining whether the background reading matches the reading associated with the known physical characteristics to a predefined degree (block 14051) and verifying the successful installation of the sensor in an event the background reading matches the reading associated with the known physical characteristics to the predefined degree (block 14052). Where the known physical characteristics are an area of the portion of the elevator pit, the predefined degree can be a relatively small percentage (e.g., less than about 1-5%) difference between the background reading and the area of the portion of the elevator pit. As shown in
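As a non-limiting illustration of the area-based determination of block 14051 and verification of block 14052, the area swept by a corner scan may be approximated by summing the triangular sectors between adjacent beams and compared with the known pit area; the 5% tolerance below is an assumed value within the stated range:

```python
import math

def scanned_area(ranges, fov_deg=90.0):
    """Approximate the area swept by a 2D corner scan by summing the
    triangular sectors between adjacent beams."""
    step = math.radians(fov_deg / (len(ranges) - 1))
    area = 0.0
    for r1, r2 in zip(ranges, ranges[1:]):
        area += 0.5 * r1 * r2 * math.sin(step)
    return area

def verify_installation(ranges, expected_area, tol=0.05, fov_deg=90.0):
    """Verify installation when the background reading matches the
    known area of the pit portion to within the predefined degree."""
    measured = scanned_area(ranges, fov_deg)
    return abs(measured - expected_area) / expected_area <= tol
```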
With reference to
In accordance with embodiments, the executing of the learning phase of block 1502 can be commanded via a display unit, which is communicatively coupled with the sensor, and the verifying of the successful installation of the sensor of block 1505 can include displaying an indication on the display unit. The verifying of the successful installation of the sensor of block 1505 includes calculating a variance between the background signal and the signal associated with the known physical characteristics (block 15051), determining whether the variance is less than a predefined limit (block 15052) and verifying the successful installation of the sensor in an event the variance is less than the predefined limit (block 15053). The predefined limit can be some relatively small percentage of variance (e.g., about 1-5%). As shown in
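The variance-based verification of blocks 15051-15053 may be sketched, in a non-limiting manner, as a normalized comparison between the learned background signal and the signal associated with the known physical characteristics; the mean relative error used as the "variance" measure and the 5% limit are illustrative assumptions:

```python
import numpy as np

def verify_by_variance(background, expected, limit=0.05):
    """Calculate a normalized variance between the learned background
    signal and the expected signal (block 15051), and verify the
    installation when it is below the predefined limit (blocks
    15052-15053)."""
    background = np.asarray(background, dtype=float)
    expected = np.asarray(expected, dtype=float)
    rel_error = np.abs(background - expected) / np.abs(expected)
    return float(rel_error.mean()) <= limit
```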
With reference to
Technical effects and benefits of the present disclosure are the provision of a safety net system for an elevator system that uses a low-cost sensor, such as a LiDAR sensor, to cover a single-angle field of view (azimuth only, no need for elevation angle) in a 2D mode. Data processing and a detection determination are accomplished by a simple yet robust algorithm that could easily be remote or provided on the sensor itself.
The corresponding structures, materials, acts and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the technical concepts in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiments were chosen and described in order to best explain the principles of the disclosure and the practical application and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
While the preferred embodiments to the disclosure have been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the disclosure first described.