The present disclosure relates to a warning device, a warning system, a warning method and a warning program.
There has been proposed a device including an area information acquisition unit that acquires image information provided from an image capturing device, a storage unit that stores detection information as information for detection, a detection unit that detects a predetermined situation in a predetermined area based on the image information and the detection information, a notification unit that notifies of the detection of the predetermined situation when the predetermined situation is detected, an inference unit that infers a cause of the predetermined situation, and an update unit that updates the detection information when the detection information does not include information indicating the cause of the predetermined situation (see Patent Reference 1, for example). This device detects the predetermined situation based on the image information and the detection information stored in the storage unit and issues a warning to a person.
However, the above-described conventional device merely anticipates the occurrence of contact between a person and an object, and thus has a problem in that the reliability of the warnings is lowered by the frequent issuance of erroneous warnings in a room, a construction site, or the like in which objects are placed in disorder.
An object of the present disclosure is to provide a warning device, a warning system, a warning method and a warning program that make it possible to issue warnings with high reliability.
A warning device in the present disclosure includes processing circuitry to estimate movement information including a position, a moving direction and a moving speed of an object person based on a first detection signal outputted from a vicinity sensor that performs sensing in regard to a vicinity of the object person; to estimate a movement anticipation area, as an area through which the object person is anticipated to move, based on the movement information; to provide vicinal situation information including a position of an obstacle, based on at least one of the first detection signal and previously acquired map information; to estimate condition of the object person based on a second detection signal outputted from a person sensor that senses at least one of the condition and voice of the object person; and to make a judgment on whether the obstacle is a warning object or not based on the vicinal situation information, the movement anticipation area, and the condition of the object person, and to output a warning signal when the obstacle is a warning object. The processing circuitry adjusts the movement anticipation area based on the condition of the object person, the processing circuitry estimates a wobble level representing magnitude of a wobble of the object person when walking, based on the second detection signal, and the processing circuitry widens the movement anticipation area with an increase in the wobble level.
A warning method in the present disclosure is a method to be executed by a warning device. The warning method includes estimating movement information including a position, a moving direction and a moving speed of an object person based on a first detection signal outputted from a vicinity sensor that performs sensing in regard to a vicinity of the object person; estimating a movement anticipation area, as an area through which the object person is anticipated to move, based on the movement information; providing vicinal situation information including a position of an obstacle, based on at least one of the first detection signal and previously acquired map information; estimating condition of the object person based on a second detection signal outputted from a person sensor that senses at least one of the condition and voice of the object person; and making a judgment on whether the obstacle is a warning object or not based on the vicinal situation information, the movement anticipation area, and the condition of the object person, and outputting a warning signal when the obstacle is a warning object. The movement anticipation area is adjusted based on the condition of the object person, when estimating the movement anticipation area. A wobble level representing magnitude of a wobble of the object person when walking is estimated based on the second detection signal, when estimating the condition of the object person. The movement anticipation area is widened with an increase in the wobble level, when estimating the movement anticipation area.
According to the present disclosure, warnings with high reliability can be issued.
The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings, which are given by way of illustration only and thus are not limitative of the present invention.
A warning device, a warning system, a warning method and a warning program according to each embodiment will be described below with reference to the diagrams. The following embodiments are just examples and it is possible to appropriately combine embodiments and appropriately modify each embodiment.
The warning system 10 includes the warning device 1, a vicinity sensor 101, a person sensor 111 and a warning issuance device 117. The warning system 10 may include an actuator 118 that acts on an object person's body. The warning system 10 issues a highly reliable warning to the object person, that is, a person walking in an object area such as a factory facility or a construction site, in order to prevent a fall or a drop.
The vicinity sensor 101 is a device that performs sensing in regard to the vicinity of the object person. The vicinity sensor 101 is, for example, a LiDAR (Light Detection and Ranging), a camera, or a combination of a LiDAR and a camera. The vicinity sensor 101 may include a microphone as a sound collection unit. The vicinity sensor 101 is arranged at a position where the surrounding environment of the object person can be sensed, and is desirably placed at a position from which the whole of the vicinity of the object person can be looked over, such as on the parietal region of the object person, on the neck, or on both of the parietal region and the neck.
The person sensor 111 is a device that performs sensing of movement of the object person. The person sensor 111 is, for example, a wearable sensor attached to the object person. The person sensor 111 can include, for example, a sensor (e.g., six-axis acceleration sensor) that detects the movement and acceleration of a person's parietal region, waist, foot or the like. The person sensor 111 can include a device (e.g., device in the shape of eyeglasses) that detects a person's line of sight by photographing the person's eyes. The person sensor 111 can include, for example, a microphone that is set in the vicinity of the mouth of the object person and detects voice of the object person.
The warning issuance device 117 notifies the object person of a warning based on a received warning signal, by one or more of the following means: sound, light (e.g., lighting up of a lamp), display (e.g., on a display device) and vibration. The warning issuance device 117 may include a vibrating device provided on a helmet worn by the object person.
The actuator 118 is, for example, an exoskeleton device or a muscle stimulation device that is attached to the object person's body and regulates the movement of the object person's body. The actuator 118 is a device that applies force or stimulation to the object person in order to separate the person from an obstacle.
The person position estimation unit 105 estimates movement information D5 including a position, a moving direction and a moving speed of the object person, based on a first detection signal D1 outputted from the vicinity sensor 101 sensing the vicinity of the object person. The person position estimation unit 105 estimates the position of the object person in a vicinal area map by analyzing three-dimensional point cloud information obtained by the LiDAR, image information obtained by the camera, or both of the three-dimensional point cloud information and the image information, and by using a method such as SLAM (Simultaneous Localization and Mapping). It is also possible for the person position estimation unit 105 to calculate the moving direction and the moving speed of the object person based on the change in the position of the object person in a time series.
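By way of illustration only, the calculation of the moving direction and the moving speed from a time series of position estimates can be sketched as follows in Python; the function name, the two-dimensional map frame and the sampling interval are assumptions of this sketch, not part of the disclosure.

```python
import math

def estimate_motion(prev_pos, curr_pos, dt):
    """Derive a moving direction (rad) and a moving speed (m/s) from two
    successive position estimates (x, y) spaced dt seconds apart."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    direction = math.atan2(dy, dx)   # heading in the map frame
    speed = math.hypot(dx, dy) / dt  # displacement per unit time
    return direction, speed

# Example: the object person moved 0.6 m along the x axis in 0.5 s.
direction, speed = estimate_motion((0.0, 0.0), (0.6, 0.0), 0.5)
print(direction, speed)  # 0.0 (rad), 1.2 (m/s)
```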
The movement anticipation unit 114 estimates a movement anticipation area D14, as an area through which the object person is anticipated to move, based on the movement information D5.
The vicinal situation provision unit 102 provides vicinal situation information D2 including the position of an obstacle (e.g., a step such as a level difference, a hole, a slope or the like), based on at least one of the first detection signal D1 outputted from the vicinity sensor 101 and previously acquired and stored map information. The vicinal situation provision unit 102 detects a warning object, that is, an obstacle (e.g., a step, a hole, a slope or the like) that can cause the object person to fall or drop, by analyzing the three-dimensional point cloud information obtained by the LiDAR of the vicinity sensor 101, the image information obtained by the camera of the vicinity sensor 101, or both of the three-dimensional point cloud information and the image information, for example. It is also possible to use the previously acquired map information as information indicating an obstacle that can cause the object person to fall or drop.
The condition estimation unit 201 estimates condition of the object person based on a second detection signal D11 outputted from the person sensor 111 that senses at least one of the condition and the voice of the object person.
The risk estimation unit 116 makes a judgment on whether the obstacle is a warning object or not based on the vicinal situation information D2, the movement anticipation area D14, and the condition of the object person outputted from the condition estimation unit 201, and outputs a warning signal D16 when the obstacle is a warning object.
The movement anticipation unit 114 adjusts the movement anticipation area D14 based on the condition of the object person outputted from the condition estimation unit 201.
In the first embodiment, the walking estimation unit 112 of the condition estimation unit 201 estimates a wobble level D12a representing magnitude of a wobble of the object person when walking, based on the second detection signal D11. For example, the walking estimation unit 112 can compare the behavior of human body parts obtained by the acceleration sensor with normal-time behavior information held by the walking estimation unit 112 itself and calculate the wobble level based on the magnitude of the difference between the observed behavior and the normal-time behavior information. It is desirable for the movement anticipation unit 114 to widen the movement anticipation area D14 with an increase in the wobble level D12a.
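By way of illustration only, a minimal sketch of such wobble-based widening is given below; the RMS-difference formulation and the linear widening gain are assumptions of the sketch, since the disclosure only states that the difference from normal-time behavior is used and that the area is widened with an increase in the wobble level.

```python
import numpy as np

def wobble_level(accel_window, baseline_window):
    """Wobble level D12a sketched as the root-mean-square difference
    between the observed acceleration pattern of a body part and the
    stored normal-time behavior (the RMS formulation is an assumption)."""
    diff = np.asarray(accel_window, float) - np.asarray(baseline_window, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def widen_area(base_width, wobble, gain=0.5):
    """Widen the movement anticipation area D14 with an increase in the
    wobble level; the linear gain is an illustrative assumption."""
    return base_width * (1.0 + gain * wobble)

# Example usage with a short acceleration window (m/s^2).
w = wobble_level([0.2, -0.3, 0.5], [0.0, 0.0, 0.0])
print(widen_area(1.0, w))  # widened width of the anticipation area (m)
```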
In the first embodiment, the walking estimation unit 112 estimates a foot elevation level D12b of the object person when walking, based on the second detection signal D11. For example, the walking estimation unit 112 calculates the foot elevation level when walking based on an output signal from the acceleration sensor attached to a foot of the object person as the person sensor 111. It is desirable for the risk estimation unit 116 to make the judgment on whether the obstacle is a warning object or not based on the vicinal situation information D2, the movement anticipation area D14, and the foot elevation level D12b of the object person.
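One conceivable way to take the foot elevation level into account, shown here as an assumption rather than as the disclosed method, is to treat a step as a warning object when its height is not cleared by the foot elevation level:

```python
def is_step_warning_object(step_height, foot_elevation, margin=0.02):
    """Judge a step to be a warning object when the foot elevation level
    D12b does not clear the step height. The comparison rule and the
    2 cm margin are assumptions; the disclosure only states that D12b
    is taken into account in the judgment."""
    return foot_elevation < step_height + margin

# A 0.10 m step with the feet raised only 0.08 m: warning object.
print(is_step_warning_object(0.10, 0.08))  # True
```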
The walking estimation unit 112 may execute only one of the estimation of the wobble level D12a of the object person when walking and the estimation of the foot elevation level D12b of the object person when walking.
The sight line estimation unit 113 estimates a fixation point D13 indicating a position at which the object person is gazing, based on the second detection signal D11 outputted from the person sensor 111. For example, the sight line estimation unit 113 calculates the fixation point, that is, the position at which the object person is currently gazing, based on the result of the sight line detection by the person sensor 111. It is desirable for the movement anticipation unit 114 to narrow the movement anticipation area D14 with a decrease in the degree of spreading of the distribution of the fixation points D13. Further, the risk estimation unit 116 can regard an obstacle overlapping with the fixation point D13 as not being a warning object when making the judgment on whether the obstacle is a warning object or not; that is, an obstacle situated at the same position as the fixation point D13 can be regarded as not being a warning object.
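By way of illustration only, the narrowing of the area with the spread of the fixation points and the fixation-overlap test can be sketched as follows; the spread measure, the linear scaling and the overlap radius are assumptions of this sketch.

```python
import numpy as np

def fixation_spread(fixation_points):
    """Degree of spreading of the fixation-point distribution D13,
    sketched as the mean distance from the centroid (an assumption)."""
    pts = np.asarray(fixation_points, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.linalg.norm(pts - centroid, axis=1).mean())

def narrow_area(base_width, spread, reference_spread=0.5):
    """Narrow the movement anticipation area D14 as the spread of the
    fixation points decreases; the linear scaling is an assumption."""
    return base_width * min(1.0, spread / reference_spread)

def overlaps_fixation(obstacle_pos, fixation_point, radius=0.3):
    """Regard an obstacle overlapping with the fixation point as already
    recognized; the 0.3 m overlap radius is an assumption."""
    d = np.linalg.norm(np.asarray(obstacle_pos, float) - np.asarray(fixation_point, float))
    return bool(d <= radius)

# A narrowly clustered gaze shrinks a 1.0 m-wide area.
print(narrow_area(1.0, fixation_spread([(2.0, 1.0), (2.1, 1.0), (2.0, 1.1)])))
```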
The emotion estimation unit 115 estimates an emotion level D15 indicating the degree of excitement of the object person, based on a voice signal according to the voice of the object person in the second detection signal D11 outputted from the person sensor 111. For example, the emotion estimation unit 115 calculates the present emotional condition (e.g., an emotion level such as an anger level or an impatience level) of the object person based on the voice obtained by the microphone of the person sensor 111. The emotion level can be estimated based on the volume of the voice of the person, the pitch (frequency) of the voice, or the like. It is desirable for the risk estimation unit 116 to regard all obstacles in the movement anticipation area D14 as warning objects when the emotion level D15 exceeds a previously set threshold level. For example, when the emotion level D15 exceeds the previously set threshold level, the risk estimation unit 116 may regard all obstacles in the movement anticipation area D14, including an obstacle overlapping with the fixation point D13, as warning objects.
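By way of illustration only, a volume-and-pitch formulation of the emotion level might look as follows; the weights, the FFT-peak pitch estimator and the threshold value are assumptions of this sketch.

```python
import numpy as np

def emotion_level(voice_samples, sample_rate=16000):
    """Emotion level D15 sketched as a weighted sum of voice volume
    (RMS) and pitch (dominant FFT frequency). The weights, the pitch
    estimator and the scaling are assumptions; the disclosure only
    states that volume and pitch can be used."""
    x = np.asarray(voice_samples, dtype=float)
    volume = float(np.sqrt(np.mean(x ** 2)))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    pitch = float(freqs[int(np.argmax(spectrum[1:])) + 1])  # skip the DC bin
    return 0.7 * volume + 0.3 * (pitch / 1000.0)

EMOTION_THRESHOLD = 0.5  # previously set threshold level (assumed value)

# A loud 300 Hz tone standing in for an agitated voice.
t = np.arange(16000) / 16000.0
print(emotion_level(0.9 * np.sin(2 * np.pi * 300 * t)) > EMOTION_THRESHOLD)  # True
```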
The noise detection unit 103 calculates the level of noise in the vicinity by detecting a noise signal D3, obtained by the microphone, in the second detection signal D11 outputted from the person sensor 111. The risk estimation unit 116 is capable of determining the warning objects based on the vicinal situation information D2, the movement anticipation area D14, the condition of the object person outputted from the condition estimation unit 201, and the noise level.
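By way of illustration only, the noise level can be computed from the noise signal as follows; the decibel formulation and the threshold value are assumptions of this sketch.

```python
import numpy as np

def noise_level_db(noise_samples, reference=1.0):
    """Level of the noise signal D3 in decibels relative to a reference
    amplitude; the dB formulation is an illustrative assumption."""
    rms = np.sqrt(np.mean(np.asarray(noise_samples, dtype=float) ** 2))
    return float(20.0 * np.log10(max(rms, 1e-12) / reference))

NOISE_SET_VALUE = -20.0  # previously set threshold (assumed, in dB re. full scale)
print(noise_level_db([0.5, -0.5, 0.5, -0.5]) > NOISE_SET_VALUE)  # True
```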
The free space estimation unit 104 estimates a free space representing a region in which the object person can move, based on the first detection signal D1 outputted from the vicinity sensor 101. That is, the free space estimation unit 104 calculates the free space, as a movable region in which the person can move by walking, by detecting a flat floor, a step the person can walk up and down, and so forth based on the three-dimensional point cloud information obtained by the LiDAR of the vicinity sensor 101, the image information obtained by the camera of the vicinity sensor 101, or both of the three-dimensional point cloud information and the image information. The movement anticipation unit 114 is capable of adjusting the movement anticipation area D14 based on the free space.
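By way of illustration only, a grid-based sketch of the free space estimation is given below; the grid size and the walkable height threshold are assumptions, since the disclosure only states that a flat floor and steps the person can walk up and down are detected.

```python
def free_space_cells(points, cell=0.25, walkable_height=0.2):
    """Estimate the free space as grid cells whose highest point is low
    enough to stand on (a flat floor or a step the person can walk up
    and down). The grid size and height threshold are assumptions."""
    heights = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        heights[key] = max(heights.get(key, float("-inf")), z)
    return {k for k, top in heights.items() if top <= walkable_height}

# Floor points plus one 0.9 m-high obstacle: only the floor cells are free.
cloud = [(0.1, 0.1, 0.0), (0.6, 0.1, 0.02), (0.6, 0.6, 0.9)]
print(free_space_cells(cloud))  # {(0, 0), (2, 0)}
```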
The movement anticipation unit 114 calculates the movement anticipation area D14, which indicates anticipated positions of the object person in the future, based on the movement information D5 including the position, the moving direction and the moving speed of the object person acquired from the person position estimation unit 105. Errors in the anticipated positions become large (a wide-range distribution) when the wobble level of the object person is high, and become small (a narrow-range distribution) when the wobble level is low. On the assumption that the person moves in the direction of the gaze, the movement anticipation unit 114 also uses distribution information regarding the fixation points D13 acquired from the sight line estimation unit 113: the errors in the anticipated positions are large if the distribution of the fixation points D13 is wide, and small if the distribution is narrow. Further, when information on the person's movable region based on the free space D4 outputted from the free space estimation unit 104 indicates a shape like a passage, it can be inferred that the person moves along the passage, and thus it is desirable for the movement anticipation unit 114 to correct the result of the movement anticipation accordingly.
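By way of illustration only, the widening of the anticipated-position errors with the wobble level and the fixation-point spread can be sketched as follows; the circular error model and its coefficients are assumptions of this sketch.

```python
import math

def anticipate_area(pos, direction, speed, horizon, dt, wobble, gaze_spread):
    """Movement anticipation area D14 sketched as circles around the
    anticipated positions, with an error radius that grows over time,
    with the wobble level D12a and with the spread of the fixation
    points D13. The concrete error model is an assumption; the
    disclosure only states that the errors widen with a high wobble
    level and a wide fixation-point distribution."""
    area = []
    for i in range(1, int(horizon / dt) + 1):
        t = i * dt
        x = pos[0] + speed * t * math.cos(direction)
        y = pos[1] + speed * t * math.sin(direction)
        radius = 0.2 + (0.3 * wobble + 0.2 * gaze_spread) * t
        area.append(((x, y), radius))
    return area

# One second ahead at 1.2 m/s, heading along the x axis.
for center, r in anticipate_area((0.0, 0.0), 0.0, 1.2, 1.0, 0.5, 0.4, 0.3):
    print(center, round(r, 2))
```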
The risk estimation unit 116 judges whether or not the person's movement anticipation area D14 acquired from the movement anticipation unit 114 overlaps with an obstacle in the vicinity obtained by the vicinal situation provision unit 102. When an obstacle such as a step or a hole overlaps with the fixation point acquired from the sight line estimation unit 113, the object person has already recognized the obstacle, and thus the risk estimation unit 116 excludes the visually recognized obstacle from the warning objects. That an obstacle has already been visually recognized means, for example, that the fixation point has overlapped with the same obstacle for a period longer than or equal to a predetermined set time. Overlapping here can mean not only totally overlapping but also partially overlapping.
Further, the risk estimation unit 116 may refer to the noise in the vicinity obtained by the noise detection unit 103 in the judgment on whether the obstacle is a warning object or not. It has generally been known that attentiveness in a person's visual field deteriorates (e.g., the effective visual field becomes narrower) in an environment with loud noise. When the noise level obtained based on the noise signal D3 outputted from the noise detection unit 103 exceeds a noise set value as a previously set threshold value, the risk estimation unit 116 may regard not only obstacles in the central visual field of the object person but also obstacles existing in the peripheral visual field of the object person as the warning objects irrespective of the position of the fixation point.
Furthermore, the risk estimation unit 116 may refer to the emotion level D15 outputted from the emotion estimation unit 115 in the judgment on whether the obstacle is a warning object or not. It has been known that the attentiveness and cognitive ability deteriorate, in comparison with those in normal times, when the person's emotional condition is anger, impatience or the like. Further, the person's emotion level can be estimated based on the voice of the person. Thus, the risk estimation unit 116 may regard obstacles in the movement anticipation area as warning objects irrespective of the position of the fixation point when the emotion level D15 exceeds the previously set threshold level.
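By way of illustration only, the judgment logic described in the three preceding paragraphs can be combined into one sketch; all threshold values are assumptions.

```python
import math

def judge_warning_objects(obstacles, area, fixations, emotion_level,
                          noise_db, emotion_threshold=0.5,
                          noise_set_value=-20.0, gaze_radius=0.3):
    """Combined judgment sketched from the behavior described above: an
    obstacle overlapping the movement anticipation area is a warning
    object unless it has already been visually recognized (it overlaps
    a fixation point), and the visual-recognition exception is dropped
    when the emotion level D15 exceeds its threshold or the noise level
    exceeds the noise set value. All thresholds are assumed values."""
    def in_area(pos):
        return any(math.dist(pos, c) <= r for c, r in area)
    def gazed_at(pos):
        return any(math.dist(pos, f) <= gaze_radius for f in fixations)
    ignore_gaze = emotion_level > emotion_threshold or noise_db > noise_set_value
    return [o for o in obstacles if in_area(o) and (ignore_gaze or not gazed_at(o))]

# A gazed-at step is excluded while the person is calm and the site is quiet.
area = [((1.0, 0.0), 0.5)]
print(judge_warning_objects([(1.2, 0.0)], area, [(1.2, 0.0)], 0.1, -30.0))  # []
```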
The functions of the warning device 1 may be implemented by processing circuitry. The processing circuitry can be either dedicated hardware or the processor 15 that executes the warning program stored in the memory 16. The processor 15 can be any one of a processing device, an arithmetic device, a microprocessor, a microcomputer and a DSP (Digital Signal Processor).
In the case where the processing circuitry is dedicated hardware, the processing circuitry is an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or the like, for example.
It is also possible to implement part of the warning device 1 by dedicated hardware and other part by software or firmware. As above, the processing circuitry is capable of implementing the above-described functions by hardware, software, firmware or a combination of some of these means.
Subsequently, the movement anticipation unit 114 adjusts the movement anticipation area based on the wobble level D12a (step S104).
Subsequently, the movement anticipation unit 114 receives the fixation points D13 from the sight line estimation unit 113 (step S105) and adjusts the movement anticipation area based on the range and width of the distribution of the fixation points, the position of the fixation point, and the wobble level D12a.
Subsequently, the movement anticipation unit 114 receives the free space D4 representing the movable region of the person from the free space estimation unit 104 (step S107) and adjusts the movement anticipation area based on the movable region (step S108).
The risk estimation unit 116 receives the movement anticipation area D14 from the movement anticipation unit 114 (step S201), receives the vicinal situation information D2 from the vicinal situation provision unit 102 (step S202), receives the foot elevation level D12b, the fixation point D13 and the emotion level D15 as the condition of the object person from the condition estimation unit 201 (steps S203-S205), and receives the noise level from the noise detection unit 103 (step S206). If the emotion level is less than or equal to the predetermined threshold level and the noise level is less than or equal to the noise set value (NO in step S207 and NO in step S209), the risk estimation unit 116 regards, as the warning objects, obstacles that exist in the movement anticipation area and have not been visually recognized.
If the emotion level D15 exceeds the predetermined threshold level (YES in step S207), the risk estimation unit 116 regards the obstacles existing in the movement anticipation area as the warning objects.
If the noise represented by the noise signal D3 exceeds the predetermined noise set value (YES in step S209), the risk estimation unit 116 regards the obstacles existing in the movement anticipation area as the warning objects.
As described above, according to the first embodiment, the free space D4 representing the movable region, the wobble level D12a, and the fixation point D13 are used in addition to information such as the position and the moving speed of the person, by which the accuracy of the anticipation of the person's movement, and hence of the movement anticipation area, can be increased.
Further, according to the first embodiment, the condition of the person (e.g., the foot elevation level D12b, the fixation point D13 and the emotion level D15) and the noise level are used for the judgment on whether the obstacle is a warning object or not, and thus obstacles as the warning objects can be determined appropriately and the occurrence of excessively frequent warnings can be avoided.
1-7: warning device, 10, 20, 30, 40, 50, 60, 70: warning system, 101: vicinity sensor, 102: vicinal situation provision unit, 103: noise detection unit, 104: free space estimation unit, 105: person position estimation unit, 111: person sensor, 112: walking estimation unit, 113: sight line estimation unit, 114: movement anticipation unit, 115: emotion estimation unit, 116: risk estimation unit, 117: warning issuance device, 118: actuator, 201-207: condition estimation unit.
This application is a continuation application of International Application No. PCT/JP2022/017113 having an international filing date of Apr. 5, 2022.
Parent application: PCT/JP2022/017113, filed Apr. 2022 (WO). Child application: U.S. application No. 18890161.