The present invention relates to a detection method, a detection system, and a program for detecting a situation of a person having an accessory.
Persons with physical disabilities may use various tools for assisting their motions in their daily life. For example, a visually impaired person uses a white cane as a walking assisting tool. A white cane is used by a visually impaired person for checking safety at the tip of the cane, for collecting information required for walking, for letting those around know that he/she is a visually impaired person, and the like.
Meanwhile, a visually impaired person may be involved in a contact accident or a traffic accident. In particular, a white cane of a visually impaired person may come into contact with another person, a car, a bicycle, a train, or the like and thereby be involved in an accident. Therefore, when a white cane is used by a visually impaired person, there are many cases in which the person does not hold the grip firmly in order not to be involved in an accident. As a result, the white cane may become separated from the hand of the visually impaired person.
Patent Literature 1: JP 2019-16348 A
Non-Patent Literature 1: Yadong Pan and Shoji Nishimura, “Multi-Person Pose Estimation with Mid-Points for Human Detection under Real-World Surveillance”, The 5th Asian Conference on Pattern Recognition (ACPR 2019), 26-29 Nov. 2019
However, once a white cane is separated from the hand of a visually impaired person, it is difficult for the person to find and pick it up thereafter. Accordingly, it is necessary to detect that the white cane is separated from the hand of the visually impaired person and that the visually impaired person is looking for the white cane, and to notify those around the person and assist the person.
In Patent Literature 1, a sensor for detecting acceleration is mounted on a white cane, and a drop of the white cane is detected from the detected acceleration. However, it is difficult to accurately detect, from the acceleration of the white cane alone, that the white cane is separated from the visually impaired person, which may cause erroneous detection. Moreover, in that case, it is impossible to detect a motion of the person looking for an accessory such as the white cane. This causes a problem that it is impossible to detect a situation in which a person may be in trouble, such as having dropped a white cane or looking for a white cane. Such a problem may arise with any accessory held by a person, without being limited to a person having a white cane.
An object of the present invention is to provide a detection method, a detection system, and a program capable of solving the above-described problem, that is, the problem that it is impossible to detect a situation in which a person having an accessory may be in trouble.
A detection method according to one aspect of the present invention is configured to include
detecting position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person, and
based on the position information, detecting that the accessory is separated from the person.
Further, a detection system according to one aspect of the present invention is configured to include
a position detection means for detecting position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person, and
a separation detection means for detecting that the accessory is separated from the person based on the position information.
Further, a program according to one aspect of the present invention is configured to cause an information processing device to realize
a position detection means for detecting position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person, and
a separation detection means for detecting that the accessory is separated from the person based on the position information.
With the configurations described above, the present invention can detect that an accessory is separated from a person with high accuracy.
A first exemplary embodiment of the present invention will be described with reference to
[Configuration]
A detection system of the present embodiment is used for detecting that a person P such as a visually impaired person releases a white cane W from his/her hand by dropping it or the like. Therefore, the detection system is used in a place which people visit, such as a station, an airport, a shopping district, or a shopping mall. However, an object to be detected by the detection system is not limited to a white cane. Any accessory may be a detection object as long as it is an accessory in a specific shape held by a predetermined part of the person P. For example, the detection system may be used for detecting glasses, a hat, a bag, or the like of the person P.
As illustrated in
The detection device 10 is configured of one or a plurality of information processing devices each having an arithmetic device and a storage device. As illustrated in
The position detection unit 11 (position detection means) acquires a captured image captured by the camera C. Then, the position detection unit 11 detects the person P shown in the captured image, and detects position information of a predetermined part of the person P. Specifically, the position detection unit 11 uses a posture estimation technique for detecting the skeleton of the person P as described in Non-Patent Literature 1 to specify each part of the person P, and detects position information of each part. At that time, the position detection unit 11 uses a learned model for detecting the skeleton of a person stored in the model storage unit 15 to detect position information of each part of the person P. As an example, as illustrated in the left drawing of
The position detection unit 11 detects, from the captured image, position information of the white cane W as an accessory in a specific shape held by the person P. For example, as illustrated in the left drawing of
The position detection unit 11 does not necessarily detect position information of each part of the person P by using the posture estimation technique for detecting the skeleton of the person P as described above. The position detection unit 11 may detect position information of each part of the person by means of any method. Also, the position detection unit 11 does not necessarily detect position information of the white cane W by the above-described method, and may do so by any method. For example, the position detection unit 11 may detect each piece of position information by using a sensor attached to a predetermined part of the person P such as a wrist, or a sensor mounted on the white cane W, without using captured images. Note that when the position detection unit 11 detects position information of an accessory other than the white cane W, the position detection unit 11 may extract the accessory in a captured image based on predetermined shape information of the accessory and detect position information of the accessory.
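Purely as a non-limiting illustration, the processing of the position detection unit 11 could be organized as in the following Python sketch. Here, pose_model and cane_detector stand for any skeleton estimation model (for example one according to Non-Patent Literature 1) and any shape-based cane detector, and their estimate()/locate() interfaces are assumptions made only for this sketch.

```python
# Illustrative sketch only. The pose_model and cane_detector objects and their
# estimate()/locate() interfaces are hypothetical stand-ins for a skeleton
# estimation model and a shape-based white-cane detector.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]  # (x, y) position in the captured image


@dataclass
class PositionInfo:
    person_parts: Dict[str, Point]  # e.g. {"right_wrist": (x, y), "head": (x, y), ...}
    cane_grip: Optional[Point]      # position of the grip end of the white cane W
    cane_tip: Optional[Point]       # position of the tip end of the white cane W


def detect_positions(frame, pose_model, cane_detector) -> PositionInfo:
    """Detect position information of predetermined parts of the person P
    and of the white cane W from one captured image."""
    parts = pose_model.estimate(frame)        # skeleton keypoints of the person P
    grip, tip = cane_detector.locate(frame)   # both ends of the rod-shaped accessory
    return PositionInfo(person_parts=parts, cane_grip=grip, cane_tip=tip)
```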
The separation detection unit 12 (separation detection means) calculates a distance D between a predetermined part of the person P and a predetermined part of the white cane W by using the position information of the person P and the position information of the white cane W detected as described above. Then, the separation detection unit 12 detects that the white cane W is separated from the person P based on the calculated distance D. For example, as illustrated in the right drawing of
However, the above-described method is an example. The separation detection unit 12 may detect that the white cane W is separated from the person P by another method. For example, the separation detection unit 12 may calculate a distance between the gravity center position of the person P and the gravity center position of the white cane W, and detect that the white cane W is separated from the person P according to such a distance.
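As one possible illustration of the distance-based judgment by the separation detection unit 12, the sketch below compares the wrist position of the person P with the grip position of the white cane W against a preset distance and a preset duration. The concrete threshold values and the choice of the wrist and the grip as the compared parts are assumptions made only for this example.

```python
# Illustrative sketch only; the thresholds below are assumed example values,
# not values prescribed by the embodiment.
import math
import time

PRESET_DISTANCE = 80.0   # preset distance (pixels) beyond which the cane counts as separated
PRESET_DURATION = 2.0    # preset time (seconds) the separation must continue


class SeparationDetector:
    """Detects that the white cane W is separated from the person P based on
    the distance D between a predetermined part of the person and the cane."""

    def __init__(self) -> None:
        self._separated_since = None  # time at which D first exceeded the preset distance

    def update(self, wrist, cane_grip, now=None) -> bool:
        now = time.monotonic() if now is None else now
        d = math.dist(wrist, cane_grip)      # distance D between wrist and grip
        if d < PRESET_DISTANCE:
            self._separated_since = None     # the cane is (again) near the hand
            return False
        if self._separated_since is None:
            self._separated_since = now      # separation just started
        # Detect separation once it has continued for the preset time or more.
        return (now - self._separated_since) >= PRESET_DURATION
```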
The posture detection unit 14 (posture detection means) detects the posture of the person P after it is detected that the white cane W is separated from the person P as described above. For example, the posture detection unit 14 acquires a captured image captured by the camera C, detects the person P shown in the captured image, and detects position information of a predetermined part of the person P. Specifically, the posture detection unit 14 uses the posture estimation technique for detecting the skeleton of the person P as described above to specify each part of the person P, and detects position information of each body part. For example, as illustrated in the left drawing of
As described above, when the notifying unit 15 (notifying means) detects that the white cane W is separated from the person P, the notifying unit 15 performs a notifying process to transmit notification information, including the fact that there is a person P who dropped the white cane W, to the information processing terminal UT of the surveillant U. At that time, the notifying unit 15 specifies the position of the camera C from the identification information of the camera C that captured the image from which it is detected that the white cane W is separated from the person P, and transmits the position information of the camera C to the information processing terminal UT as position information of the place where the person P is present.
Further, the notifying unit 15 (second notifying means) detects the posture of the person P after it is detected that the white cane W is separated from the person P as described above, and, according to the posture and the motion, performs a notifying process (second notifying process) to transmit notification information to the information processing terminal UT of the surveillant U. For example, when the notifying unit 15 detects that the posture of the person P is a bending posture or detects that the person P moves so as to look for the white cane W, the notifying unit 15 transmits, to the information processing terminal UT as notification information, the fact that the person P is looking for the white cane W together with position information that can be specified from the camera C that captured the image from which the fact was detected.
Note that the notification information to be notified to the information processing terminal UT of the surveillant U by the notifying unit 15 is not limited to the information of the above-described content, and may be other information. Further, the notifying unit 15 does not necessarily notify notification information including the information that there is a person P who dropped the white cane W as described above to the information processing terminal UT. The notifying unit 15 may operate to notify only notification information including information that the person P is looking for the white cane W according to the posture of the person P thereafter.
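As a rough sketch of how such notification information might be assembled and sent, the following example is given for illustration only. CAMERA_POSITIONS and send_to_terminal are assumed placeholders for the installation-position lookup of each camera C and the transmission to the information processing terminal UT, and the message fields are example content rather than prescribed ones.

```python
# Illustrative sketch only; CAMERA_POSITIONS and send_to_terminal are assumed
# placeholders, and the message fields are example content.
from typing import Callable, Dict

CAMERA_POSITIONS: Dict[str, str] = {"camera-01": "ticket gate, 1st floor"}  # camera ID -> place


def notify(event: str, camera_id: str, send_to_terminal: Callable[[dict], None]) -> None:
    """Send notification information about the person P to the terminal UT."""
    notification = {
        "event": event,  # e.g. "white cane dropped" or "looking for white cane"
        "position": CAMERA_POSITIONS.get(camera_id, "unknown"),  # where the person P is presumed to be
        "camera_id": camera_id,
    }
    send_to_terminal(notification)
```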
[Operation]
Next, operation of the detection device 10 described above will be described with reference mainly to the flowchart of
Then, the detection device 10 calculates the distance between the person P and the white cane W, and detects that the white cane W is separated from the person P according to the distance (Yes at step S2, step S3). For example, as illustrated in
Then, the detection device 10 uses the captured image acquired after detecting that the white cane W is separated from the person P to detect the posture of the person P (step S4). For example, the detection device 10 uses a posture estimation technique for detecting the skeleton of the person P to specify each part of the person P, detects position information of each part, and detects the posture of the person P according to the position relationship between the parts. Then, as illustrated in
As described above, the present embodiment detects position information of a predetermined part of the person P and the white cane W, and detects that the white cane W is separated from the person P based on such position information. Therefore, it is possible to detect that the white cane W is separated from the person P accurately, and a prompt and appropriate assisting action can be taken with respect to the person P. Further, by detecting the posture of the person P who dropped the white cane W, it is possible to detect that the person P is looking for the white cane W accurately, and further, to take a prompt and appropriate assisting action for the person P.
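How the posture is judged from the positional relationship between the parts in step S4 is not limited by the embodiment; one possible sketch of a bending-posture check is shown below, in image coordinates where y increases downward. The parts compared and the 0.6 ratio are assumptions made only for this illustration.

```python
# Illustrative sketch only; the parts compared and the 0.6 ratio are assumed
# example choices for judging a bending (stooping) posture.
def is_bending(parts) -> bool:
    """Judge from skeleton keypoints whether the person P takes a bending posture,
    e.g. while looking for the dropped white cane W (y grows downward in the image)."""
    head_y = parts["head"][1]
    hip_y = (parts["left_hip"][1] + parts["right_hip"][1]) / 2.0
    knee_y = (parts["left_knee"][1] + parts["right_knee"][1]) / 2.0
    # When the person stoops, the head approaches hip height, so the vertical
    # head-to-hip span becomes small relative to the hip-to-knee span.
    return (hip_y - head_y) < 0.6 * (knee_y - hip_y)
```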
<Modifications>
Next, another example of detecting that the person P is looking for the white cane W by the detection device 10 will be described with reference to the flowchart of
Note that the detection device 10 does not necessarily detect position information of each part of the person P and position information of the white cane W in the manner described above. The detection device 10 may detect the person P with the white cane W by means of another method. For example, the detection device 10 may detect the person P with the white cane W by detecting the white cane W based on the shape and color characteristics of the white cane W and detecting the person P who is present near the white cane W from the feature amount of the object or of the motion. Further, unlike the above description, in the present modification the detection device 10 does not detect that the white cane W is separated from the person P, as in the case where the person P dropped the white cane W.
Then, the detection device 10 uses the captured image acquired after detecting the person P having the white cane in hand to detect the posture of the person P (step S12). For example, the detection device 10 uses a posture estimation technique for detecting the skeleton of the person P to specify each part of the person P, detects position information of each part, and detects the posture of the person P according to the position relationship between the parts. Then, when the detection device 10 detects that the person P is in a bending posture as illustrated in the left drawing of
As described above, in the present modification, the person P having the white cane W is detected first, and it is determined from the posture of the person P that the person P is looking for the white cane W. Therefore, it is possible to accurately detect that the person P may be in trouble, for example looking for an accessory such as the white cane W, and to take a prompt and appropriate assisting action for the person P.
Next, a second exemplary embodiment of the present invention will be described with reference to
First, a hardware configuration of a detection system 100 in the present embodiment will be described with reference to
The detection system 100 can construct, and can be equipped with, a position detection means 121 and a separation detection means 122 illustrated in
Note that
The detection system 100 executes the detection method illustrated in the flowchart of
As illustrated in
detect position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person (step S101), and
based on the position information, detect that the accessory is separated from the person (step S102).
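Purely as an illustration of how these two steps could be carried out per captured image, the following sketch composes the hypothetical detect_positions function and SeparationDetector class from the earlier sketches; the choice of the right wrist as the predetermined part is likewise an assumption for this example only.

```python
# Illustrative sketch only, reusing detect_positions and SeparationDetector
# from the earlier sketches; the choice of the right wrist is an assumption.
def detect(frame, pose_model, cane_detector, separation_detector) -> bool:
    # Step S101: detect position information of a predetermined part of the
    # person and of the accessory in a specific shape held by the person.
    info = detect_positions(frame, pose_model, cane_detector)
    wrist = info.person_parts.get("right_wrist")
    if wrist is None or info.cane_grip is None:
        return False  # required positions could not be detected in this image
    # Step S102: based on the position information, detect that the accessory
    # is separated from the person.
    return separation_detector.update(wrist, info.cane_grip)
```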
Since the present invention is configured as described above, the present invention detects position information of a predetermined part of a person and of an accessory, and based on the position information, detects that the accessory is separated from the person. Therefore, it is possible to accurately detect that an accessory such as a white cane is separated from the person, that is, to accurately detect a situation in which the person may be in trouble, and to take a prompt and appropriate assisting action for the person.
Note that the program described above can be supplied to a computer by being stored in a non-transitory computer-readable medium of any type. Non-transitory computer-readable media include tangible storage media of various types. Examples of non-transitory computer-readable media include magnetic storage media (for example, a flexible disk, a magnetic tape, and a hard disk drive), magneto-optical storage media (for example, a magneto-optical disk), a CD-ROM (Read Only Memory), a CD-R, a CD-R/W, and semiconductor memories (for example, a mask ROM, a PROM (Programmable ROM), an EPROM (Erasable PROM), a flash ROM, and a RAM (Random Access Memory)). Note that the program may also be supplied to a computer by being stored in a transitory computer-readable medium of any type. Examples of transitory computer-readable media include electric signals, optical signals, and electromagnetic waves. A transitory computer-readable medium can supply the program to a computer via a wired communication channel such as an electric wire or an optical fiber, or via a wireless communication channel.
While the present invention has been described with reference to the exemplary embodiments described above, the present invention is not limited to the above-described embodiments. The form and details of the present invention can be changed within the scope of the present invention in various manners that can be understood by those skilled in the art. Further, at least one of the functions of the position detection means 121 and the separation detection means 122 described above may be carried out by an information processing device provided and connected to any location on the network, that is, may be carried out by so-called cloud computing.
The whole or part of the exemplary embodiments disclosed above can be described as the following supplementary notes. Hereinafter, outlines of the configurations of a detection method, a detection system, and a program according to the present invention will be described. However, the present invention is not limited to the configurations described below.
(Supplementary note 1)
A detection method comprising:
detecting position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person; and
based on the position information, detecting that the accessory is separated from the person.
(Supplementary note 2)
The detection method according to supplementary note 1, further comprising
detecting the position information of the predetermined part of the person and the accessory by detecting a skeleton of the person from a captured image in which the person is captured.
(Supplementary note 3)
The detection method according to supplementary note 1 or 2, further comprising
detecting that the accessory is separated from the person by a preset distance or more.
(Supplementary note 4)
The detection method according to any of supplementary notes 1 to 3, further comprising
detecting that the accessory is separated from the person for a preset time or more.
(Supplementary note 5)
The detection method according to any of supplementary notes 1 to 4, further comprising
when detecting that the accessory is separated from the person, performing a preset notifying process.
(Supplementary note 6)
The detection method according to any of supplementary notes 1 to 5, further comprising
detecting a posture of the person after it is detected that the accessory is separated from the person.
(Supplementary note 7)
The detection method according to supplementary note 6, further comprising
detecting the posture of the person by detecting a skeleton of the person from a captured image in which the person is captured after it is detected that the accessory is separated from the person.
(Supplementary note 8)
The detection method according to supplementary note 6 or 7, further comprising detecting a motion of the person on the basis of the detected posture of the person.
(Supplementary note 9)
The detection method according to supplementary note 7 or 8, further comprising performing a preset second notifying process on the basis of the detected posture of the person.
(Supplementary note 10)
The detection method according to any of supplementary notes 1 to 9, wherein
the accessory is a rod-shaped body having a predetermined length.
(Supplementary note 11)
The detection method according to supplementary note 10, wherein
the accessory is a white cane.
(Supplementary note 12)
A detection system comprising:
position detection means for detecting position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person; and
separation detection means for detecting that the accessory is separated from the person based on the position information.
(Supplementary note 13)
The detection system according to supplementary note 12, wherein
the position detection means detects the position information of the predetermined part of the person and the accessory by detecting a skeleton of the person from a captured image in which the person is captured.
(Supplementary note 14)
The detection system according to supplementary note 12 or 13, wherein
the separation detection means detects that the accessory is separated from the person by a preset distance or more.
(Supplementary note 15)
The detection system according to any of supplementary notes 12 to 14, wherein
the separation detection means detects that the accessory is separated from the person for a preset time or more.
(Supplementary note 16)
The detection system according to any of supplementary notes 12 to 15, further comprising
notifying means for, when detecting that the accessory is separated from the person, performing a preset notifying process.
(Supplementary note 17)
The detection system according to any of supplementary notes 12 to 16, further comprising
posture detection means for detecting a posture of the person after it is detected that the accessory is separated from the person.
(Supplementary note 18)
The detection system according to supplementary note 17, wherein
the posture detection means detects the posture of the person by detecting a skeleton of the person from a captured image in which the person is captured after it is detected that the accessory is separated from the person.
(Supplementary note 19)
The detection system according to supplementary note 17 or 18, wherein
the posture detection means detects a motion of the person on the basis of the detected posture of the person.
(Supplementary note 20)
The detection system according to supplementary note 18 or 19, further comprising
second notifying means for performing a preset second notifying process on the basis of the detected posture of the person.
(Supplementary note 21)
A computer-readable medium storing thereon a program for causing an information processing device to realize:
position detection means for detecting position information representing a position of a predetermined part of a person and a position of an accessory in a specific shape held by the person; and
separation detection means for detecting that the accessory is separated from the person based on the position information.
(Supplementary note 22)
The computer-readable medium according to supplementary note 21, the medium storing thereon the program for causing the information processing device to further realize
notifying means for, when detecting that the accessory is separated from the person, performing a preset notifying process.
(Supplementary note 23)
The computer-readable medium according to supplementary note 21, the medium storing thereon the program for causing the information processing device to further realize
posture detection means for detecting a posture of the person after it is detected that the accessory is separated from the person.
(Supplementary note 24)
The computer-readable medium according to supplementary note 21, the medium storing thereon the program for causing the information processing device to further realize
second notifying means for performing a preset second notifying process on the basis of the detected posture of the person.
(Supplementary note A1)
A detection method comprising:
detecting a person having an accessory;
detecting a posture of the person thereafter; and
based on the detected posture of the person, detecting that the person takes a preset specific posture.
(Supplementary note A2)
The detection method according to supplementary note A1, further comprising detecting the posture of the person by detecting a skeleton of the person from a captured image in which the person is captured.
(Supplementary note A3)
The detection method according to supplementary note A1 or A2, further comprising, based on the detected posture of the person, detecting that the person takes a posture to look for the accessory as the specific posture.
(Supplementary note A4)
The detection method according to any of supplementary notes A1 to A3, further comprising
when detecting that the person takes the specific posture, performing a preset notifying process.
Filing Document: PCT/JP2020/011763
Filing Date: 3/17/2020
Country: WO