This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2022-089044, filed on May 31, 2022, the disclosure of which is incorporated herein in its entirety by reference.
The present disclosure relates to an information processing device, a recording medium, and an information processing method.
A method of obtaining a region including an object for each object from an image including a plurality of objects and performing a process based on a relationship between the regions is known. For example, Patent Literature 1 (JP 2020-198053 A) discloses an information processing device that detects a region overlapping with an object and a region not overlapping with the object in a person and determines a certainty factor of a feature amount regarding an attribute grasped from each region.
Although the information processing device disclosed in JP 2020-198053 A focuses on a region overlapping with an object and a region not overlapping with the object in a person, there is room for further study on a method of processing information based on a third region obtained from a plurality of regions.
In view of the above-described problems, an object of the present disclosure is to provide an information processing device, a recording medium, and an information processing method capable of performing a process in consideration of a third region obtained from a plurality of regions detected from an image.
Exemplary features and advantages of the present invention will become apparent from the following detailed description when taken in conjunction with the accompanying drawings.
The first example embodiment of the present disclosure will be described with reference to
The image acquisition unit 11 serves as an image acquisition means and acquires an image. The object detection unit 12 serves as an object detection means, and detects a first region including a first object and a second region including a second object from the acquired image. The first object is, for example, a person, an obstacle, a shield, or the like. The obstacle or the shield is, for example, a moving object or a building. The second object is a person different from the person detected as the first object, or various objects carried by the person. The various objects carried by the person may include, for example, various objects classified as furniture, fruits, musical instruments, tools, instruments, and the like. The processing unit 15 serves as a processing means, and performs a process related to the second object based on a third region obtained from the first region and the second region. The third region is, for example, a region obtained by excluding the part in which the first region and the second region overlap each other, the part of one region that overlaps the other region, or the like. More specifically, it is the part of the second region that remains after the first region is excluded. Examples of the process related to the second object include a process of identifying details of the second object, a process of making a notification related to the second object, and the like.
Next, an information processing method performed by the information processing device 10 according to the present example embodiment will be described with reference to a flowchart of
First, the image acquisition unit 11 acquires an image (S101). Next, the object detection unit 12 detects a first region including the first object and a second region including the second object from the acquired image (S102). The processing unit 15 then performs a process related to the second object based on a third region obtained from the first region and the second region. The third region is, for example, the part of the second region that remains after excluding its overlap with the first region.
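As a concrete illustration of this exclusion, the following is a minimal sketch assuming axis-aligned bounding boxes rasterized to pixel masks; the function name and box format are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def third_region_mask(first_box, second_box, image_shape):
    """Third region: the part of the second region that remains after
    excluding its overlap with the first region.
    Boxes are (x1, y1, x2, y2) in pixels; the result is a boolean mask."""
    height, width = image_shape
    first = np.zeros((height, width), dtype=bool)
    second = np.zeros((height, width), dtype=bool)
    fx1, fy1, fx2, fy2 = first_box
    sx1, sy1, sx2, sy2 = second_box
    first[fy1:fy2, fx1:fx2] = True
    second[sy1:sy2, sx1:sx2] = True
    return second & ~first  # second region minus the overlap with the first

# Example: a person box partially covering the box of a carried object.
mask = third_region_mask((50, 20, 150, 220), (120, 80, 260, 120), (240, 320))
print(mask.sum(), "pixels remain in the third region")
```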
As described above, according to the information processing device 10 of the first example embodiment, the object detection unit 12 obtains the first region including the first object and the second region including the second object from the image. The processing unit 15 can perform a process related to the second object based on the third region obtained from the first region and the second region. As a result, it is possible to perform a process in consideration of the third region obtained from the plurality of regions detected from the image.
A second example embodiment in the present disclosure will be described with reference to
The image acquisition unit 21 acquires images from various devices. The various devices are, for example, an imaging device, a mobile terminal, a storage device, and the like. The imaging device is not limited to a monitoring camera that monitors a specific area, but may be a wearable camera that can be worn by a person, a drone, an in-vehicle camera, or the like. The mobile terminal may be a portable terminal such as a smartphone. The storage device may be a database, a cloud storage, or the like. Such various devices may be configured integrally with the information processing device 20.
The object detection unit 22 detects a first region including the first object and a second region including the second object from the image. The first object is, for example, a person, an obstacle, a shield, or the like. The obstacle or the shield is, for example, a moving object or a building. The second object is a person different from the person detected as the first object or various objects carried by the person. The various objects carried by the person may include, for example, various objects classified as furniture, fruits, musical instruments, tools, instruments, and the like.
The object detection unit 22 learns a feature amount of an object and detects, from an image, a region including an object similar to the learned feature amount. Examples of such detection methods include region-based convolutional neural networks (R-CNN), You Only Look Once (YOLO), the single shot multibox detector (SSD), and the like. The object detection unit 22 may implement the process of detecting the first region including the first object and the process of detecting the second region including the second object by different methods. The object detection unit 22 may also be configured separately as a first detection unit that detects the first region including the first object and a second detection unit that detects the second region including the second object. Furthermore, the first detection unit and the second detection unit may be incorporated in different devices.
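As one possible realization (an illustrative assumption, not the method of the disclosure), the detection of both regions could be built on torchvision's pretrained Faster R-CNN, a member of the R-CNN family named above, treating persons as first-object candidates and all other detected objects as second-object candidates:

```python
import torch
import torchvision

# Pretrained Faster R-CNN on COCO; label 1 is "person" in this model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_regions(image_tensor, score_thresh=0.5, person_label=1):
    """Return first regions (persons) and second regions (other objects)
    as (x1, y1, x2, y2) boxes from a CHW float image tensor in [0, 1]."""
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] > score_thresh
    boxes, labels = output["boxes"][keep], output["labels"][keep]
    first = [b.tolist() for b, l in zip(boxes, labels) if l == person_label]
    second = [b.tolist() for b, l in zip(boxes, labels) if l != person_label]
    return first, second
```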
The object detection unit 22 may extract regions having various shapes for the region including the object. The shape of such a region will be described with reference to
The region analysis unit 23 obtains region information from the third region obtained from the first region and the second region. The third region is, for example, a region obtained by excluding the part in which the first region and the second region overlap each other, the part of one region that overlaps the other region, or the like. More specifically, it is the part of the second region that remains after excluding its overlap with the first region. The region information is information obtained from the shape of the third region and its position on the image. For example, it includes a length of the third region, an area of the third region, position information about the third region, and the like. Other than these, for example, the aspect ratio of the third region may be obtained. The length of the third region, the area of the third region, and the position information about the third region will be described in detail with reference to
For example, as illustrated in
The first direction and the second direction may be determined based on, for example, the image 100. For example, a direction along the horizontal direction of the image 100 may be defined as the first direction, and a direction along the vertical direction of the image 100 may be defined as the second direction. Conversely, a direction along the vertical direction of the image 100 may be defined as the first direction, and a direction along the horizontal direction of the image 100 may be defined as the second direction. The second direction may simply be defined, with reference to the first direction, as the direction orthogonal to it. In
The first length 131 may be either the shortest length or the longest length among the lengths along the first direction obtained from the third region 130. Similarly, the second length 132 may be either the shortest length or the longest length among the lengths along the second direction.
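The following minimal sketch shows how the first length 131, the second length 132, and the area 133 might be computed from a third-region mask; the "longest length" variant is used for both directions, and the function name and mask representation are assumptions for illustration.

```python
import numpy as np

def region_lengths_and_area(mask):
    """First length: the longest extent along the horizontal direction among
    the rows of the region.  Second length: the longest extent along the
    vertical direction among the columns.  Area: pixel count of the mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0, 0, 0
    first_length = max(int(xs[ys == y].ptp()) + 1 for y in np.unique(ys))
    second_length = max(int(ys[xs == x].ptp()) + 1 for x in np.unique(xs))
    return first_length, second_length, int(mask.sum())
```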
For example, as illustrated in
For example, as illustrated in
The position information 134 of the third region may represent a relative positional relationship with the first object in consideration of the position of the first object on the image, the orientation of the first object, and the like. For example, it may represent a positional relationship such as "located to the right of the first object" based on the position of the first object on the image. Further, it may represent a positional relationship including a direction, such as "located in front of the first object", based on the orientation of the first object.
In order to represent the relative positional relationship with the first object, for example, the position of the third region with respect to the first object may be obtained by the following method. For example, the position of the third region with respect to the first object may be determined by comparing the position of the first object on the image with the position of the third region 130 on the image. The position of the third region with respect to the first object may be obtained by comparing the orientation of the first object, the position of the first object on the image, and the position of the third region 130 on the image. The orientation of the first object may be obtained using, for example, a known posture estimation technique. Alternatively, it may be identified by obtaining the rotation angle of the first object.
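As a sketch, a coarse relative position might be obtained by centroid comparison as follows; orientation-aware relations such as "in front of" would additionally require the posture or rotation angle noted above, and the function name is illustrative.

```python
import numpy as np

def relative_position(first_box, third_mask):
    """Coarse left/right position of the third region relative to the first
    object, by comparing horizontal centroids on the image."""
    ys, xs = np.nonzero(third_mask)
    center_x_third = xs.mean()
    fx1, _, fx2, _ = first_box
    center_x_first = (fx1 + fx2) / 2
    if center_x_third > center_x_first:
        return "located to the right of the first object"
    return "located to the left of the first object"
```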
As illustrated in
The processing unit 25 performs a process related to the second object based on the third region 130 obtained from the first region 110 and the second region 120. In a case where the region information obtained by the region analysis unit 23 satisfies a predetermined condition, the processing unit 25 may be configured to perform the process related to the second object. Examples of the process related to the second object include a process of identifying details of the second object, a process of making a notification related to the second object, and the like.
The predetermined condition is, for example, that the first length 131 in the third region 130 is longer than a threshold value, that the second length 132 in the third region 130 is longer than a threshold value, or the like. The predetermined condition may further include that the area 133 of the third region is larger than the threshold value. Further, the predetermined condition may include that the third region 130 is present at a predetermined position. The predetermined position may be, for example, a specific position on the image, or may be represented by a relative positional relationship with the first object, such as “the third region 130 is located in front of the first object”. The threshold value may be a common value or may be different for each predetermined condition.
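As a sketch, one such combination of conditions might be checked as follows; the threshold values, the particular combination, and the dictionary format of the region information are illustrative assumptions.

```python
def satisfies_condition(info, length_thresh=60, area_thresh=1500):
    """One possible predetermined condition: a length OR area threshold,
    combined with a positional constraint on the third region."""
    return ((info["first_length"] > length_thresh
             or info["second_length"] > length_thresh
             or info["area"] > area_thresh)
            and info["position"] == "located in front of the first object")
```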
A series of processes performed by the information processing device 20 of the present example embodiment will be described with reference to a flowchart of
First, the image acquisition unit 21 acquires images from various devices (S201). Next, the object detection unit 22 detects the first region 110 including the first object and the second region 120 including the second object from the image acquired by the image acquisition unit 21 (S202).
Then, the region analysis unit 23 obtains region information from the third region 130 obtained from the first region 110 and the second region 120 (S203). The third region 130 is, for example, the part of the second region 120 that remains after excluding its overlap with the first region 110. The region information is, for example, the first length 131 of the third region 130. The first length 131 is, for example, the longest length, within the third region 130, along the horizontal direction of the image 100. In the subsequent processing, the information processing device 20 determines whether the region information satisfies a predetermined condition (S204). The predetermined condition is, for example, that the first length 131 of the third region 130 is longer than a threshold value.
Then, in a case where it is determined in S204 that the region information satisfies the predetermined condition, the processing unit 25 performs a process related to the second object (S205), and the flow ends. Examples of the process related to the second object include a process of identifying details of the second object, a process of making a notification related to the second object, and the like. On the other hand, in a case where it is determined in S204 that the region information does not satisfy the predetermined condition, the flow ends.
In the flowchart illustrated in
As described above, according to the information processing device 20 of the present example embodiment, the region analysis unit 23 can obtain the region information about the third region 130. The processing unit 25 can perform a process related to the second object based on the region information. This makes it possible to accurately perform a process related to the second object.
The second example embodiment in the present disclosure is not limited to the above-described aspect, but may be configured as follows, for example.
For the objects detected by the object detection unit 22, the information processing device 20 may be configured to record, for each detected first object, the execution result of the process related to the second object in a history. The history may also record that the process related to the second object was not performed. According to such a configuration, it is possible to suppress repetition of the process related to the second object for an object that has already been processed. It is also possible to give priority to a first object for which the process related to the second object has not yet been performed.
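A minimal sketch of such a history, assuming each first object can be keyed by a tracking identifier (an assumption; the disclosure does not specify the key):

```python
# Hypothetical history keyed by a tracking identifier of the first object;
# it also records the case where the process was not performed.
history = {}

def record_result(first_object_id, performed):
    history[first_object_id] = {"processed": performed}

def needs_processing(first_object_id):
    # First objects with no entry yet can be given priority.
    return first_object_id not in history
```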
The information processing device 20 may limit the object candidates to be detected by the object detection unit 22. In a case where the candidates of the object to be detected are limited, for example, the limitation may be achieved by setting a priority.
The priority may be set according to, for example, the type of the object to be detected or the size of the object. For example, the priority may be set for a rough category such as “person”, “moving object”, or “building”. Alternatively, the priority may be set for a finer category such as “adult”, “car”, or “hospital”. The priority may be achieved, for example, by performing weighting for each category. Similarly, weighting may be performed according to the size of the object appearing in the image 100. For example, the larger the object, the higher the weight is set. The process of limiting the object candidates to be detected may be performed only when the first object is detected. Alternatively, the process may be performed only when the second object is detected. According to such a configuration, since the type of the object to be detected from the image can be limited, the process related to the second object can be efficiently performed.
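A sketch of such candidate limitation follows, assuming each detection carries a category name and a box; the categories, weights, and size scaling are illustrative assumptions.

```python
# Illustrative category weights for limiting detection candidates.
CATEGORY_WEIGHTS = {"person": 1.0, "moving object": 0.8, "building": 0.3}

def prioritized_candidates(detections, min_weight=0.5):
    """Keep detections whose category weight, scaled by apparent size in the
    image, exceeds a cutoff; larger objects receive higher weight."""
    ranked = []
    for det in detections:  # det: {"category": str, "box": (x1, y1, x2, y2)}
        x1, y1, x2, y2 = det["box"]
        size_factor = min(1.0, (x2 - x1) * (y2 - y1) / 10000.0)
        weight = CATEGORY_WEIGHTS.get(det["category"], 0.1) * size_factor
        if weight >= min_weight:
            ranked.append((weight, det))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [det for _, det in ranked]
```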
The third example embodiment of the present disclosure will be described with reference to
In addition to the functions of the region analysis unit 23 of the second example embodiment described above, the region analysis unit 33 obtains region information based on a plurality of third regions 130 obtained from a plurality of images. The process of obtaining region information from the plurality of third regions 130 will be described with reference to
As illustrated in
Then, the region analysis unit 33 obtains region information based on the plurality of first lengths 131 obtained in a certain period. For example, a function determined by the first lengths 131 obtained at times t1 to t3 may be integrated over the period. Other than this, the region information may be obtained by adding, subtracting, or averaging the pieces of region information obtained from the respective images acquired within the certain period. Furthermore, instead of a certain period, the region information may be obtained from a certain number of images, or from a certain number of third regions 130.
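The following sketch shows one way the aggregation might look, assuming (time, first length) samples collected over the period; the trapezoidal rule as the integral and the mean as the average are illustrative choices.

```python
import numpy as np

def aggregate_first_lengths(samples):
    """samples: (time, first_length) pairs collected over a certain period,
    e.g. at times t1 to t3.  Returns the integral of the piecewise-linear
    length curve and, as an alternative aggregate, the average."""
    times, lengths = zip(*sorted(samples))
    return float(np.trapz(lengths, times)), float(np.mean(lengths))

# e.g. first lengths observed at t1 = 0 s, t2 = 1 s, t3 = 2 s
integral, average = aggregate_first_lengths([(0.0, 35.0), (1.0, 42.0), (2.0, 38.0)])
```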
In
A series of processes performed by the information processing device 30 of the present example embodiment will be described with reference to a flowchart of
First, the information processing device 30 acquires the images 100 from various devices and obtains region information from each image 100 (S301 to S303). Thereafter, the information processing device 30 determines whether a certain period has elapsed (S304). The certain period may be measured, for example, from the start of the flow, or from the acquisition of the first image after the flow starts. Alternatively, the measurement may be started with a specific process as a starting point.
Then, in a case where it is determined in S304 that the certain period has elapsed, the region analysis unit 33 obtains region information based on the pieces of region information obtained from the respective images (S305). For example, the sum of the first lengths 131 of the third regions 130 is obtained. Other than this, the sum of the areas 133 of the third regions may be obtained. On the other hand, when the certain period has not elapsed in S304, the process returns to S301, and the processes of S301 to S303 are performed again.
Next, the information processing device 30 determines whether the region information satisfies a predetermined condition (S306). The predetermined condition is, for example, that the sum of the first lengths 131 of the third regions 130 is larger than a threshold value. Other than this, it may be determined whether the sum of the areas 133 of the third regions is larger than a threshold value. Then, in a case where it is determined in S306 that the region information satisfies the predetermined condition, the process related to the second object is performed (S307), and the flow ends. Examples of the process related to the second object include a process of identifying details of the second object, a process of making a notification related to the second object, and the like. On the other hand, in a case where it is determined that the region information does not satisfy the predetermined condition, the flow ends.
In the flowchart illustrated in
According to such an information processing device 30, the region analysis unit 33 can obtain the region information based on the plurality of third regions 130 obtained from the plurality of images. As a result, the process related to the second object can be performed more accurately.
The third example embodiment in the present disclosure is not limited to the above-described aspect, but may be configured as follows, for example.
The information processing device 30 may be configured to perform a flow line analysis or the like on the objects detected by the object detection unit 32. Then, whether the first object appearing in one image is the same as the first object appearing in another image may be determined based on the flow line. According to this, it is possible to identify whether the objects appearing in the plurality of images are the same object. Therefore, it is possible to avoid adding, into the aggregate, a third region 130 obtained from a second object and a first object different from the first object detected in the other images.
The fourth example embodiment of the present disclosure will be described with reference to
The region adjustment unit 44 corrects the third region 130 or the region information based on any of the information about the image, the information about the first object, and the information about the second object. The information about the image may include information about the device that has generated the image. The information about the device that has generated the image may include, for example, a position, the imaging direction, a depression angle, a magnification of the lens, and the like of the imaging device. The information about the first object may include a position of the first object on the image, an orientation of the first object, and the like. The information about the second object may include a position of the second object on the image, an orientation of the second object, and the like. Further, the information about the first object and the information about the second object may include information obtained in consideration of images acquired in the past. The information obtained in consideration of the images acquired in the past is, for example, the position of the first object and the position of the second object at a specific time. A flow line of the first object, a flow line of the second object, and the like may be included.
A method of correcting the third region 130 will be described with reference to
The region adjustment unit 44 corrects the third region 130 based on the information about the image. The correction is performed using the moving direction of the first object. First, the region adjustment unit 44 acquires the moving direction of the first object appearing in the image 100. The moving direction of the first object may be obtained by analyzing a flow line or an optical flow of the first object. In a case where the first object is a person, the moving direction may be obtained by analyzing the line of sight of the person.
Then, an angle formed by the moving direction of the first object and a reference direction is obtained. As the reference direction, for example, the vertical direction or the horizontal direction of the image 100 may be used. Here, the reference direction is assumed to be the horizontal direction of the image 100. As a result, an angle formed by the moving direction V of the first object and the reference direction as illustrated in
Next, the region adjustment unit 44 corrects the third region 130. For example, as illustrated in
In the above example, the method of correcting the third region 130 is described, but the region information may be corrected by a similar method. In the case of correcting the region information, the region information obtained from the third region 130 may be multiplied by the value of a predetermined function that takes the angle as an argument.
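As a hedged sketch, the correction of the first length might look as follows, where the 1/|cos| factor is merely one illustrative choice of the predetermined function taking the angle as an argument; the disclosure leaves the exact function open.

```python
import math

def corrected_first_length(observed_length, moving_direction):
    """Correct the observed first length for the angle between the first
    object's moving direction and the horizontal reference direction."""
    vx, vy = moving_direction                    # e.g. from an optical flow
    angle = math.atan2(vy, vx)                   # angle vs. the horizontal
    # 1/|cos| compensates foreshortening along the horizontal direction;
    # clamped to avoid blow-up for near-vertical motion.
    scale = 1.0 / max(abs(math.cos(angle)), 0.1)
    return observed_length * scale
```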
The method is not limited to a method of performing correction based on the moving direction of the first object or the second object described above, but the region adjustment unit 44 may perform correction by the following method. For example, in a case where correction is performed based on information about the device that has generated the image 100, a function using a position, an imaging direction, a depression angle, a magnification of the lens, and the like of the imaging device as arguments may be defined. Such a function may be, for example, for conversion into the size or position of the third region 130 obtained when the first object and the second object are imaged from a predetermined viewpoint or distance. Similarly, the region information may be corrected by such a method.
Next, a method of correcting the region information and performing the process related to the second object based on the corrected region information will be described with reference to
The information processing device 40 acquires the images 100 from various devices and obtains region information from the image 100 (S401 to S403). The region adjustment unit 44 acquires information about the image (S404).
Next, the region adjustment unit 44 corrects the region information based on the acquired information about the image (S405). The region information corrected in this process is, for example, the first length 131 of the third region 130. Thereafter, the information processing device 40 determines whether the corrected region information satisfies a predetermined condition (S406). The predetermined condition is, for example, whether the first length 131 is larger than a threshold value.
In a case where it is determined in S406 that the predetermined condition is satisfied, the information processing device 40 performs a process related to the second object (S407), and the flow ends. Examples of the process related to the second object include a process of identifying details of the second object, a process of making a notification related to the second object, and the like. On the other hand, in a case where it is determined in S406 that the predetermined condition is not satisfied, the flow ends.
The processing order is not limited to the processing order in the flowchart illustrated in
As described above, according to the configuration of the fourth example embodiment, the region adjustment unit 44 can correct the third region 130 or the region information based on any of the information about the image, the information about the first object, and the information about the second object. As a result, the accuracy of the process related to the second object can be further improved. In processing various images, the third region or the region information obtained from each image can be converted to satisfy the same criterion, so that variation in the execution result of the process related to the second object from image to image can be suppressed.
The fifth example embodiment of the present disclosure will be described with reference to
The processing unit 55 identifies details of the second object based on the third region 130 obtained from the first region 110 and the second region 120. In a case where the region information satisfies a predetermined condition, the processing unit 55 may be configured to identify the details of the second object. The third region 130 is, for example, a region obtained by excluding the part in which the first region 110 and the second region 120 overlap each other, the part of one region that overlaps the other region, or the like. More specifically, it is the part of the second region 120 that remains after excluding its overlap with the first region 110. The details of the second object are not limited to, for example, the specific name of the second object, but may include a shape, a color, a size, and the like. The process of identifying the details of the second object may be achieved by using a detector adjusted to detect a specific object. For example, a detector for detecting a white cane is used.
The predetermined condition is, for example, that the first length 131 of the third region 130 is longer than a threshold value, that the second length 132 of the third region 130 is longer than a threshold value, or the like. The predetermined condition may further include that the area 133 of the third region is larger than a threshold value. Further, the predetermined condition may include that the third region 130 is present at a predetermined position. The predetermined position may be, for example, a specific position on the image, or may be represented by a relative positional relationship with the first object, such as "the third region 130 is located in front of the first object". It may be determined whether results obtained by integrating, adding, subtracting, or averaging a plurality of pieces of region information satisfy the various conditions described above. The threshold value may be a common value or may be different for each determination.
The processing unit 55 may set a predetermined condition according to the type of the object whose details are desired to be identified. Furthermore, in a case where a predetermined condition is satisfied, detection regarding a specific object may not be performed. For example, in a case where the position information 134 of the third region 130 indicates that “the third region 130 is in front of and behind the first object”, a detector for detecting a white cane is not applied.
The detector may be configured by combining a plurality of detectors. In the case where the detector includes a plurality of detectors, a label may be given to a specific detector. The label is, for example, “walking aid” or “rain gear”.
The processing unit 55 may identify the details of the second object in consideration of reference information other than the region information. For example, in a case where the first object is a person, an attribute of the person may be treated as the reference information. The attribute of the person is information such as age, gender, and facial expression. As a result, in a case where reference information indicating that the person is elderly is acquired from the image 100, it is possible to perform a process of preferentially applying a detector to which the label "walking aid" is given.
The reference information may include information about the environment acquired from the background of the image or the like. The information about the environment is, for example, information indicating the presence or absence of a place, facility, or equipment such as an intersection, a station yard, or a passage provided with a braille block. As a result, in a case where the reference information indicating that the second object exists around the braille block is acquired from the image 100, it is possible to perform a process of preferentially applying the detector to which the label of the “walking aid” is given.
Furthermore, the reference information may include information acquired from an external information providing device. Examples of the information acquired from such an external information providing device include information about weather and traffic. As a result, in a case where information such as rain and snow is acquired, it is possible to perform a process of preferentially applying a detector to which a label of the “walking aid” is given.
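A sketch of how the reference information might steer the order in which labeled detectors are applied follows; the registry, the detector names, and the dictionary keys are assumptions for illustration.

```python
# Hypothetical label-to-detector registry; the labels follow the text
# above, but the detector names are placeholders.
DETECTORS = {
    "walking aid": ["white_cane_detector", "walker_detector"],
    "rain gear": ["umbrella_detector"],
}

def detector_order(reference_info):
    """reference_info: dict with optional keys 'attribute' (e.g. 'elderly'),
    'environment' (e.g. 'braille block'), 'weather' (e.g. 'rain')."""
    prioritize = (reference_info.get("attribute") == "elderly"
                  or reference_info.get("environment") == "braille block"
                  or reference_info.get("weather") in ("rain", "snow"))
    labels = ["walking aid", "rain gear"] if prioritize else ["rain gear", "walking aid"]
    return [d for label in labels for d in DETECTORS[label]]
```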
A series of processes performed by the information processing device 50 of the present example embodiment will be described with reference to a flowchart of
The information processing device 50 acquires the images 100 from various devices and obtains region information from the image 100 (S501 to S503). Then, the processing unit 55 determines whether the first length 131 of the third region 130 is longer than a threshold value (S504).
When it is determined in S504 that the first length 131 is longer than the threshold value, the processing unit 55 further determines whether the third region 130 is present in front of the first object (S505). In a case where it is determined in S505 that the third region 130 is present in front of the first object, the processing unit 55 performs a process of identifying details of the second object (S506), and the flow ends. On the other hand, in a case where it is determined in S504 that the first length 131 is not longer than the threshold value, the flow ends. When it is determined in S505 that the third region 130 is not present in front of the first object, the flow ends.
In the flow illustrated in
Next, a usage example of the information processing device 50 of the present example embodiment will be described with reference to
Next, the region analysis unit 53 obtains region information based on the third region 130 obtained from the first region 110 and the second region 120. As illustrated in
Then, the processing unit 55 determines whether the third region 130 satisfies a predetermined condition based on the region information obtained by the region analysis unit 53. The predetermined condition in this usage example is that the first length 131 is longer than the threshold value and that the position information 134 indicates a location in front of the first object. As a result of the determination, since the third region 130 satisfies the predetermined condition, the processing unit 55 identifies the details of the second object.
As described above, according to the information processing device 50 of the present example embodiment, the processing unit 55 identifies the details of the second object based on the third region 130. This makes it possible to efficiently perform a process of identifying details of the object.
The fifth example embodiment in the present disclosure is not limited to the above-described aspect, but may be configured as follows, for example.
The information processing device 50 may be configured to consider region information about the second region 120. The region information about the second region 120 may include values equivalent to the various values that can be obtained by the region analysis unit 53. For example, it may include the first length, the second length, the aspect ratio, and the like of the second region 120. Then, for example, in a case where the aspect ratio of the second region 120 falls within a specific range, the information processing device 50 may perform the process of identifying details regarding the second object. Other than this, the details regarding the second object may be identified in a case where the aspect ratio of the second region 120 is larger or smaller than a threshold value. Such a determination based on the region information about the second region 120 may be performed either before or after it is determined whether the region information about the third region 130 satisfies a predetermined condition. According to this, it is possible to further improve the efficiency of the process of identifying details regarding the second object.
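A minimal sketch of such an aspect-ratio pre-filter on the second region 120, with illustrative bounds:

```python
def aspect_ratio_gate(second_box, low=0.2, high=5.0):
    """Identify details of the second object only when the second region's
    aspect ratio (width / height) falls within a specific range; the bounds
    here are illustrative, not from the disclosure."""
    x1, y1, x2, y2 = second_box
    ratio = (x2 - x1) / max(y2 - y1, 1)
    return low <= ratio <= high
```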
The sixth example embodiment of the present disclosure will be described with reference to
The processing unit 65 makes a notification regarding the second object based on the third region 130 obtained from the first region 110 and the second region 120. In a case where the region information satisfies a predetermined condition, the processing unit 65 may be configured to make the notification regarding the second object. The third region 130 is, for example, a region obtained by excluding the part in which the first region 110 and the second region 120 overlap each other, the part of one region that overlaps the other region, or the like. More specifically, it is the part of the second region 120 that remains after excluding its overlap with the first region 110. The notification related to the second object includes various notifications, for example, a call for attention, a request for support, and the like. The target to whom the notification is to be made is a person detected as the first object, a person around the device that has acquired the image 100, or the like. In addition, a notification may be made to a person in a remote place such as a monitoring center.
The predetermined condition is, for example, that the first length 131 of the third region 130 is longer than a threshold value, that the second length 132 of the third region 130 is longer than a threshold value, or the like. The predetermined condition may further include that the area 133 of the third region is larger than a threshold value. Further, the predetermined condition may include that the third region 130 is present at a predetermined position. The predetermined position may be, for example, a specific position on the image, or may be represented by a relative positional relationship with the first object, such as "the third region 130 is located in front of the first object". It may be determined whether results obtained by integrating, adding, subtracting, or averaging a plurality of pieces of region information satisfy the various conditions described above. The threshold value may be a common value or may be different for each determination.
The processing unit 65 may change the content of the notification related to the second object according to the third region 130. Examples of the content of the notification include a length, the number of times, a frequency, an interval, and the like of the notification.
For example, the processing unit 65 may make a notification in a case where it is determined, based on the region information, that the second object is likely to harm the surroundings. Such a determination may be made in a case where the region information satisfies a predetermined condition, for example, in a case where the first length 131 of the third region 130 is longer than a threshold value.
In addition to the predetermined condition, an index value such as a degree of congestion may be considered. For example, the more strongly the degree of congestion indicates a crowded scene, the more readily it is determined that the object is likely to harm the surroundings. The degree of congestion may be calculated based on the number of persons appearing in the image 100, or a degree of congestion calculated by another device may be used. To make the determination more sensitive, the threshold value used in the determination under the predetermined condition may be reduced.
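The congestion-aware determination might be sketched as follows; the linear scaling of the threshold by the degree of congestion is an assumption for illustration.

```python
def harm_likely(first_length, base_threshold, congestion):
    """congestion: degree of congestion in [0.0, 1.0] (0 = empty, 1 = crowded).
    The threshold is lowered as congestion rises, so a protruding object is
    flagged more readily in a crowd."""
    effective_threshold = base_threshold * (1.0 - 0.5 * congestion)
    return first_length > effective_threshold
```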
Furthermore, a flow line of the first object or the second object may be considered. For example, the content of the notification, the condition for making the notification, and the target to whom the notification is to be made may be changed between a case where the third region 130 exists in the traveling direction of the first object and a case where the third region 130 exists in the direction intersecting the traveling direction of the first object. For example, in a case where a second object is present behind the first object, attention is called to the first object, and in a case where a second object is present beside the first object, attention is called to a person around the device that has generated the image 100.
The processing unit 65 may make a notification regarding the second object via various notification devices. The various notification devices are, for example, an audio output device such as a speaker or a display device such as a display. The notification device may be installed in a specific place such as a road, a facility, or a monitoring center, or may be a mobile or portable device. Furthermore, the device may have directivity to transmit information only to a specific person. The various notification devices and the information processing device 60 may be integrally configured.
A series of processes performed by the information processing device 60 of the present example embodiment will be described with reference to a flowchart of
The information processing device 60 obtains region information from the images 100 acquired from various devices (S601 to S603). Then, the processing unit 65 determines whether the first length 131 of the third region 130 is longer than a threshold value based on the region information acquired by the information processing device 60 (S604).
When it is determined in S604 that the first length 131 is longer than the threshold value, the processing unit 65 further determines whether the third region 130 is behind the first object (S605). In a case where it is determined in S605 that the third region 130 is behind the first object, the processing unit 65 makes a notification to the first object (S606), and the flow ends. The notification directed to the first object prompts improvement of manners, for example, “you may cause a danger to the surroundings”.
On the other hand, when it is determined in S605 that the third region 130 is not present behind the first object, the processing unit 65 further determines whether the third region 130 is present in front of the first object (S607). Then, in a case where it is determined in S607 that the third region 130 is present in front of the first object, the processing unit 65 makes a notification to a person around the device that has acquired the image 100 (S608), and the flow ends. The notification to a person around the device that has acquired the image 100 prompts avoidance of danger, for example, “there is a danger nearby”. On the other hand, in a case where it is determined in S604 or S607 that the condition is not satisfied, the flow ends without making the notification.
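The routing in S604 to S608 might be sketched as follows, where the notifier object and the position strings are hypothetical stand-ins for the notification devices and the position information 134.

```python
def notify(position, notifier):
    """Route the notification as in the flow above: the first object is
    warned when the third region is behind it (manner improvement); nearby
    people are warned when it is in front (danger avoidance)."""
    if position == "behind the first object":
        notifier.to_first_object("You may cause a danger to the surroundings.")
    elif position == "in front of the first object":
        notifier.to_surroundings("There is a danger nearby.")
```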
In the flow illustrated in
Next, a usage example of the information processing device 60 of the present example embodiment will be described with reference to
Next, the region analysis unit 63 obtains region information based on the third region 130 obtained from the first region 110 and the second region 120. As illustrated in
Then, the processing unit 65 determines whether the third region 130 satisfies a predetermined condition based on the region information obtained by the region analysis unit 63. The predetermined condition in this use example is that the first length 131 is longer than the threshold value and that the position information 134 is located in front of the first object. As a result of the determination, since the predetermined condition is satisfied, as illustrated in
In the use example illustrated in
As described above, according to the information processing device 60 of the present example embodiment, the processing unit 65 can make a notification related to the second object based on the third region 130. As a result, notification regarding the second object can be efficiently made.
The sixth example embodiment in the present disclosure is not limited to the above-described aspect, but may be configured as follows, for example.
The information processing device 60 may make a notification to a person registered in advance. For example, in a case where the processing unit 65 determines to make a notification, the information processing device 60 makes a notification to a person registered in advance. Notification may be made only to a person satisfying the notification condition from among persons registered in advance.
For example, in a case where the processing unit 65 determines to make a notification based on the image 100, the notification condition may include a condition that the object is present around the device that has generated the image 100. The process of identifying the person registered in advance may be performed in cooperation with the monitoring camera 210 and may be identified by a known face authentication technique. Furthermore, not only the face authentication but also various techniques utilizing position information such as geofencing may be used to identify whether a person registered in advance is present in a surrounding area of the device that has generated the image 100.
The information processing device 60 may store information about the first object or the second object detected when the processing unit 65 determines to make a notification, in association with information about the notification. The information about the notification may include a notification date and time, a notification content, a notification method, and the like. Furthermore, in a case of notifying the same object that has already been stored, the processing unit 65 may make the notification using a method different from the method used in the past. For example, when it is determined that attention should be called to a specific person and there is a history of calling attention to that person through a speaker, another notification device is used instead of the speaker; for example, a security guard in the vicinity is requested to call attention.
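A minimal sketch of switching the notification method based on the stored history; the log format and channel names are illustrative assumptions.

```python
def next_notification_method(person_id, notification_log):
    """Choose a channel different from past ones: if a speaker was already
    used for this person, ask a nearby guard instead."""
    past_methods = notification_log.get(person_id, [])
    return "request_nearby_guard" if "speaker" in past_methods else "speaker"
```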
The processing unit 65 may make a notification regarding the second object in consideration of details of the first object or the second object. More specifically, the content of the notification, the condition for making the notification, and the target to whom the notification is to be made may be changed according to the details of the first object or the second object. For example, different notifications are made between a person who walks with an umbrella protruding forward and a person who walks with a white cane protruding forward. Details of the object are not limited to, for example, a specific name of the object, but may include a shape, a color, a size, and the like. The details of the object may be identified using the detector described above, or may be obtained by another method. According to this, a more detailed notification regarding the second object can be made.
(Hardware Configuration)
In each example embodiment of the present disclosure, each component of each device indicates a block of a functional unit. Part or all of each component of each device is achieved by, for example, any combination of an information processing device 500 and a program as illustrated in
Each component of each device in the respective example embodiments is achieved by the CPU 501 acquiring and executing the program 504 for achieving these functions. More specifically, each component is implemented by the CPU 501 executing various programs, such as a program for acquiring the image 100, a program for detecting the first region 110 including the first object and the second region 120 including the second object from the image 100, and a program for performing a process related to the second object based on the third region 130 obtained from the first region 110 and the second region 120, and by updating various parameters held in the RAM 503, the storage device 505, and the like.
The program 504 for achieving the function of each component of each device is stored in the storage device 505 or the ROM 502 in advance, for example, and is read by the CPU 501 as necessary. The program 504 may be supplied to the CPU 501 via the communication network 509. The drive device 507 may read a program stored in advance in the recording medium 506 and supply the program to the CPU 501.
The program 504 can display the progress of the process or the processing result via the output device. Furthermore, it is possible to communicate with an external device via a communication interface. The program 504 can be recorded in a computer-readable (non-transitory) storage medium.
Each device is not limited to having the above-described configuration, but can be achieved by various configurations. For example, each device may be achieved by combining, in any manner, information processing devices 500 and programs 504 having different configurations. A plurality of components included in each device may be achieved by any combination of one information processing device 500 and programs 504.
Part or all of each component of each device may be achieved by other general-purpose or dedicated circuits, processors, or the like, or a combination thereof. These may be configured by a single chip or by a plurality of chips connected via a bus 511.
Part or all of each component of each device may be achieved by a combination of the above-described circuit or the like and the program.
In a case where part or all of each component of each device is achieved by a plurality of information processing devices, circuits, and the like, the plurality of information processing devices, circuits, and the like may be disposed in a centralized manner or in a distributed manner. For example, the information processing devices, the circuits, and the like may be achieved as a form of an information processing system, such as a client-server system or a cloud computing system, in which they are connected to each other via the communication network 509.
In a case where they are achieved as the form of the information processing system, for example, one or a plurality of information processing devices may include an image acquisition means configured to acquire the image 100, an object detection means configured to detect the first region 110 including the first object and the second region 120 including the second object from the image 100, and a processing means configured to perform a process related to the second object based on the third region 130 obtained from the first region 110 and the second region 120. Not limited thereto, part or all of the image acquisition means, the object detection means, and the processing means may be configured as an imaging device, a display device, an edge terminal, or the like. For example, the image acquisition means is configured as the imaging device, the object detection means is configured as the information processing device 500, and the processing means is configured as the display device. Then, the imaging device, the information processing device 500, and the display device may be connected by using the communication network 509, the bus 511, or the like to be achieved as the information processing system.
Each of the above-described example embodiments is a preferred example embodiment of the present disclosure, and the scope of the present disclosure is not limited only to each of the above-described example embodiments. That is, it is possible for those skilled in the art to make modifications and substitutions of the above-described example embodiments without departing from the gist of the present disclosure, and to construct a mode in which various modifications are made.
Some or all of the above example embodiments may be described as the following Supplementary Notes, but are not limited to the following.
An information processing device including
The information processing device according to Supplementary Note 1, wherein
The information processing device according to Supplementary Note 1 or 2, wherein
The information processing device according to Supplementary Note 3, wherein
The information processing device according to Supplementary Note 4, wherein
The information processing device according to Supplementary Note 5, wherein
The information processing device according to any one of Supplementary Notes 3 to 6, wherein
The information processing device according to Supplementary Note 7, wherein
The information processing device according to any one of Supplementary Notes 3 to 8, wherein
The information processing device according to Supplementary Note 9, wherein
The information processing device according to Supplementary Note 10, wherein
The information processing device according to any one of Supplementary Notes 3 to 11, wherein
The information processing device according to Supplementary Note 12, wherein
The information processing device according to any one of Supplementary Notes 1 to 13, wherein
The information processing device according to any one of Supplementary Notes 3 to 14, wherein
The information processing device according to Supplementary Note 14, wherein
The information processing device according to any one of Supplementary Notes 3 to 16, wherein
The information processing device according to any one of Supplementary Notes 3 to 16, wherein
An information processing system including
An information processing method including acquiring an image,
The forms of the Supplementary Notes 19 to 20 can be expanded to the forms of the Supplementary Notes 2 to 18, as in the Supplementary Note 1.
The previous description of embodiments is provided to enable a person skilled in the art to make and use the present invention. Moreover, various modifications to these example embodiments will be readily apparent to those skilled in the art, and the generic principles and specific examples defined herein may be applied to other embodiments without the use of inventive faculty. Therefore, the present invention is not intended to be limited to the example embodiments described herein but is to be accorded the widest scope as defined by the limitations of the claims and equivalents.
Further, it is noted that the inventor's intent is to retain all equivalents of the claimed invention even if the claims are amended during prosecution.