The present invention relates to an image processing apparatus, an image processing method, and a program.
In recent years, image processing is used for various purposes. For example, Patent Document 1 describes a system in which a reaching range of a droplet from a first target person is deviated from a respiratory area of a second target person by adjusting an environment within a space, wherein a position and an orientation of a face of each of the first target person and the second target person are determined by image processing.
In order to suppress spread of an infectious disease, it is preferable to appropriately take a measure for securing a person-to-person distance, such as, for example, “securing an interval between seats”, “limiting the number of visitors”, or “increasing a distance between persons in line”, at a place (such as a store, an institution, or a facility) where people gather. Recognizing a place where such a measure is not appropriately taken makes it possible to take various measures for suppressing spread of an infectious disease, such as “avoiding the place”, “encouraging improvement at the place”, or “ceasing operation at the place”.
Meanwhile, it is difficult to thoroughly take a measure for securing a distance also for a group such as family members or lovers under authority of an administrator of each place. Therefore, it is not reasonable to evaluate that a measure for suppressing spread of an infectious disease is not appropriately taken at such a place, based on that a distance between companions is not secured.
In view of the circumstances described above, it is difficult to recognize whether a measure for suppressing spread of an infectious disease is appropriately taken at each place. One object of the present invention is to make it easy to recognize whether a measure for suppressing spread of an infectious disease is appropriately taken at each place.
The present invention provides an image processing apparatus including:
a companion determination unit that determines a companion of each of a plurality of persons detected from an image, based on the image;
a distance computation unit that computes a first distance being a distance to a nearest person among the persons other than a companion, for each of the persons; and
a risk information generation unit that generates, with use of the first distance, infection risk information being information related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region being a region included in the image.
Further, the present invention provides an image processing method including,
by a computer:
determining a companion of each of a plurality of persons detected from an image, based on the image;
computing a first distance being a distance to a nearest person among the persons other than a companion, for each of the persons; and
generating, with use of the first distance, infection risk information being information related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region being a region included in the image.
Further, the present invention provides a program causing a computer to function as:
a companion determination unit that determines a companion of each of a plurality of persons detected from an image, based on the image;
a distance computation unit that computes a first distance being a distance to a nearest person among the persons other than a companion, for each of the persons; and
a risk information generation unit that generates, with use of the first distance, infection risk information being information related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region being a region included in the image.
The present invention makes it easy to recognize whether a measure for suppressing spread of an infectious disease is appropriately taken at each place.
Hereinafter, an example embodiment according to the present invention is described with reference to the drawings. Note that, in all drawings, a similar constituent element is indicated by a similar reference sign, and description thereof is omitted as necessary.
The image capturing apparatus 20 is, for example, a fixed camera, and repeatedly photographs a region (hereinafter, described as a target region) where a plurality of persons, for example, an unspecified large number of persons come and go. Therefore, an image to be generated by the image capturing apparatus 20 includes a plurality of persons. The image capturing apparatus 20 may also be a surveillance camera installed, for example, in a store, an institution, a facility, or the like. A frame rate of an image to be generated by the image capturing apparatus 20 is optional, but may be, for example, a frame rate at which a moving image is composed. As one example, the image capturing apparatus 20 is communicably connected to the image processing apparatus 10, and can transmit a generated image to the image processing apparatus 10.
The image processing apparatus 10 computes, by processing an image generated by the image capturing apparatus 20, a distance between persons within a target region, specifically, a distance (hereinafter, described as a first distance) between a certain person (hereinafter, described as a person as a reference), and a person nearest to the certain person. Further, the image processing apparatus 10 generates, with use of the first distance, information (hereinafter, infection risk information) related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region. The infection risk information becomes an index as to whether a measure for suppressing spread of an infectious disease is appropriately taken within the target region.
In the example illustrated in
The image acquisition unit 11 acquires an image including a plurality of persons generated by the image capturing apparatus 20. The image acquisition unit 11 causes the storage unit 16 to store an acquired image. In a case where the image processing apparatus 10 and the image capturing apparatus 20 are communicably connected to each other, the image acquisition unit 11 can receive an image from the image capturing apparatus 20. In addition to the above, an image generated by the image capturing apparatus 20 may be stored in any storage apparatus. Further, the image acquisition unit 11 may acquire an image stored in the storage apparatus at any timing.
The image acquisition unit 11 may acquire an image generated by the image capturing apparatus 20 by batch processing, or may acquire it by real-time processing. In a case of performing acquisition and analysis of an image by batch processing, it is possible to comprehensively evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken within each target region, based on an image for a predetermined time period (example: for one hour, for one day, for one week, or the like). On the other hand, in a case of performing acquisition and analysis of an image by real-time processing, it is possible to evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken within each target region at a timing at which a latest image is acquired.
The companion determination unit 12 determines, by processing an image acquired by the image acquisition unit 11, a companion of each of a plurality of persons detected from the image. Since a technique for detecting a person from an image is widely known, description thereof is omitted herein. Hereinafter, one example of a method of determining a companion of each person is described.
The companion determination unit 12 determines, based on a duration of a time of a state in which a distance between any person (hereinafter, described as a first person) and another person among a plurality of detected persons is equal to or less than a threshold value, a companion of the first person. Specifically, the companion determination unit 12 determines that another person whose duration of time is equal to or more than a threshold value is a companion of the first person. A specific example of a distance computation method based on an image is described later.
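The proximity-duration determination described above can be sketched as follows. This is a simplified illustration, not part of the present disclosure: the input format (per-frame positions in metres, keyed by person identifier), the function name, and the threshold values are all assumptions.

```python
from itertools import combinations

def determine_companions(tracks, frame_interval, distance_threshold, duration_threshold):
    """Determine companion pairs from per-frame positions.

    tracks: dict mapping person_id -> dict of frame_index -> (x, y)
            position in metres (hypothetical input format).
    frame_interval: seconds between consecutive frames.
    Returns the set of (id_a, id_b) pairs judged to be companions:
    pairs whose time spent within distance_threshold of each other
    reaches duration_threshold seconds.
    """
    companions = set()
    for a, b in combinations(tracks, 2):
        shared = tracks[a].keys() & tracks[b].keys()
        close_frames = sum(
            1 for f in shared
            if ((tracks[a][f][0] - tracks[b][f][0]) ** 2
                + (tracks[a][f][1] - tracks[b][f][1]) ** 2) ** 0.5
            <= distance_threshold
        )
        if close_frames * frame_interval >= duration_threshold:
            companions.add((a, b))
    return companions
```

The duration is approximated here as the number of close frames multiplied by the frame interval; an actual implementation could instead require the close frames to be consecutive.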
The companion determination unit 12 determines a companion of the first person, based on lines of sight of the first person and another person among a plurality of detected persons. Specifically, the companion determination unit 12 determines a companion of the first person, based on at least either one of a duration of time of a line-of-sight state in which lines of sight of the first person and another person are directed toward each other (looking at each other), and a frequency of occurrence of the line-of-sight state within a predetermined time. The frequency is “the number of times”, “a ratio of a time during which the line-of-sight state is attained with respect to a predetermined time”, or the like.
More specifically, the companion determination unit 12 determines that another person whose duration of time of a line-of-sight state in which lines of sight of the first person and the another person are directed toward each other (looking at each other) is equal to or more than a threshold value is a companion of the first person. Further, the companion determination unit 12 determines that another person whose frequency of occurrence of the line-of-sight state with respect to the first person is equal to or more than a threshold value within a predetermined time is a companion of the first person.
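The line-of-sight determination of the second example can be sketched as follows, assuming that per-frame gaze directions and positions have already been estimated by some upstream processing. The function name, input format, and cosine threshold are illustrative assumptions.

```python
def mutual_gaze_frames(gaze_a, gaze_b, pos_a, pos_b, cos_threshold=0.9):
    """Count frames in which two persons look toward each other.

    gaze_a, gaze_b: per-frame unit gaze direction vectors (dx, dy).
    pos_a, pos_b: per-frame positions (x, y).
    A frame counts when each person's gaze roughly points at the
    other, i.e., the cosine similarity between the gaze vector and
    the inter-person direction is above the threshold.
    """
    count = 0
    for ga, gb, pa, pb in zip(gaze_a, gaze_b, pos_a, pos_b):
        dx, dy = pb[0] - pa[0], pb[1] - pa[1]
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0:
            continue
        ux, uy = dx / norm, dy / norm          # unit direction from a to b
        toward_b = ga[0] * ux + ga[1] * uy     # is a looking at b?
        toward_a = -(gb[0] * ux + gb[1] * uy)  # is b looking at a?
        if toward_b >= cos_threshold and toward_a >= cos_threshold:
            count += 1
    return count
```

The returned count, multiplied by the frame interval, gives the duration of the line-of-sight state; dividing it by the number of frames in a predetermined time gives the frequency described above.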
The companion determination unit 12 determines a companion of the first person, based on a physical contact status of the first person and another person among a plurality of detected persons. For example, the companion determination unit 12 may determine that another person who satisfies one or any plurality of the following conditions is a companion of the first person.
In addition to the determination references of the first to third examples, the companion determination unit 12 determines a companion of the first person, based on attributes of the first person and another person among a plurality of detected persons. The attribute is estimable from an image, and is, for example, “a type selected from among an adult, a child, and an old person”, “age”, or “gender”. The companion determination unit 12 determines that another person who satisfies the determination references of the first to third examples, and satisfies a condition regarding an attribute relationship with respect to the first person, is a companion of the first person. For example, in a case where the first person is an “adult”, a condition of a companion is a “child”.
The companion determination unit 12 determines that another person who satisfies any plurality of determination references among determination references of the first to fourth examples is a companion of the first person.
The distance computation unit 13 computes the first distance being a distance to a nearest person among persons other than a companion, for each detected person. A specific example of a first distance computation method is described later.
The risk information generation unit 14 generates, with use of the first distance, infection risk information within a target region being a photographing target of the image capturing apparatus 20. As one example, the risk information generation unit 14 determines whether the first distance is equal to or less than a reference value, and generates infection risk information by using the determination result. The reference value is defined based on a so-called social distance. The social distance is a physical distance that should be secured between persons adjacent to each other in order to prevent infection of an infectious disease. Further, magnitude of the reference value is set based on a main infection route of a target infectious disease. For example, a value being equal to or more than 1.5 m and being equal to or less than 6 m is used as a reference value for an infectious disease in which droplet infection is a main cause. Further, regarding an infectious disease in which contact infection is a main cause, a value being equal to or more than 50 cm and being equal to or less than 1.5 m is used as a reference value. The risk information generation unit 14 can store, in the storage unit 16, generated infection risk information in association with information indicating each target region. The storage unit 16 may further store each target region and information for identifying a place (such as a store, an institution, or a facility) including each target region in association with each other.
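The selection of a reference value by infection route can be sketched as follows. The concrete values chosen here (2.0 m and 1.0 m) are hypothetical picks from within the ranges stated above, not values fixed by the present disclosure.

```python
# Hypothetical mapping from the main infection route of the target
# infectious disease to a reference value in metres.
REFERENCE_VALUES = {
    "droplet": 2.0,  # any value in the 1.5 m to 6 m range could be chosen
    "contact": 1.0,  # any value in the 0.5 m to 1.5 m range could be chosen
}

def below_reference(first_distance, infection_route):
    """Return True when the first distance is equal to or less than
    the reference value for the given infection route."""
    return first_distance <= REFERENCE_VALUES[infection_route]
```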
Note that, infection risk information indicates, for example, a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region. Such infection risk information becomes an index as to whether a measure for suppressing spread of an infectious disease is appropriately taken within the target region. For example, the following methods are cited as a method of generating infection risk information from the above-described determination result.
The risk information generation unit 14 computes, for each image, the number of combinations of persons in which the first distance becomes equal to or less than a reference value, and increases a risk indicated by infection risk information, as the number increases. Using this method enables the risk information generation unit 14 to generate infection risk information for each image. Specifically, it is possible to evaluate, based on a content represented by each image, whether a measure for suppressing spread of an infectious disease is appropriately taken at a point of time when each image is generated.
The risk information generation unit 14 derives the number of combinations of persons in which the first distance becomes equal to or less than a reference value, for each image group for a unit time period, and increases a risk indicated by infection risk information, as a statistical value (such as an average value, a maximum value, a minimum value, a mode, or a median) thereof increases. Using this method makes it possible to comprehensively evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken within each target region, based on the entirety of a plurality of images generated for a predetermined time period (example: thirty minutes, one hour, one day, or one week).
The risk information generation unit 14 derives the number of combinations of persons in which the first distance becomes equal to or less than a reference value, for each image group for a unit time period, and increases a risk indicated by infection risk information, as the most recent number per unit time increases. Using this method makes it possible to evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken at a point of time when the image group is generated, based on a content represented by a predetermined number of most recent images.
The risk information generation unit 14 derives “the number of combinations of persons in which a state that the first distance is equal to or less than a reference value is continued for a reference period of time or longer”, in place of “the number of combinations of persons in which the first distance becomes equal to or less than a reference value” in the method 2. It is assumed that other conditions are similar to those of the method 2. Using this method makes it possible to comprehensively evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken within each target region, based on the entirety of a plurality of images generated for a predetermined time period (example: thirty minutes, one hour, one day, or one week).
The risk information generation unit 14 derives “the number of combinations of persons in which a state that the first distance is equal to or less than a reference value is continued for a reference period of time or longer”, in place of “the number of combinations of persons in which the first distance becomes equal to or less than a reference value” in the method 3. It is assumed that other conditions are similar to those of the method 3. Using this method makes it possible to evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken at a point of time when the image group is generated, based on a content represented by a predetermined number of most recent images.
In the methods 2 to 5, the risk information generation unit 14 computes the above-described number of combinations per unit time and per unit area, and increases a risk indicated by infection risk information, as the number increases.
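The counting and aggregation common to the methods above can be sketched as follows. The input format (a list of per-pair distances for one image, companions already excluded) and the choice of the average as the statistical value are assumptions for illustration.

```python
from statistics import mean

def pairs_below_reference(first_distances, reference_value):
    """Method 1 style: per-image count of person pairs whose first
    distance is equal to or less than the reference value.

    first_distances: list of (person_a, person_b, distance) tuples
    for one image, with companion pairs already excluded
    (hypothetical input format).
    """
    return sum(1 for _, _, d in first_distances if d <= reference_value)

def risk_over_period(per_image_counts):
    """Method 2 style: aggregate per-image counts over a unit time
    period into one statistical value; a larger value corresponds to
    a higher risk indicated by infection risk information."""
    return mean(per_image_counts)
```

The maximum, minimum, mode, or median mentioned above could be substituted for `mean` without changing the structure.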
Note that, in the methods 2 to 6, the risk information generation unit 14 uses a processing result on a plurality of images generated at different timings. Further, it is possible to determine a same person being present redundantly within different images by using a feature value (such as face information) of an external appearance or position information of a person, and to distinguish a combination with respect to an already detected person from a combination with respect to a newly detected person.
Note that, the risk information generation unit 14 may use, as infection risk information, the fact itself that the first distance becomes equal to or less than a reference value.
The output unit 15 outputs various pieces of information stored in the storage unit 16. The output unit 15 may display infection risk information on a display apparatus such as a display or a projection apparatus. In addition to the above, the output unit 15 may store the infection risk information in a predetermined web server, and make the infection risk information accessible to an unspecified large number of persons via a communication network such as the Internet. Another example of information to be output from the output unit 15 is described later.
The bus 1010 is a data transmission path along which the processor 1020, the memory 1030, the storage device 1040, the input/output interface 1050, and the network interface 1060 mutually transmit and receive data. However, a method of mutually connecting the processor 1020 and the like is not limited to bus connection.
The processor 1020 is a processor to be achieved by a central processing unit (CPU), a graphics processing unit (GPU), or the like.
The memory 1030 is a main storage apparatus to be achieved by a random access memory (RAM) or the like.
The storage device 1040 is an auxiliary storage apparatus to be achieved by a hard disk drive (HDD), a solid state drive (SSD), a memory card, a read only memory (ROM), or the like. The storage device 1040 stores a program module achieving each function (e.g., the image acquisition unit 11, the companion determination unit 12, the distance computation unit 13, the risk information generation unit 14, and the output unit 15) of the image processing apparatus 10. The processor 1020 achieves each function associated with the program module by reading each program module into the memory 1030 and executing it. Further, the storage device 1040 also functions as the storage unit 16.
The input/output interface 1050 is an interface for connecting the image processing apparatus 10 and various pieces of input/output equipment with each other.
The network interface 1060 is an interface for connecting the image processing apparatus 10 to a network. The network is, for example, a local area network (LAN) or a wide area network (WAN). A method of connecting the network interface 1060 to a network may be wireless connection, or may be wired connection. The image processing apparatus 10 may communicate with the image capturing apparatus 20 via the network interface 1060.
First, the image acquisition unit 11 acquires a plurality of images for a predetermined time period (example: thirty minutes, one hour, one day, one week, or the like) being a processing target (S10). Next, the companion determination unit 12 analyzes the plurality of images for the predetermined time period, detects a person included within the images, and also determines a companion for each person (S20). Then, the companion determination unit 12 registers information indicating a companion in association with each of the detected persons. The companion determination unit 12 can determine a same person being present redundantly within different images by using, for example, a feature value of an external appearance or position information of a person. Then, the companion determination unit 12 can identify the plurality of detected persons from one another by using, for example, the feature value of the external appearance. Further, the companion determination unit 12 can determine a companion of each person, for example, based on any of the above-described first to fifth examples.
When the companion determination unit 12 finishes an analysis of all of the plurality of images for the predetermined time period, the distance computation unit 13 analyzes each of the plurality of images for the predetermined time period, and computes a distance (first distance) to a nearest person among the persons other than a companion, for each person (S30). The distance computation unit 13 can recognize a companion of each person, based on a determination result in S20. In computation in S30, the distance computation unit 13 computes the first distance by using a height and a position of a person being a distance computation target within an image, and an upward/downward orientation of the image capturing apparatus 20 that has generated the image. At this occasion, as will be described later in detail, the distance computation unit 13 uses a value (hereinafter, described as a reference height) set in advance as a height of a person.
When the distance computation unit 13 finishes an analysis of all of the plurality of images for the predetermined time period, the risk information generation unit 14 generates infection risk information by using the first distance computed in S30 (S40). The risk information generation unit 14 generates infection risk information, for example, based on any of the above-described methods 1 to 6. In a case of the example being batch processing, for example, the risk information generation unit 14 may adopt any of the methods 2, 4, and 6, and comprehensively evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken within each target region, based on the entirety of a plurality of images generated for a predetermined time period. Note that, the risk information generation unit 14 may adopt any of the methods 1, 3, 5, and 6, and evaluate a status at each time.
Thereafter, the output unit 15 outputs infection risk information generated in S40 (S50). In this case, the output unit 15 may determine a target region where a risk of getting an infectious disease indicated by infection risk information is higher than a predetermined level (a target region where safety is lower than a predetermined level), and output information (such as a list of a determined target region) indicating the determined target region. Further, the output unit 15 may output information (such as a list of a determined place) indicating a place including the determined target region. As described above, it is possible to determine a place including a determined target region by associating a target region to be captured by each image capturing apparatus 20 and information for identifying a place (such as a store, an institution, or a facility) including each target region with each other, and using the information.
First, the image acquisition unit 11 acquires one image being a processing target (S10). Next, the companion determination unit 12 detects a person included within a newly acquired image, and also determines a companion for each person, based on the image or an image acquired so far (S20). Then, the companion determination unit 12 registers information indicating a companion in association with each of the detected persons. The companion determination unit 12 can determine a same person being present redundantly within different images by using, for example, a feature value of an external appearance or position information of a person. Then, the companion determination unit 12 can identify the plurality of detected persons from one another by using, for example, the feature value of the external appearance. Further, the companion determination unit 12 can determine a companion of each person, for example, based on any of the above-described first to fifth examples.
Subsequently, the distance computation unit 13 analyzes a newly acquired image, and computes a distance (first distance) to a nearest person among the persons other than a companion, for each person (S30). The distance computation unit 13 can recognize a companion of each person, based on a result determined in S20 up to the point of time. In computation in S30, the distance computation unit 13 computes the first distance by using a height and a position of a person being a distance computation target within an image, and an upward/downward orientation of the image capturing apparatus 20 that has generated the image. At this occasion, as will be described later in detail, the distance computation unit 13 uses a value (hereinafter, described as a reference height) set in advance as a height of a person.
Subsequently, the risk information generation unit 14 generates infection risk information by using the first distance computed in S30 (S40). The risk information generation unit 14 generates infection risk information, for example, based on any of the above-described methods 1 to 6. In a case of the example being real-time processing, the risk information generation unit 14 may adopt any of the methods 1, 3, 5, and 6, and evaluate a status at each time. Note that, the risk information generation unit 14 may adopt any of the methods 2, 4, and 6, and comprehensively evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken within each target region, based on the entirety of a plurality of images generated for a predetermined time period up to the point of time.
Thereafter, the output unit 15 outputs infection risk information generated in S40 (S50). Then, thereafter, the image processing apparatus 10 repeats pieces of processing from S10 to S50.
First, the distance computation unit 13 computes a height t of a person as a reference, or a person located around the person within an image. Herein, t is represented, for example, by the number of pixels. Subsequently, the distance computation unit 13 computes a distance d from the person as a reference to the person located around the person within the image. Herein, d is represented by the same unit (e.g., the number of pixels) as t. Subsequently, the distance computation unit 13 computes d/t, and computes a distance between the person as a reference and the person located around the person by multiplying the value d/t by the above-described reference height.
In a case where there is only one another person around a person as a reference, a distance computed for the another person becomes the first distance. Further, in a case where there are a plurality of other persons, the above-described distance is computed for each of these plurality of persons, and a minimum value of the distances becomes the first distance.
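The computation described above, including taking the minimum over a plurality of surrounding persons, can be sketched as follows. The function name and input format are assumptions; heights and positions are in pixels, and the reference height is in metres.

```python
def first_distance(reference, others, reference_height_m):
    """Compute the first distance in metres from pixel measurements.

    reference: (height_px, position) of the person as a reference,
    where height_px is the person's on-image height t in pixels and
    position is the (x, y) pixel position.
    others: the same tuples for surrounding non-companion persons.
    The on-image distance d (pixels) is converted to metres by the
    d / t ratio multiplied by the reference height, as described above.
    """
    t, (rx, ry) = reference
    distances = []
    for _, (ox, oy) in others:
        d = ((ox - rx) ** 2 + (oy - ry) ** 2) ** 0.5  # pixel distance
        distances.append(d / t * reference_height_m)
    return min(distances)  # minimum over surrounding persons
```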
Note that, as described above, a reference height is set in advance. The reference height may be changed according to a place (e.g., a country) where the image capturing apparatus 20 is installed. For example, an average height of adults in a country where the target image capturing apparatus 20 is installed is used as the reference height. As a specific example of processing, the storage unit 16 stores information for determining a reference height, for each piece of image capturing apparatus identification information. Further, the distance computation unit 13 acquires image capturing apparatus identification information of the image capturing apparatus 20 that has generated an image being a processing target, and uses a reference height associated with the image capturing apparatus identification information by reading the reference height from the storage unit 16.
Further, in a case where an attribute (e.g., at least either one of gender and an age group) of a person being a computation target of the height t can be estimated by image processing, the distance computation unit 13 may change the reference height according to the attribute.
Note that, in most of images, a distortion specific to the image capturing apparatus 20 that has generated the image occurs. When computing the first distance, it is preferable that the distance computation unit 13 performs processing of correcting the distortion. The distance computation unit 13 performs distortion correction processing according to a position of a person within an image. Generally, a distortion of an image results from, for example, an optical system (e.g., a lens) included in the image capturing apparatus 20, and an upward/downward orientation (e.g., an angle with respect to a horizontal plane) of the image capturing apparatus 20. In view of the above, a content of distortion correction processing according to a position of a person within an image is set according to an optical system (e.g., a lens) included in the image capturing apparatus 20, and an upward/downward orientation of the image capturing apparatus 20.
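A position-dependent correction of this kind can be sketched with a one-parameter radial model, as follows. The image centre and coefficient are hypothetical calibration values; an actual system would use a full calibration of the optical system and the upward/downward orientation of the image capturing apparatus 20, but the sketch illustrates the key property that points farther from the centre are shifted more.

```python
def correct_radial_distortion(x, y, cx, cy, k1):
    """Correct a person's pixel position with a simplified radial model.

    (cx, cy): image centre; k1: radial coefficient of the optical
    system (hypothetical calibration values).  The correction scale
    grows with the squared distance from the centre, so the amount of
    correction depends on the position of the person within the image.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```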
Note that, in processing described by using
In the example illustrated in
Note that, in a case where an image to be displayed is a moving image, as illustrated in
In the example illustrated in
Displays illustrated in
The second distance is a distance between a person as a reference and a second nearest person to the person. A second distance computation method is similar to the first distance computation method except for a point that a distance to a second nearest person is selected, instead of a distance to a nearest person. Further, when infection risk information is generated, the risk information generation unit 14 increases a risk (lowers a safety rate), as the second distance decreases. Note that, the distance computation unit 13 may further compute a distance (third distance) between a person as a reference and a third nearest person to the person. In this case, the risk information generation unit 14 generates infection risk information, further with use of the third distance.
Specifically, S10 to S30 are similar to the example illustrated in
“The orientation of a face of a person” includes at least either one of an orientation of a face of a person as a reference, and an orientation of a face of a nearest person to the person. Further, the risk information generation unit 14 increases a risk (lowers a safety rate) indicated by infection risk information, as a face of a person approaches a direction facing a partner. Herein, in a case where the second distance or the third distance is used, the risk information generation unit 14 may further use an orientation of a face of a person being a partner when the second distance is computed, or an orientation of a face of a person being a partner when the third distance is computed.
“The presence or absence of a wearable object for a face” includes at least either one of presence or absence of a wearable object for a person as a reference, and presence or absence of a wearable object for a nearest person to the person. Further, in a case where a wearable object of a specific type is detected, the risk information generation unit 14 lowers a risk (raises a safety rate) indicated by infection risk information, as compared with a case other than the above. Herein, a wearable object of a specific type is an object for covering at least either one of a mouth and a nose (preferably, both), for example, a mask or a muffler. Herein, in a case where the second distance or the third distance is used, the risk information generation unit 14 may further perform a similar operation for a person being a partner when the second distance is computed, or a person being a partner when the third distance is computed.
“The motion of a mouth” refers to a state in which at least the mouth is moving. In a case where the mouth is moving, it is highly likely that the person is talking. In view of the above, in a case where the mouth of at least one of a person serving as a reference and the nearest person to that person is moving, the risk information generation unit 14 increases the risk (lowers the safety rate) indicated by the infection risk information, as compared with a case where it is not. Herein, in a case where the second distance or the third distance is used, the risk information generation unit 14 may further use the motion of the mouth of the person who is the partner when the second distance is computed, or of the person who is the partner when the third distance is computed.
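The adjustments described above (first distance, face orientation, wearable object, and mouth motion) can be combined into a single score. The following is a toy sketch with illustrative weights and distance scale; it is an assumption for exposition, not the scoring actually used by the risk information generation unit 14:

```python
def infection_risk(first_distance, facing_partner=False,
                   wearing_mask=False, mouth_moving=False,
                   base_scale=2.0):
    """Toy risk score in [0, 1]: higher means higher infection risk.

    The score rises as the first distance shrinks, rises when faces
    are directed toward each other or a mouth is moving, and falls
    when a mask (or similar covering) is detected.
    """
    # Risk grows as distance decreases (clamped to avoid division by ~0).
    risk = min(1.0, base_scale / max(first_distance, 0.1))
    if facing_partner:
        risk = min(1.0, risk * 1.5)   # faces approach a facing direction
    if mouth_moving:
        risk = min(1.0, risk * 1.3)   # likely talking
    if wearing_mask:
        risk *= 0.5                   # mouth/nose covered lowers risk
    return risk
```

With these illustrative weights, a person 4 units from their nearest neighbor scores 0.5, which drops to 0.25 when a mask is detected; the same additional factors could be applied to the partners selected by the second and third distances.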
As described above, according to the present example embodiment, the image processing apparatus 10 acquires and processes an image generated by the image capturing apparatus 20, specifically an image including a plurality of persons, and computes, for at least some of the plurality of persons, the distance (first distance) to the nearest person. The image processing apparatus 10 then generates, with use of the first distance, infection risk information within a target region being a photographing target of the image capturing apparatus 20. Therefore, it is possible to recognize whether a measure for suppressing spread of an infectious disease is appropriately taken within the target region.
Meanwhile, it is difficult for an administrator of each place to thoroughly enforce a measure for securing a distance within a group such as family members or a couple. Therefore, it is not reasonable to evaluate that a measure for suppressing spread of an infectious disease is not appropriately taken at such a place on the ground that a distance between companions is not secured. In contrast, the image processing apparatus 10 determines a companion by analyzing an image, and can generate infection risk information within each target region without treating an unsecured distance between companions as a risk. Consequently, it is possible to appropriately evaluate whether a measure for suppressing spread of an infectious disease is appropriately taken at each place, while suppressing the above-described inconvenience.
As described above, while the example embodiment according to the present invention has been described with reference to the drawings, the example embodiment is an example of the present invention, and various configurations other than the above can also be adopted.
Further, in a plurality of flowcharts used in the above description, a plurality of processes (pieces of processing) are described in order, but an order of execution of processes to be performed in each example embodiment is not limited to the order of description. In each example embodiment, the order of illustrated processes can be changed within a range that does not adversely affect a content. Further, the above-described example embodiments can be combined, as far as contents do not conflict with each other.
A part or all of the above-described example embodiment may also be described as the following supplementary notes, but is not limited to the following.
1. An image processing apparatus including:
a companion determination unit that determines a companion of each of a plurality of persons detected from an image, based on the image;
a distance computation unit that computes a first distance being a distance to a nearest person among the persons other than a companion, for each of the persons; and
a risk information generation unit that generates, with use of the first distance, infection risk information being information related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region being a region included in the image.
2. The image processing apparatus according to supplementary note 1, wherein
the companion determination unit determines a companion of a first person, based on a duration of time of a state in which a distance between the first person and another person among a plurality of the persons is equal to or less than a threshold value.
3. The image processing apparatus according to supplementary note 1 or 2, wherein
the companion determination unit determines a companion of a first person, based on lines of sight of the first person and another person among a plurality of the persons.
4. The image processing apparatus according to supplementary note 3, wherein
the companion determination unit determines a companion of the first person, based on at least either one of a duration of time of a line-of-sight state in which lines of sight of the first person and the another person are directed toward each other, and a frequency of occurrence of the line-of-sight state within a predetermined time.
5. The image processing apparatus according to any one of supplementary notes 1 to 4, wherein
the companion determination unit determines a companion of a first person, based on a physical contact status of the first person and another person among a plurality of the persons.
6. The image processing apparatus according to any one of supplementary notes 2 to 5, wherein
the companion determination unit determines a companion of a first person, based on attributes of the first person and another person among a plurality of the persons.
7. The image processing apparatus according to any one of supplementary notes 1 to 6, wherein
the distance computation unit computes a second distance being a distance to a second nearest person among the persons other than a companion, for each of the persons, and
the risk information generation unit generates the infection risk information, further with use of the second distance.
8. An image processing method including,
by a computer:
determining a companion of each of a plurality of persons detected from an image, based on the image;
computing a first distance being a distance to a nearest person among the persons other than a companion, for each of the persons; and
generating, with use of the first distance, infection risk information being information related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region being a region included in the image.
9. A program causing a computer to function as:
a companion determination unit that determines a companion of each of a plurality of persons detected from an image, based on the image;
a distance computation unit that computes a first distance being a distance to a nearest person among the persons other than a companion, for each of the persons; and
a risk information generation unit that generates, with use of the first distance, infection risk information being information related to a risk of getting an infectious disease or a safety rate of not getting an infectious disease within a target region being a region included in the image.
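Supplementary notes 1 and 2 can be sketched together as follows. This assumes per-frame tracked positions are already available for each person; using a simple frame count as a proxy for the “duration of time,” and all thresholds, are illustrative assumptions:

```python
import math

def companions_by_duration(tracks, dist_threshold=1.0, min_frames=30):
    """Supplementary note 2 sketch: two persons whose distance stays at
    or below a threshold for enough frames are treated as companions.

    tracks: {person_id: [(x, y), ...]} positions per frame (same length).
    Returns {person_id: set of companion ids}.
    """
    ids = list(tracks)
    companions = {pid: set() for pid in ids}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Count frames in which the pair is within the threshold.
            close = sum(
                1 for p, q in zip(tracks[a], tracks[b])
                if math.dist(p, q) <= dist_threshold
            )
            if close >= min_frames:
                companions[a].add(b)
                companions[b].add(a)
    return companions

def first_distance_excluding_companions(positions, companions, pid):
    """Supplementary note 1 sketch: distance to the nearest person
    who is not a companion of person pid (None if no such person)."""
    dists = [
        math.dist(positions[pid], positions[other])
        for other in positions
        if other != pid and other not in companions[pid]
    ]
    return min(dists) if dists else None
```

For example, two persons who stay 0.5 units apart over 40 frames are marked as companions, so the first distance of either one is measured to the next non-companion person instead.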
This application is based upon and claims the benefit of priority from Japanese patent application No. 2020-098403, filed on Jun. 5, 2020, the disclosure of which is incorporated herein in its entirety by reference.
Number | Date | Country | Kind
---|---|---|---
2020-098403 | Jun 5, 2020 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/008882 | 3/8/2021 | WO |