This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2016-160197, filed on Aug. 17, 2016, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to a recording medium storing a moving object group detection program, a moving object group detection device, and a moving object group detection method.
Technology that tracks persons using footage captured by a monitoring camera has hitherto been known.
For example, there is a proposal for a person tracking device that detects a person region, which is a region assigned to a person included in footage, and generates person region information detailing information regarding the person region. The person tracking device chooses a distinctive person, who is a person having a specific feature amount, from amongst passersby accompanying a tracking-target person, and computes a distinctive person tracking result that is a tracking result for the distinctive person. Then, the person tracking device computes a tracking result for the tracking-target person from the distinctive person tracking result and from tracking-target person relative position information representing the position of the distinctive person relative to the tracking-target person.
International Publication Pamphlet No. WO 2012/131816
According to an aspect of the embodiments, a non-transitory recording medium storing a moving object group detection program causes a computer to execute a process. The process includes: respectively analyzing a first captured image captured by a first image capture device and a second captured image captured by a second image capture device, and respectively extracting a first image region and a second image region from the first captured image and the second captured image, the first image region and the second image region being regions in which coloring patterns satisfy a predetermined similarity range and moving in corresponding directions over plural frames; and detecting that a common moving object group is included in the first image region and the second image region on the basis of an evaluation of similarity between an image within the first image region and an image within the second image region.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
A case is considered in which persons, serving as an example of objects, are tracked based on captured images and movement trends of the persons are acquired. In this case, conceivably, persons are detected from captured images captured by plural image capture devices, associations are made between persons detected from each captured image, and the movement path of each person is generated in accordance with the association results.
For example, a case is considered in which an image capture device 1A, an image capture device 1B, and an image capture device 1C serve as the plural image capture devices, as illustrated in
When captured images are acquired, regions representing persons are detected from each captured image and the color of the clothing of the person, the sex of the person, the physique of the person, and the like are extracted as feature information like in table 2A illustrated in
Note that, in cases in which little feature information can be extracted from the tracking-target person, the same person may conceivably be associated across image capture devices by detecting distinctive persons from amongst persons surrounding a tracking-target person and making associations with the tracking-target person in accordance with their positions relative to the distinctive person.
Here, a case is considered in which associations between persons are made using distinctive persons present in the surroundings of the tracking-target person in a highly crowded environment. As illustrated in
Accordingly, in a highly crowded environment, features of a distinctive person 4X present in the surroundings of the tracking-target person 4a are hidden in a captured image 4A captured by the image capture device A, as illustrated in
However, it is conceivable that the relative positions of persons will undergo little change as they move in a highly crowded environment, since overtaking is difficult. Accordingly, there is little need to track each person individually when movement trends of people are acquired from the number of people included in a crowd of people, the movement path of a crowd of people, the movement duration of a crowd of people, and the like.
A moving object group tracking system of the present exemplary embodiment therefore not only compares features of each person across captured images captured by each image capture device, but also collects and compares color information of plural persons nearby in the captured images for use in making associations. This increases the features employed in making associations, and enables movement trends of people to be acquired, even when individual associations are not achievable, since associations are made as a crowd of people.
Detailed description follows regarding an example of technology disclosed herein, with reference to the drawings.
As illustrated in
The image capture devices 10 capture captured images that include a crowd of people as an example of a moving object group. Note that an ID is allocated to each of the plural image capture devices 10. Further, the image capture device ID and an image capture timing representing the frame are allocated to the captured images captured by the image capture devices 10.
The moving object group tracking device 20 analyzes each captured image captured by the plural image capture devices 10 and determines the movement course of the crowd of people and the number of people included in the crowd of people. As illustrated in
In the moving object group tracking device 20 according to the present exemplary embodiment, a region 6a indicating a crowd of people moving in substantially the same direction is extracted from a captured image 6A captured by an image capture device A, as illustrated in
The moving object group tracking device 20 determines that a region having a high degree of similarity when comparing the region 6a against the region 6b is the same crowd of people. A movement trend of the crowd of people is then extracted from the image capture timings of each captured image in which the crowd of people was determined to be the same, and from the positional relationships between the image capture devices.
Thus, the moving object group tracking device 20 according to the present exemplary embodiment increases the features employed in making associations by comparing color information across image capture devices in ranges of regions that move in substantially the same direction in the captured images. Making associations in crowds of people across plural image capture devices is thereby implemented even in crowded environments.
The color feature information extraction section 22 acquires the captured images captured by the plural image capture devices 10. The color feature information extraction section 22 then associates the acquired captured images with the image capture device IDs and with the image capture timings representing the frames. Further, the color feature information extraction section 22 extracts color features from the captured image of each frame of each image capture device ID and stores the extracted color features in the color feature information storage section 24.
More specifically, the color feature information extraction section 22 divides the entire captured image corresponding to each frame into blocks of a predetermined size (for example, 3×3 pixels). Next, as illustrated in
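The specification does not reproduce the exact per-block color feature computation. As a minimal sketch, assuming the color feature of each block is its mean RGB value (an assumption, since the exact feature is left open) and using a hypothetical helper name, the extraction could look like this:

```python
import numpy as np

def extract_color_features(frame, block_size=3):
    """Divide an RGB frame (H x W x 3) into block_size x block_size blocks
    and use the mean color of each block as its color feature (assumption)."""
    h, w, _ = frame.shape
    bh, bw = h // block_size, w // block_size
    # Truncate to a whole number of blocks, then average within each block.
    cropped = frame[:bh * block_size, :bw * block_size].astype(np.float64)
    blocks = cropped.reshape(bh, block_size, bw, block_size, 3)
    return blocks.mean(axis=(1, 3))  # shape (bh, bw, 3): one mean color per block
```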
In the color feature information storage section 24, the color features extracted by the color feature information extraction section 22 are stored in a color feature information table in association with the image capture device ID and the image capture timing representing the frame.
For each image capture device ID, the feature extraction range selection section 26 extracts a feature extraction range for each captured image of each frame having the same image capture device ID, based on the color features of the color feature information table. The feature extraction ranges are regions in which the color features, which are an example of a coloring pattern, satisfy a predetermined similarity range, and are regions having movement in a corresponding direction over plural frames.
More specifically, the feature extraction range selection section 26 first sets a number of frames within a pre-set duration and reads color features of the captured image of each frame having the same image capture device ID from the color feature information storage section 24. The feature extraction range selection section 26 then extracts the feature extraction ranges by comparing the color features of the captured image of each frame. Note that in cases in which, for example, the image resolution differs across different image capture devices, the width W and the height H of the color features in the color feature information table of the color feature information storage section 24 are set in accordance with the resolution, and the color features of the captured images are read.
In the present exemplary embodiment, regions in which the color feature information has similar arrangements and in which there is movement in a given direction are extracted as the feature extraction ranges from the captured image of each frame having the same image capture device ID. For example, the feature extraction range selection section 26 determines the flow of each small region in the captured image of a specific frame and collects the ranges of each small region that indicate a flow in substantially the same direction. The feature extraction range selection section 26 performs a survey to find whether or not a range expressing a color feature similar to the color features within the ranges expressing a flow in substantially the same direction is also present in a captured image of another frame.
For example, as illustrated in
The degree of similarity in color features is, for example, computed using the following method. The degree of color similarity between corresponding blocks inside the small regions can be calculated according to Equation (1) or Equation (2) below, where (R1, B1, G1) is the color information of a block of a small region of the frame 1 and (R2, B2, G2) is the color information of a block of a small region of the frame 2. Equation (1) is a calculation equation for calculating a value of correlation between the color information (R1, B1, G1) and the color information (R2, B2, G2), and Equation (2) is a calculation equation for calculating a distance between the color information (R1, B1, G1) and the color information (R2, B2, G2). The degree of similarity in the color features is then computed by calculating the degree of color similarity between the blocks for each pair of corresponding blocks across the small regions in accordance with Equation (1) or Equation (2), and averaging the calculated values over the entire range of the small region.
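Equation (1) and Equation (2) themselves are not reproduced in this text. Purely as a hedged illustration (the exact published forms may differ), a correlation-type measure and a distance-type measure between two block colors are conventionally written as:

$$ s = \frac{R_1 R_2 + G_1 G_2 + B_1 B_2}{\sqrt{R_1^2 + G_1^2 + B_1^2}\,\sqrt{R_2^2 + G_2^2 + B_2^2}} \qquad (1') $$

$$ d = \sqrt{(R_1 - R_2)^2 + (G_1 - G_2)^2 + (B_1 - B_2)^2} \qquad (2') $$

Under these forms, a larger s or a smaller d indicates more similar colors.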
For each small region included in the captured image of each frame, a flow representing what position each small region moved to in the next frame can be extracted by computing the degree of similarity in the color features. Each flow is expressed as a vector from the position of the small region at the movement origin to the position of the small region at the movement destination.
As illustrated in
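As a rough illustration of this flow computation, the following sketch finds, for each small region of one frame, the displacement to the most similar position in the next frame. The small-region size, the search window, and the use of a negative mean color distance as the similarity measure are all assumptions; the text does not fix these numerically.

```python
import numpy as np

def block_similarity(region_a, region_b):
    """Average per-block color similarity between two equally sized small
    regions of color features (negative mean color distance, an assumption)."""
    return -np.linalg.norm(region_a - region_b, axis=-1).mean()

def estimate_flows(features_t, features_t1, region_size=4, search=3):
    """For each small region of frame t, find the most similar position in
    frame t+1 within a local search window and return the displacement."""
    h, w, _ = features_t.shape
    flows = {}
    for y in range(0, h - region_size + 1, region_size):
        for x in range(0, w - region_size + 1, region_size):
            src = features_t[y:y + region_size, x:x + region_size]
            best, best_dxdy = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - region_size and 0 <= xx <= w - region_size:
                        dst = features_t1[yy:yy + region_size, xx:xx + region_size]
                        s = block_similarity(src, dst)
                        if s > best:
                            best, best_dxdy = s, (dx, dy)
            flows[(y, x)] = best_dxdy  # flow vector of this small region
    return flows
```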
Next, the feature extraction range selection section 26 collects similar flows from flow groups that are respective flows of each small region. The processing that collects the flows is performed in each frame.
For example, for the captured image of each frame, the feature extraction range selection section 26 selects one target flow and allocates a predetermined label. The feature extraction range selection section 26 then finds the degree of similarity in flow between the target flow and flows that are in the surroundings of the target flow. For example, values of correlation between the vectors representing the flows, values of distance between the vectors representing the flows, or the like can be employed as the degree of similarity of the flows. The flows in the surroundings of the target flow are those within a pre-set range.
The feature extraction range selection section 26 then allocates the same label as that of the target flow to flows in the surroundings of the target flow in cases in which the degree of similarity of the flow is higher than a predetermined threshold value. On the other hand, the feature extraction range selection section 26 does not allocate a label in cases in which the degree of similarity of the flow is the predetermined threshold value or less.
For the captured image of each frame, the feature extraction range selection section 26 repeatedly changes the target flow to be observed and performs the processing to allocate labels, and small regions corresponding to flows allocated with the same label are collected after determining the allocation of labels for all of the flows. For example, as illustrated in
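A minimal sketch of this label allocation is given below. It assumes flow similarity is measured by the cosine of the angle between the flow vectors and that only the four adjacent small regions count as the "surroundings"; both are assumptions, since the text leaves the similarity measure and the surrounding range open.

```python
import numpy as np

def label_flows(flows, region_size=4, sim_threshold=0.9):
    """Group neighbouring small regions whose flow vectors are similar by
    giving them the same label (region growing over the flow field)."""
    labels = {}
    next_label = 0
    for pos in flows:
        if pos in labels:
            continue
        labels[pos] = next_label
        stack = [pos]
        while stack:
            y, x = stack.pop()
            v = np.asarray(flows[(y, x)], dtype=float)
            # Only the 4-neighbourhood of small regions is surveyed here (assumption).
            for ny, nx in ((y - region_size, x), (y + region_size, x),
                           (y, x - region_size), (y, x + region_size)):
                if (ny, nx) in flows and (ny, nx) not in labels:
                    u = np.asarray(flows[(ny, nx)], dtype=float)
                    denom = np.linalg.norm(v) * np.linalg.norm(u)
                    sim = float(v @ u / denom) if denom > 0 else 0.0
                    if sim > sim_threshold:
                        # Same label as the target flow when similarity is high.
                        labels[(ny, nx)] = labels[(y, x)]
                        stack.append((ny, nx))
        next_label += 1
    return labels
```

Small regions sharing a label then form one collection region.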
Then, for the captured image of each frame, the feature extraction range selection section 26 checks whether a collection region similar to the collection region of that captured image, namely the region in which the small regions corresponding to flows allocated the same label are collected, is present in a captured image of a different frame. The feature extraction range selection section 26 then extracts, as the feature extraction range, a collection region that is similar over plural frames.
More specifically, for the captured image of each frame, the feature extraction range selection section 26 computes a degree of similarity in color features between the collection region of the captured image and the collection regions of the captured images of other frames. As the computation method for the degree of similarity related to the color features in the collection regions, for example, the feature extraction range selection section 26 first overlaps collection regions of captured images of different frames and finds the degree of similarity in the color features in the overlapped ranges. The feature extraction range selection section 26 then extracts as the feature extraction range, which is a common region, the overlapped regions at the position having the highest value for the degree of similarity in the color features.
For example, as illustrated in
Note that plural collection regions are present in a single captured image of a frame in some cases. In such cases, the feature extraction range selection section 26 computes the degree of similarity in the color features for each pair of collection regions of different captured images. The overlapped region in the pair in which the degree of similarity in the color features has the highest value is then extracted as the common region.
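The pair selection described here amounts to an exhaustive comparison of the collection regions of the two captured images. A small sketch, with `similarity` standing in for whatever degree-of-similarity computation on the overlapped color features is used, could be:

```python
def best_matching_pair(regions_a, regions_b, similarity):
    """When several collection regions exist in each frame, compute the
    similarity for every pair and keep the pair with the highest value."""
    best = None
    for i, ra in enumerate(regions_a):
        for j, rb in enumerate(regions_b):
            s = similarity(ra, rb)
            if best is None or s > best[0]:
                best = (s, i, j)
    return best  # (similarity, index in frame A, index in frame B)
```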
The feature extraction range selection section 26 extracts regions common to all of the frames by making associations between the collection regions across all of the frames, and extracts the common regions as the feature extraction ranges. The feature extraction ranges extracted in this manner can be considered to be crowds of people moving in a specific direction.
The color feature generation section 28 reads, from the color feature information table of the color feature information storage section 24, color features corresponding to the feature extraction range selected by the feature extraction range selection section 26, and determines whether or not those color features are suitable for association across captured images having different image capture device IDs.
Then, in cases in which it was determined that the color features corresponding to the feature extraction range selected by the feature extraction range selection section 26 are not suitable for association, the color feature generation section 28 outputs a signal to the feature extraction range selection section 26 so that the feature extraction range is broadened. On the other hand, in cases in which it was determined that the color features corresponding to the feature extraction range selected by the feature extraction range selection section 26 are suitable for association, the color feature generation section 28 outputs the color features corresponding to the selected feature extraction range to the color feature comparison section 30 as associated color features. The associated color features are employed to make associations across captured images having different image capture device IDs in the color feature comparison section 30, described later.
A method is considered in which the variance of color features included in a feature extraction range is employed as an example of a determination method that determines whether or not the color features corresponding to the feature extraction range are suitable for association across captured images having different image capture device IDs. For example, in cases in which the value of a variance of color features included in the feature extraction range is a particular value or less, few features are included in the extracted feature extraction range and the extracted feature extraction range is conceivably not suitable for association. The color feature generation section 28 thus determines that the feature extraction range is not suitable for association across captured images having different image capture device IDs in cases in which the value of the variance of the color features included in the feature extraction ranges selected by the feature extraction range selection section 26 is the particular value or less.
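A sketch of this variance test follows, with `min_variance` as a purely hypothetical threshold (the particular value is not specified in the text):

```python
import numpy as np

def suitable_for_association(color_features, min_variance=50.0):
    """Judge whether a feature extraction range carries enough colour
    variation to be useful for association across image capture devices."""
    # color_features: array of per-block colours inside the range.
    variance = float(np.var(np.asarray(color_features, dtype=float)))
    return variance > min_variance
```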
Further, another example of a determination method that determines whether or not the color features corresponding to the feature extraction range are suitable for association across captured images having different image capture device IDs is a method that compares color features within plural feature extraction ranges extracted as the common regions in each frame within a predetermined duration. In this method, determination is made as to whether or not the color features within the specific feature extraction range are similar to the color features within another feature extraction range. In cases in which the color features within the specific feature extraction range are similar to the color features within another feature extraction range, it is clear that color features within the specific feature extraction range are present in various captured images. Employing color features within that specific feature extraction range to make associations is therefore conceivably highly likely to result in erroneous associations. Accordingly, for each selected combination of feature extraction ranges, the color feature generation section 28 determines that the feature extraction range is not suitable for association in cases in which the degree of similarity in the color features included in the feature extraction ranges in the combination is a particular value or higher.
Then, in cases in which it was determined that the feature extraction ranges selected by the feature extraction range selection section 26 are not suitable for association, the color feature generation section 28 outputs a signal to the feature extraction range selection section 26 so that a larger feature extraction range is set.
When the feature extraction range selection section 26 acquires the signal output from the color feature generation section 28, the feature extraction range selection section 26 sets a feature extraction range that is larger than the feature extraction range set in the processing the previous time.
For example, as an example of processing to set a larger feature extraction range, the feature extraction range selection section 26 makes the number of frames read from the color feature information table of the color feature information storage section 24 smaller.
As illustrated at the left side of
The color feature comparison section 30 compares associated color features obtained by the color feature generation section 28 across captured images having different image capture device IDs, and detects the common inclusion of a crowd of people in image regions of different captured images in accordance with a similarity evaluation of color features across captured images having different image capture device IDs.
The color feature comparison section 30 then averages the degree of color similarity computed for each position within the associated color features over the entire range of the associated color features, and sets the obtained average as the degree of similarity of the associated color features between the associated color features 12A and the associated color features 12B. The color feature comparison section 30 then determines that the pair of associated color features of the associated color features 12A and the associated color features 12B are the same in cases in which the degree of similarity of the associated color features is a predetermined threshold value or higher.
Note that, in cases in which there are plural other associated color features present that have a degree of similarity in associated color features of a predetermined threshold value or higher with specific associated color features, the color feature comparison section 30 selects, from among those other associated color features, the associated color features having the highest value of the degree of similarity in associated color features. The color feature comparison section 30 then determines that the pair of the specific associated color features and the selected other associated color features are the same.
However, in the procedure of the present exemplary embodiment, the size of each associated color feature differs across the captured images of each image capture device ID in some cases. In such cases, out of the pair of associated color features obtained from captured images having different image capture device IDs, the color feature comparison section 30 finds the degree of similarity of the associated color features while moving the associated color feature having the smaller size within the associated color feature having the larger size. The color feature comparison section 30 then sets the maximum value out of the found degrees of similarity in the associated color features as the degree of similarity of the associated color features in the pair. For example, as illustrated in
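A sketch of this search is shown below, assuming both associated color features are arrays of per-block colors and using a negative mean color distance as a stand-in for the per-position degree of color similarity (an assumption):

```python
import numpy as np

def associated_feature_similarity(small, large):
    """Slide the smaller associated colour feature over the larger one and
    return the maximum average colour similarity over all placements."""
    sh, sw, _ = small.shape
    lh, lw, _ = large.shape
    best = -np.inf
    for y in range(lh - sh + 1):
        for x in range(lw - sw + 1):
            window = large[y:y + sh, x:x + sw]
            # Average colour similarity over the overlapped range.
            sim = -np.linalg.norm(window - small, axis=-1).mean()
            best = max(best, sim)
    return best
```

Taking the maximum over all placements makes the comparison tolerant of the two cameras framing the crowd slightly differently.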
Further, although plural persons are collected and compared in the present exemplary embodiment, a person present in a captured image captured by one image capture device sometimes goes out of sight and is not present in a captured image captured by another image capture device. Further, cases in which the feature extraction ranges are different ranges across image capture devices due to errors in extraction of flow from the captured image or the like are also conceivable.
Therefore, for example, as illustrated in
However, in cases in which degrees of similarity computed from a portion of the region of the associated color feature are employed, the comparison result for an associated color feature corresponding to a wider range has higher reliability than the comparison result for an associated color feature corresponding to a smaller range. Weighting is therefore performed such that the larger the region of the portion of the associated color feature, the higher the degree of similarity of the associated color feature.
For example, in the example illustrated in
Accordingly, the color feature comparison section 30 performs weighting on the degree of similarity of the associated color features computed when the associated color features are completely overlapped with each other as illustrated in
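One simple way to realize such weighting, presented only as an assumption since the actual weighting function is not specified, is to scale the similarity by the ratio of the overlapped area to the full area:

```python
def weighted_similarity(similarity, overlap_area, full_area):
    """Weight a similarity computed on a partial overlap so that larger
    overlaps count for more than smaller ones (proportional weighting)."""
    return similarity * (overlap_area / full_area)
```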
The tracking result generation section 32 computes the number of moving people included in the crowd of people using the size of the image region as the weight in cases in which an image region in which the crowd of people is commonly included across captured images having different image capture device IDs has been detected by the color feature comparison section 30. The image region in which the crowd of people is included is detected across captured images of each frame having different image capture device IDs. Thus, for example, in cases in which image regions that include the same crowd of people have been detected between a captured image captured at timing t by the image capture device A and a captured image captured at timing t+10 by the image capture device B, it is clear the crowd of people has moved from the image capture device A to the image capture device B in 10 seconds.
In cases in which the inclusion of the crowd of people has been detected, the tracking result generation section 32 identifies the movement course of the crowd of people in accordance with the detection result of the image region in which the inclusion of the crowd of people was detected. More specifically, the tracking result generation section 32 identifies the movement course of the crowd of people from position information regarding the pair of the image capture devices corresponding to the pair of image capture device IDs based on the pair of image capture device IDs of the captured images in which the associated color features have been associated.
Further, the tracking result generation section 32 computes a movement duration of the crowd of people across the different image capture devices in accordance with an extraction result of the image region across the different image capture device IDs. The movement duration of the crowd of people between different image capture devices is found in accordance with a difference between image capture timings of the pair of captured images in which the associated color features have been associated.
For example, as illustrated in
As the generation method of the table illustrated in
In cases in which a movement amount of the crowd of people is computed, when the size of the region of the associated color features is large, it is clear that a greater number of persons have moved across the image capture devices, and the tracking result generation section 32 therefore finds the number of moving people included in the crowd of people using the size of the region of the associated color features as a weight.
More specifically, the tracking result generation section 32, per pair of image capture device IDs in which the associated color features have been associated, counts, for example, each pixel as one person, and computes the number of moving people included in the crowd of people in accordance with the number of pixels in the associated color features within the captured image. This enables the number of moving people included in the crowd of people to be found using the size of the region of the associated color features as a weight.
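A sketch of this counting, combined with binning the result by movement duration as described below, is given here. The pixels-per-person rate and the duration bin width are assumptions chosen for illustration only.

```python
from collections import defaultdict

def accumulate_movement(records, bin_width=10.0, pixels_per_person=1):
    """records: iterable of (camera_pair, timing_a, timing_b, region_pixels).
    Counts region pixels as moving people and tallies them per camera pair
    and per movement-duration bin."""
    table = defaultdict(int)  # (camera_pair, duration_bin) -> number of people
    for camera_pair, t_a, t_b, region_pixels in records:
        duration = abs(t_b - t_a)                  # movement duration between cameras
        duration_bin = int(duration // bin_width)  # e.g. 0-10 s, 10-20 s, ...
        table[(camera_pair, duration_bin)] += region_pixels // pixels_per_person
    return table
```

For example, `accumulate_movement([(("A", "B"), 0.0, 12.0, 250)])` would add 250 people to the 10-20 second bin for the camera pair (A, B).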
The tracking result generation section 32 then stores, in a location corresponding to the movement duration of the table illustrated in
For example, in the example of
Accordingly, by finding the number of moving people and the movement course for a crowd of people per duration range, the number of moving persons included in the crowd of people and the movement course are estimated, which as a result enables tracking of persons.
The display section 34 displays the number of moving people and the movement course of the crowd of people obtained by the tracking result generation section 32 as a result.
The moving object group tracking device 20 may, for example, be implemented by a computer 50 illustrated in
The storage section 53 may be implemented by a hard disk drive (HDD), solid state drive (SSD), flash memory, or the like. A moving object group tracking program 60 for causing the computer 50 to function as the moving object group tracking device 20 is stored in the storage section 53, which serves as a recording medium. The moving object group tracking program 60 includes a color feature information extraction process 62, a feature extraction range selection process 63, a color feature generation process 64, a color feature comparison process 65, a tracking result generation process 66, and a display process 67. The storage section 53 further includes a color feature information storage region 69 that stores the information included in the color feature information storage section 24.
The CPU 51 reads the moving object group tracking program 60 from the storage section 53, expands the moving object group tracking program 60 into the memory 52, and sequentially executes the processes included in the moving object group tracking program 60. The CPU 51 operates as the color feature information extraction section 22 illustrated in FIG. 6 by executing the color feature information extraction process 62. The CPU 51 also operates as the feature extraction range selection section 26 illustrated in
Note that the functionality implemented by the moving object group tracking program 60 may be implemented by, for example, a semiconductor integrated circuit, and more specifically, by an application specific integrated circuit (ASIC) or the like.
Next, the operation of the moving object group tracking system 100 according to an exemplary embodiment is described. For example, in the moving object group tracking system 100, the moving object group tracking processing illustrated in
At step S100 of the moving object group tracking processing illustrated in
Next, at step S102, the color feature information extraction section 22 stores, in the color feature information table of the color feature information storage section 24, the color features of the captured image of each frame of each image capture device ID extracted at step S100 above.
At step S103, the feature extraction range selection section 26 sets the plural frames that are targets for extraction of the feature extraction ranges.
At step S104, for each image capture device ID, the feature extraction range selection section 26 reads the color features of the captured image of the plural frames set at step S103 above or at step S108 the previous time from the color feature information table. Then, based on the color features read from the color feature information table, the feature extraction range selection section 26 extracts feature extraction ranges, which are regions in which the color features satisfy a predetermined similarity range and in which the movement is in a corresponding direction over plural frames. More specifically, for the captured image of each frame, the feature extraction range selection section 26 computes the degree of similarity in the color features across the collection region of that captured image and the collection region of a captured image of another frame. The feature extraction range selection section 26 then extracts, as feature extraction ranges that are common regions, the overlapped regions at the position where the degree of similarity in the color features has the highest value.
At step S106, the color feature generation section 28 determines whether or not the feature extraction ranges selected by the feature extraction range selection section 26 are suitable for association across the captured images having different image capture device IDs. More specifically, the color feature generation section 28 computes the variance of the color features included in the feature extraction range extracted at step S104 above. Then, in cases in which the value of the variance is the particular value or less, the color feature generation section 28 determines that the feature extraction ranges are not suitable for association and processing transitions to step S108. On the other hand, in cases in which the value of the variance is greater than the particular value, the color feature generation section 28 determines that the feature extraction ranges are suitable for association and outputs the color features corresponding to the feature extraction ranges extracted at step S104 above as the associated color features, and processing proceeds to step S110.
At step S108, the feature extraction range selection section 26 sets a number of frames that is smaller than the number of frames set at step S103 above or at step S108 the previous time.
At step S110, the color feature comparison section 30 compares the associated color features output at step S106 above across captured images having different image capture device IDs. The color feature comparison section 30 performs a similarity evaluation of the color features across captured images having different image capture device IDs, and, for each pair of associated color features output at step S106 above, computes the degree of similarity of the associated color features across the captured images having different image capture device IDs. The color feature comparison section 30 then, for each captured image having a different image capture device ID, determines that the pair of associated color features are the same across the captured images having different image capture device IDs in cases in which the degree of similarity of the associated color features is the predetermined threshold value or higher. The color feature comparison section 30 then detects that a common crowd of people is included in the region of the associated color features that were determined to be the same. Note that in cases in which there are plural other associated color features for which the degree of similarity in the associated color features with a specific associated color feature is a predetermined threshold value or higher, the color feature comparison section 30 selects the other associated color feature that has the highest value of the degree of similarity in the associated color features. The color feature comparison section 30 then determines that the pair of the specific associated color feature and the selected other associated color feature are the same.
At step S112, for each pair between the captured images having image capture device IDs detected to include a crowd of people at step S110 above, the tracking result generation section 32 computes the number of moving people included in the crowd of people using the size of the region as a weight. Further, for each such pair, the tracking result generation section 32 identifies the movement course of the crowd of people in accordance with the detection result of the region detected to include the crowd of people.
At step S114, the display section 34 displays, as the result, the number of moving people and the movement course of the crowd of people obtained at step S112 above.
As described above, the moving object group tracking device according to the exemplary embodiment analyzes each captured image captured by the plural image capture devices and extracts image regions in which the color features satisfy a predetermined similarity range and movement is in a corresponding direction over plural frames. Common crowds of people included in the image regions of images captured by different image capture devices are then detected in accordance with the similarity evaluation of the regions of the images captured by the plural image capture devices. This enables persons to be tracked from images in cases in which tracking of an object is performed across images captured by plural different image capture devices and plural objects are included in the images.
Further, a crowd of people can be tracked with greater precision by broadening the feature extraction range in cases in which color features corresponding to the feature extraction range are not suitable for association across captured images having different image capture device IDs.
Further, in cases in which it is detected that a crowd of people is included in an image region, the number of persons included in the crowd of people along a movement path can be estimated by computing the number of moving people in the crowd of people using the size of the image region as a weight. Further, the movement course of a crowd of people can be identified in accordance with an extraction result of the image regions.
Further, the moving object group tracking device according to the exemplary embodiment enables a movement duration of people to be acquired even when associations are not achieved for individuals, since associations are made across images using a crowd of people as a collective. In particular, when images are captured with persons overlapping each other in crowded environments and the like, movement durations of people across image capture devices can be acquired even in cases in which division into respective person regions is difficult, since associations are made without dividing into regions.
Further, even when distinctive persons are not present within the captured images, movement durations can be estimated with high precision since the features of plural persons within the images are employed.
Further, movement trends of people (for example, statistical quantities related to movement courses, movement durations, and the like) can be ascertained from captured images captured by image capture devices and employed in various applications such as in safety announcements to alleviate crowding and in marketing.
Further, movement trends of people in a broad range can be obtained by coordinating captured images captured by the plural image capture devices, thereby enabling effective policies to be made from more information. This enables effective policies to be made with regard to, for example, relatedness between shops and leveling out of flows of people across shopping mall areas overall.
Note that in the above, a mode was described in which the moving object group tracking program 60 is pre-stored (installed) to the storage section 53. However, there is no limitation thereto. The program according to technology disclosed herein may be provided in a mode recorded on a recording medium such as a CD-ROM, a DVD-ROM, USB memory, or the like.
Next, modified examples of the exemplary embodiment are described.
In the present exemplary embodiment, an example of a case in which the moving object group is a crowd of people was described. However, there is no limitation thereto. Another moving object group may serve as the target. For example, the moving object group may be a group of vehicles.
Further, in the present exemplary embodiment, an example of a case in which image regions are extracted using color features, which serve as an example of a coloring pattern, was described. However, there is no limitation thereto. For example, image regions may be extracted using patterns of edge features, which is an example of a pattern obtained from another feature.
Further, in the present exemplary embodiment, an example has been described of a case in which the number of frames read from the color feature information table is made smaller and a larger feature extraction range is set in cases in which feature extraction ranges are not suitable for association across captured images having different image capture device IDs. However, there is no limitation thereto. For example, the feature extraction range may be expanded by a predetermined number of pixels and the feature extraction range set larger in cases in which the feature extraction ranges are not suitable for association across captured images having different image capture device IDs.
Further, in the present exemplary embodiment, an example has been described of a case in which variances of color features included in feature extraction ranges are employed when determining whether or not the selected feature extraction ranges are suitable for association across captured images having different image capture device IDs. However, there is no limitation thereto. For example, as described above, color features within plural feature extraction ranges may be compared and determination made as to whether or not the feature extraction ranges are suitable for association across captured images having different image capture device IDs.
Further, out of the degrees of color similarity corresponding to respective positions within the associated color features, the color feature comparison section 30 may determine blocks having high degrees of color similarity as being the same, and may perform tracking that regards these blocks as being the same person.
When capturing a state in which objects are crowded together, an image including plural objects will be captured by the image capture device. For example, when tracking persons serving as examples of objects, a portion of each person may be hidden as a result of overlap between persons in the image caused by crowding, and feature amounts are liable to be similar for each person since the feature amount obtained for each person is reduced. This makes identification of the tracking-target person difficult, such that the tracking-target person is not trackable.
One aspect of technology disclosed herein enables an object to be tracked from images in cases in which plural objects are included in the images.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.