The present invention relates to a technology that detects an object included in an image.
It is known that the performance of a recognizer can be improved by performing learning using large amounts of pattern data. Tuning is also performed to obtain a recognizer suited to each environment from a base recognizer. Moreover, various methods have been proposed for improving recognition accuracy in different environments. For example, Patent Reference 1 discloses a pattern recognition device that performs recognition processing in accordance with the environment where text is written. The pattern recognition device performs the recognition processing by calling one or more recognizers from among a plurality of registered recognizers according to the state of a processing target extracted from an input image.
Also, as another measure for improving recognizer performance, a method has been proposed in which a plurality of recognizers with different characteristics are constructed, and an overall determination is made on the basis of outputs therefrom. For example, Patent Reference 2 discloses an obstacle detection device that makes a final determination on the basis of determination results of a plurality of determination units that determine whether or not an obstacle exists.
Patent Reference 1: Japanese Patent Application Laid-Open No. 2007-058882
Patent Reference 2: Japanese Patent Application Laid-Open No. 2019-036240
In the above techniques, it is assumed that the accuracy of the plurality of recognition devices or determination devices is substantially the same. For this reason, if the accuracy differs among the plurality of recognition devices or determination devices, the accuracy of the final result may be lowered in some cases.
One object of the present invention is to provide an object detection device that enables highly accurate object detection adapted to the input image by using a plurality of recognizers with different characteristics.
In order to solve the above problem, according to an example aspect of the present invention, there is provided an object detection device comprising:
a plurality of object detection units configured to output a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data;
a weight computation unit configured to use weight computation parameters to compute a weight for each of the plurality of object detection units on a basis of the image data, the weight being used when the scores outputted by the plurality of object detection units are merged;
a merging unit configured to merge the scores outputted by the plurality of object detection units for each partial region according to the weight computed by the weight computation unit;
a loss computation unit configured to compute a difference between a ground truth label of the image data and the score merged by the merging unit as a loss; and
a parameter correction unit configured to correct the weight computation parameters so as to reduce the loss.
According to another example aspect of the present invention, there is provided an object detection device learning method comprising:
outputting, from a plurality of object detection units, a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data;
using weight computation parameters to compute a weight for each of the plurality of object detection units on a basis of the image data, the weight being used when the scores outputted by the plurality of object detection units are merged;
merging the scores outputted by the plurality of object detection units for each partial region according to the computed weight;
computing a difference between a ground truth label of the image data and the merged score as a loss; and
correcting the weight computation parameters so as to reduce the loss.
According to still another example aspect of the present invention, there is provided a recording medium storing a program causing a computer to execute an object detection device learning process comprising:
outputting, from a plurality of object detection units, a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data;
using weight computation parameters to compute a weight for each of the plurality of object detection units on a basis of the image data, the weight being used when the scores outputted by the plurality of object detection units are merged;
merging the scores outputted by the plurality of object detection units for each partial region according to the computed weight;
computing a difference between a ground truth label of the image data and the merged score as a loss; and
correcting the weight computation parameters so as to reduce the loss.
According to the present invention, by combining a plurality of recognizers for object detection with different characteristics, highly accurate object detection according to the input image becomes possible.
Next, a first example embodiment of the present invention will be described.
(Hardware Configuration)
The interface 2 communicates with an external device. Specifically, the interface 2 is used to input image data to be subjected to object detection or image data for learning from an outside source, and to output an object detection result to an external device.
The processor 3 is a computer such as a CPU (Central Processing Unit), or a combination of a CPU and a GPU (Graphics Processing Unit), and controls the object detection device 10 as a whole by executing a program prepared in advance. The memory 4 includes ROM (Read Only Memory), RAM (Random Access Memory), and the like. The memory 4 stores various programs to be executed by the processor 3. The memory 4 is also used as a work memory when the processor 3 executes various processing.
The recording medium 5 is a non-volatile, non-transitory recording medium such as a disk-shaped recording medium or a semiconductor memory, and is configured to be removably attachable to the object detection device 10. The recording medium 5 records various programs executed by the processor 3. When the object detection device 10 executes a learning processing, a program recorded in the recording medium 5 is loaded into the memory 4 and executed by the processor 3.
The database 6 stores image data for learning that is used in the learning processing by the object detection device 10. The image data for learning includes ground truth labels. Note that in addition to the above, the object detection device 10 may also be provided with an input device such as a keyboard and a mouse, a display device, and the like.
(Functional Configuration for Learning)
Next, a functional configuration of the object detection device 10 for learning will be described.
The learning step of the object detection device 10 optimizes the internal parameters for weight computation (hereinafter referred to as “weight computation parameters”) in the weight computation unit 12. Note that the first object detection unit 13 and the second object detection unit 14 are pre-trained, and do not undergo learning in the learning step.
Image data is inputted into the image input unit 11. The image data is image data for learning, captured in an area to be subjected to object detection. As described above, a ground truth label indicating an object included in the image is prepared in advance for each image data.
The first object detection unit 13 has a configuration similar to a neural network for object detection by deep learning, such as Single Shot Multibox Detector (SSD), RetinaNet, or Faster Regional Convolutional Neural Network (Faster-RCNN).
However, the first object detection unit 13 does not perform non-maximum suppression (NMS) processing to output detected objects with their scores and coordinate information in a list format or the like; instead, it simply outputs the score information and coordinate information for each recognition target object computed for each anchor box, before the NMS processing. Here, all partial regions inspected for the presence or absence of a recognition target object are referred to as "anchor boxes".
The number of anchors depends on the structure and size of the neural network. As an example, the illustrated output network 901 is set with respect to a single layer of a feature pyramid network; k-dimensional score information and 4-dimensional coordinate information are outputted similarly with respect to the other layers of the feature pyramid network. Hereinafter, the number of anchors set with respect to all layers of the feature pyramid network is designated "Na". The score information and coordinate information for the same anchor are saved in predetermined locations of a memory for storing the information, so as to be easily associated with each other. Note that, as described above, the first object detection unit 13 is pre-trained, its parameters are fixed, and it does not undergo learning in the learning step of the object detection device 10.
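As an illustration only, the following is a minimal sketch of the per-anchor output format described above, written in Python/NumPy. The function name detector_raw_output and the example values of Na and k are hypothetical and do not correspond to any specific detector; the sketch merely shows that, before NMS, each object detection unit yields k-dimensional score information and 4-dimensional coordinate information for every anchor.

```python
import numpy as np

# Minimal sketch (illustration only) of the per-anchor output format described
# above: before NMS, a detector yields k class scores and 4 coordinate offsets
# for every anchor. "detector_raw_output", Na and k are hypothetical examples.
Na = 1000   # number of anchors over all feature-pyramid layers (example value)
k = 3       # number of recognition target classes (example value)

def detector_raw_output(image):
    """Hypothetical detector: returns (Na, k) score info and (Na, 4) coord info."""
    scores = np.random.rand(Na, k)    # probability-like score per anchor and class
    coords = np.random.randn(Na, 4)   # (x, y, width, height) offsets from the anchor
    return scores, coords

scores, coords = detector_raw_output(image=None)
assert scores.shape == (Na, k) and coords.shape == (Na, 4)
```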
The second object detection unit 14 is similar to the first object detection unit 13 and has the same model structure. However, the first object detection unit 13 and the second object detection unit 14 have different parameters in their respective internal networks due to factors such as differences in the training data or in the initial parameter values used during learning, and consequently have different recognition characteristics.
The weight computation unit 12 is configured by a deep neural network or the like that is applicable to regression problems, such as ResNet (Residual Network). The weight computation unit 12 determines weights with respect to image data inputted into the image input unit 11 when merging the score information and coordinate information outputted by the first object detection unit 13 and the second object detection unit 14, and outputs information indicating each of the weights to the product-sum unit 15. Basically, the number of dimensions of the weights is equal to the number of object detection units used. In this case, the weight computation unit 12 preferably computes weights such that the sum of the weight for the first object detection unit 13 and the weight for the second object detection unit 14 is “1”. For example, the weight computation unit 12 may set the weight for the first object detection unit 13 to “α”, and set the weight for the second object detection unit 14 to “1−α”. With this arrangement, an averaging processing in the product-sum unit 15 can be simplified. Note that in the case where there are two parameters related to a single object in the object detection units (for example, a parameter indicating the probability of a certain object and a parameter indicating the improbability of a certain object), the number of dimensions of the weights is double the number of object detection units used.
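The following is a minimal sketch, assuming a PyTorch implementation, of a weight computation network of the kind described above. The class name WeightComputation and the small convolutional backbone are hypothetical simplifications (an actual implementation might use ResNet as stated); the softmax output guarantees that the weights for the object detection units sum to 1, so that for two units the weights take the form α and 1−α.

```python
import torch
import torch.nn as nn

class WeightComputation(nn.Module):
    """Hypothetical sketch of the weight computation unit: maps an input image
    to Nf merging weights that sum to 1 (Nf = number of object detection units).
    A real implementation might use a ResNet backbone instead of this tiny CNN."""

    def __init__(self, num_detectors=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_detectors)

    def forward(self, image):
        # softmax ensures the weights sum to 1, e.g. (alpha, 1 - alpha) for two units
        return torch.softmax(self.head(self.features(image)), dim=-1)

weights = WeightComputation()(torch.randn(1, 3, 224, 224))   # shape (1, 2)
```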
The product-sum unit 15 computes the product-sums of the score information and the coordinate information outputted by the first object detection unit 13 and the second object detection unit 14 for respectively corresponding anchors on the basis of the weights outputted by the weight computation unit 12, and then calculates an average value. Note that the product-sum operation on the coordinate information is only performed on anchors for which the existence of a recognition target object is indicated by a ground truth label, and calculation is unnecessary for all other anchors. The average value is computed for each anchor and each recognition target object, and has Na×(k+4) dimensions. Note that the product-sum unit 15 is one example of a merging unit according to the present invention.
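A minimal sketch of the product-sum (merging) operation, under the same assumptions as the sketches above, is shown below. Because the weights sum to 1, the weighted sum is itself the weighted average; the function name merge_outputs is hypothetical.

```python
def merge_outputs(scores_list, coords_list, weights):
    """Sketch of the product-sum step: weighted average of the per-anchor outputs
    of Nf detectors. scores_list[i] has shape (Na, k), coords_list[i] has shape
    (Na, 4), and weights has Nf entries summing to 1, so the weighted sum is
    already the weighted average (Na x (k + 4) values in total)."""
    merged_scores = sum(w * s for w, s in zip(weights, scores_list))
    merged_coords = sum(w * c for w, c in zip(weights, coords_list))
    return merged_scores, merged_coords
```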
The ground truth label storage unit 18 stores ground truth labels with respect to the image data for learning. Specifically, the ground truth label storage unit 18 stores class information and coordinate information about a recognition target object existing at each anchor in an array for each anchor as the ground truth labels. The ground truth label storage unit 18 stores class information indicating that a recognition target object does not exist and coordinate information in the storage areas corresponding to anchors where a recognition target object does not exist. The class information includes a class code indicating the type of object and score information indicating the probability that an object indicated by the class code exists. Note that in many cases, the original ground truth information with respect to the image data for learning is text information indicating the type and rectangular region of a recognition target object appearing in an input image, but the ground truth labels stored in the ground truth label storage unit 18 are data obtained by converting such ground truth information into class information and coordinate information for each anchor.
For example, for an anchor that overlaps by a predetermined threshold or more with the rectangular region in which a certain object appears, the ground truth label storage unit 18 stores a value of 1.0 indicating the score of the object as the class information at the location of the ground truth label expressing the score of the object, and stores relative quantities of the position (an x-coordinate offset from the left edge, a y-coordinate offset from the top edge, a width offset, and a height offset) of the rectangular region in which the object appears with respect to a standard rectangular position of the anchor as the coordinate information. In addition, the ground truth label storage unit 18 stores a value indicating that an object does not exist at the locations of the ground truth label expressing the scores for other objects. Also, for an anchor that does not overlap by a predetermined threshold or more with the rectangular region in which a certain object appears, the ground truth label storage unit 18 stores a value indicating that an object does not exist at the location of the ground truth label where the score and coordinate information of the object are stored. For a single anchor, the class information is k-dimensional and the coordinate information is 4-dimensional; for all anchors, the class information is (Na×k)-dimensional and the coordinate information is (Na×4)-dimensional. For this conversion, it is possible to apply methods used in publicly available deep neural network programs for object detection tasks.
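The following sketch illustrates one possible conversion of box-level ground truth information into per-anchor class information and coordinate information, assuming an IoU (overlap) threshold and a simplified offset encoding; the helper names and the exact offset formula are assumptions for illustration, not a prescribed implementation.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def boxes_to_anchor_labels(anchors, gt_boxes, gt_classes, k, iou_thr=0.5):
    """Hypothetical conversion of box-level ground truth into per-anchor labels:
    anchors is (Na, 4), gt_boxes is (M, 4), gt_classes is (M,). Returns (Na, k)
    class information and (Na, 4) coordinate information."""
    Na = len(anchors)
    cls = np.zeros((Na, k), dtype=np.float32)   # all zeros = "no object exists"
    reg = np.zeros((Na, 4), dtype=np.float32)
    for i, anchor in enumerate(anchors):
        ious = np.array([iou(anchor, b) for b in gt_boxes])
        if ious.size and ious.max() >= iou_thr:
            m = int(ious.argmax())
            cls[i, gt_classes[m]] = 1.0         # score 1.0 for the overlapping object
            # simplified relative offsets of the ground-truth box from the anchor box
            reg[i] = [gt_boxes[m][0] - anchor[0],
                      gt_boxes[m][1] - anchor[1],
                      (gt_boxes[m][2] - gt_boxes[m][0]) - (anchor[2] - anchor[0]),
                      (gt_boxes[m][3] - gt_boxes[m][1]) - (anchor[3] - anchor[1])]
    return cls, reg
```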
The loss computation unit 17 compares the (Na×(k+4))-dimensional score information and coordinate information outputted by the product-sum unit 15 with the ground truth labels stored in the ground truth label storage unit 18 to compute a loss value. Specifically, the loss computation unit 17 computes an identification loss related to the score information and a regression loss related to the coordinate information. The (Na×(k+4))-dimensional average value outputted by the product-sum unit 15 is defined in the same way as the score information and coordinate information that the first object detection unit 13 outputs for each anchor and each recognition target object. Consequently, the loss computation unit 17 can compute the value of the identification loss by exactly the same method as that used to compute the identification loss with respect to the output of the first object detection unit 13. The loss computation unit 17 computes the cumulative differences of the score information with respect to all anchors as the identification loss. For the regression loss, the loss computation unit 17 computes the cumulative differences of the coordinate information only with respect to anchors where an object exists, and does not consider the differences of the coordinate information with respect to anchors where no object exists.
Note that deep neural network learning using identification loss and regression loss is described in the following document, which is incorporated herein as a reference.
“Learning Efficient Object Detection Models with Knowledge Distillation”, NeurIPS 2017
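As a non-authoritative sketch of the loss computation described above, the fragment below accumulates an identification loss over all anchors and a regression loss only over anchors where an object exists. The use of binary cross-entropy and smooth L1 as the concrete difference measures is an assumption for illustration; the embodiment only requires cumulative differences.

```python
import torch
import torch.nn.functional as F

def detection_loss(merged_scores, merged_coords, gt_scores, gt_coords):
    """Sketch of the loss: identification loss over all anchors plus regression
    loss over positive anchors only. merged_scores/gt_scores are (Na, k) and
    merged_coords/gt_coords are (Na, 4) tensors; BCE and smooth L1 are assumed
    difference measures, not prescribed by the embodiment."""
    cls_loss = F.binary_cross_entropy(merged_scores, gt_scores, reduction="sum")
    positive = gt_scores.sum(dim=1) > 0            # anchors where an object exists
    if positive.any():
        reg_loss = F.smooth_l1_loss(merged_coords[positive], gt_coords[positive],
                                    reduction="sum")
    else:
        reg_loss = merged_coords.sum() * 0.0       # no positive anchors in this image
    return cls_loss + reg_loss
```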
The parameter correction unit 16 corrects the parameters of the network in the weight computation unit 12 so as to reduce the loss computed by the loss computation unit 17. At this time, the parameter correction unit 16 fixes the parameters of the networks in the first object detection unit 13 and the second object detection unit 14, and only corrects the parameters of the weight computation unit 12. The parameter correction unit 16 can compute parameter correction quantities by ordinary error backpropagation. By learning the parameters of the weight computation unit 12 in this way, it is possible to construct an object detection device that optimally computes the product-sums of the outputs from the first object detection unit 13 and the second object detection unit 14 to make an overall determination.
Next, operations by the object detection device 10 for learning will be described.
First, image data for learning is inputted into the image input unit 11 (step S11). The first object detection unit 13 performs object detection using the image data, and outputs score information and coordinate information about recognition target objects in the images for each anchor and each recognition target object (step S12). Similarly, the second object detection unit 14 performs object detection using the image data, and outputs score information and coordinate information about recognition target objects in the images for each anchor and each recognition target object (step S13). Also, the weight computation unit 12 receives the image data and computes weights with respect to each of the outputs from the first object detection unit 13 and the second object detection unit 14 (step S14).
Next, the product-sum unit 15 multiplies the score information and coordinate information about recognition target objects outputted by the first object detection unit 13 and the score information and coordinate information about recognition target objects outputted by the second object detection unit 14 by the respective weights computed by the weight computation unit 12, and adds the results together to output the average value (step S15). Next, the loss computation unit 17 checks the difference between the obtained average value and the ground truth labels, and computes the loss (step S16). Thereafter, the parameter correction unit 16 corrects the weight computation parameters in the weight computation unit 12 to reduce the value of the loss (step S17).
The object detection device 10 repeats the above steps S11 to S17 while a predetermined condition holds true, and then ends the process. Note that the “predetermined condition” is a condition related to the number of repetitions, the degree of change in the value of the loss, or the like, and any method widely adopted as a learning procedure for deep learning can be used.
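Steps S11 to S17 may be summarized by the following sketch, which reuses the hypothetical components sketched above (WeightComputation, merge_outputs, detection_loss) and assumes that detector_1, detector_2 and training_data are, respectively, two frozen pre-trained detectors returning per-anchor outputs and a dataset of labeled images. Only the parameters of the weight computation network are updated.

```python
import torch

# Sketch of steps S11 to S17, reusing the hypothetical WeightComputation,
# merge_outputs and detection_loss sketches above. detector_1, detector_2 and
# training_data are assumed: two frozen, pre-trained detectors returning
# per-anchor (Na, k) scores and (Na, 4) coordinates, and labeled image data.
weight_net = WeightComputation(num_detectors=2)
optimizer = torch.optim.SGD(weight_net.parameters(), lr=1e-3)

for image, gt_scores, gt_coords in training_data:             # S11
    with torch.no_grad():                                      # detectors stay fixed
        s1, c1 = detector_1(image)                             # S12
        s2, c2 = detector_2(image)                             # S13
    w = weight_net(image)[0]                                   # S14: (alpha, 1 - alpha)
    merged_s, merged_c = merge_outputs([s1, s2], [c1, c2], w)  # S15
    loss = detection_loss(merged_s, merged_c, gt_scores, gt_coords)  # S16
    optimizer.zero_grad()
    loss.backward()                                            # S17: gradients reach only
    optimizer.step()                                           #      the weight parameters
```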
As described above, according to the object detection device 10 of the first example embodiment, the weight computation unit 12 predicts what each object detection unit is good or poor at with respect to an input image to optimize the weights, multiplies the weights by the output from each object detection unit, and averages the results. Consequently, a final determination can be made with high accuracy compared to a standalone object detection unit. For example, in the case where the first object detection unit 13 is good at detecting a pedestrian walking alone and the second object detection unit 14 is good at detecting pedestrians walking in a group, if a person walking alone happens to appear in an input image, the weight computation unit 12 assigns a larger weight to the first object detection unit 13. Additionally, the parameter correction unit 16 corrects the parameters of the weight computation unit 12 such that the weight computation unit 12 computes a large weight for the object detection unit that is good at recognizing the image data for learning.
(Functional Configuration for Inference)
Next, a functional configuration of an object detection device for inference will be described.
As illustrated in
The maximum value selection unit 19 performs an NMS process on the (Na×k)-dimensional score information outputted by the product-sum unit 15 to identify the type of a recognition target object, specifies the position from the coordinate information corresponding to the anchor, and outputs an object detection result. The object detection result includes the type and position of each recognition target object. With this arrangement, it is possible to obtain an object detection result when the outputs from the first object detection unit 13 and the second object detection unit 14 are optimally merged to make an overall determination.
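A minimal sketch of the maximum value selection (NMS) step is shown below, assuming PyTorch tensors and torchvision's nms operator. The score threshold and the simple additive decoding of coordinate offsets into boxes are illustrative assumptions; an actual implementation would use the decoding scheme of the underlying detector.

```python
import torch
from torchvision.ops import nms

def select_detections(merged_scores, anchor_boxes, merged_offsets,
                      score_thr=0.5, iou_thr=0.5):
    """Sketch of the maximum value selection unit: merged_scores is (Na, k),
    anchor_boxes and merged_offsets are (Na, 4). The additive offset decoding
    and the thresholds are illustrative assumptions."""
    best_scores, best_classes = merged_scores.max(dim=1)    # best class per anchor
    keep = best_scores > score_thr
    boxes = anchor_boxes[keep] + merged_offsets[keep]       # crude box decoding
    scores = best_scores[keep]
    kept = nms(boxes, scores, iou_thr)                      # suppress overlapping boxes
    return boxes[kept], best_classes[keep][kept], scores[kept]
```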
Next, operations by the object detection device 10x for inference will be described.
First, image data for inference is inputted into the image input unit 11 (step S21). The first object detection unit 13 performs object detection using the image data, and outputs score information and coordinate information about recognition target objects in the images for each anchor and each recognition target object (step S22). Similarly, the second object detection unit 14 performs object detection using the image data, and outputs score information and coordinate information about recognition target objects in the images for each anchor and each recognition target object (step S23). Also, the weight computation unit 12 receives the image data and computes weights with respect to each of the outputs from the first object detection unit 13 and the second object detection unit 14 (step S24).
Next, the product-sum unit 15 multiplies the score information and coordinate information about recognition target objects outputted by the first object detection unit 13 and the score information and coordinate information about recognition target objects outputted by the second object detection unit 14 by the respective weights computed by the weight computation unit 12, and adds the results together to output the average value (step S25). Finally, the maximum value selection unit 19 performs the NMS processing on the average value, and outputs the type and position of the recognition target object as an object detection result (step S26).
(Modifications)
The following modifications can be applied to the first example embodiment described above.
(1) In the first example embodiment described above, learning is performed using score information and coordinate information outputted by each object detection unit. However, learning may also be performed using only score information, without using coordinate information.
(2) In the first example embodiment described above, the two object detection units of the first object detection unit 13 and the second object detection unit 14 are used. However, using three or more object detection units poses no problem in principle. In this case, it is sufficient if the dimensionality (number) of weights outputted by the weight computation unit 12 is equal to the number of object detection units.
(3) Any deep learning method for object detection may be used as the specific algorithm forming the first object detection unit 13 and the second object detection unit 14. Moreover, the weight computation unit 12 is not limited to deep learning for regression problems, and any function that can be learned by error backpropagation may be used. In other words, any error function that is partially differentiable with respect to the parameters of the function that computes the weights may be used.
(4) Additionally, while the first example embodiment described above is directed to the object detection device, it is not limited to the detection of objects, and it may also be configured as an event detection device that outputs event information and coordinate information about an event occurring in an image. An “event” refers to something like a behavior, movement, or gesture by a predetermined person or a natural phenomenon such as a mudslide, an avalanche, or a rise in the water level of a river, for example.
(5) Also, in the first example embodiment described above, while object detection units having the same model structure are used as the first object detection unit 13 and the second object detection unit 14, different models may also be used. In such a case, it is necessary to devise associations in the product-sum unit 15 between the anchors of both models corresponding to substantially the same positions. This is because the anchors of different models do not match exactly. As a practical implementation, each anchor set in the second object detection unit 14 may be associated with one of the anchors set in the first object detection unit 13, a weighted average may be calculated for each anchor set in the first object detection unit 13, and score information and coordinate information may be outputted for each anchor and each recognition target object set in the first object detection unit 13. The anchor associations may be determined by calculating image regions corresponding to anchors (rectangular regions where an object exists) and associating the anchors for which image regions appropriately overlap each other.
Next, a second example embodiment of the present invention will be described. Note that the object detection device 20 for learning and the object detection device 20x for inference described below are both achieved with the hardware configuration illustrated in
(Functional Configuration for Learning)
The per-anchor weight computation unit 21 computes weights with respect to the first object detection unit 13 and the second object detection unit 14 for each anchor set in image data inputted into the image input unit 11 on the basis of the image data, and outputs the computed weights to the product-sum unit 15. Here, whereas the weight computation unit 12 according to the first example embodiment sets a single weight for the image as a whole with respect to the output of each object detection unit, the per-anchor weight computation unit 21 according to the second example embodiment computes a weight for each anchor with respect to the output of each object detection unit, that is, for each partial region of the image. Provided that Na is the number of anchors set in the image data and Nf is the number of object detection units, the number of dimensions of the information indicating the weight outputted by the per-anchor weight computation unit 21 is Na×Nf dimensions. The per-anchor weight computation unit 21 can be configured by a deep neural network applicable to multidimensional regression problems or the like. Also, the per-anchor weight computation unit 21 may include a network having a structure that averages the weights corresponding to nearby anchors, such that nearby anchors for respective object detection units have weights that are as close to each other as possible.
The product-sum unit 15 computes the product-sums of the score information and the coordinate information outputted for each anchor and each recognition target object by each of the first object detection unit 13 and the second object detection unit 14 on the basis of the weights for each object detection unit and each anchor outputted by the per-anchor weight computation unit 21 while associating the same information with each other, and then calculates an average value. The number of dimensions of the average value is Na×(k+4) dimensions, which is the same as the first example embodiment.
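A minimal sketch of per-anchor merging under the above definitions is shown below, assuming that per_anchor_weights is an Na×Nf array in which the Nf weights of each row sum to 1; the function name is hypothetical.

```python
def merge_per_anchor(scores_list, coords_list, per_anchor_weights):
    """Sketch of per-anchor merging: per_anchor_weights has shape (Na, Nf) and
    each row sums to 1; scores_list[i] is (Na, k) and coords_list[i] is (Na, 4)
    for object detection unit i."""
    Nf = len(scores_list)
    merged_s = sum(per_anchor_weights[:, i:i + 1] * scores_list[i] for i in range(Nf))
    merged_c = sum(per_anchor_weights[:, i:i + 1] * coords_list[i] for i in range(Nf))
    return merged_s, merged_c
```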
The per-anchor parameter correction unit 22 corrects the weight computation parameters for each object detection unit and each anchor in the per-anchor weight computation unit 21 so as to reduce the loss computed by the loss computation unit 17. At this time, like the first example embodiment, the parameters of the networks in the first object detection unit 13 and the second object detection unit 14 are fixed, and the per-anchor parameter correction unit 22 only corrects the parameters of the per-anchor weight computation unit 21. The parameter correction quantities can be computed by ordinary error backpropagation.
During learning, the object detection device 20 according to the second example embodiment executes the processing basically similar to the learning processing according to the first example embodiment illustrated in
(Functional Configuration for Inference)
A configuration of an object detection device for inference according to the second example embodiment will be described.
During inference, the object detection device 20x according to the second example embodiment executes the processing basically similar to the inference processing according to the first example embodiment illustrated in
In the second example embodiment, weights are computed on the basis of the inputted image data by estimating the probability of the output from each object detection unit for each anchor, i.e., for each location, and the weights are used to calculate a weighted average of the outputs from the object detection units. Consequently, the outputs from a plurality of object detection units can be used to make a more accurate final determination. For example, in the case where the first object detection unit 13 is good at detecting a pedestrian walking alone and the second object detection unit 14 is good at detecting pedestrians walking in a group, if a person walking alone and persons walking in a group both appear in an inputted image, the per-anchor weight computation unit 21 outputs weights that give more importance to the output from the first object detection unit 13 for the anchors corresponding to the region near the person walking alone, and give more importance to the output from the second object detection unit 14 for the anchors corresponding to the region near the persons walking in a group. In this way, a more accurate final determination becomes possible. Furthermore, the per-anchor parameter correction unit 22 can correct the parameters for each partial region of the image such that the per-anchor weight computation unit 21 outputs weights that give more importance to the output from the object detection unit that is good at recognizing the image data for learning.
(Modifications)
The modifications (1) to (5) of the first example embodiment described above can also be applied to the second example embodiment. Furthermore, the following modification (6) can be applied to the second example embodiment.
(6) In the second example embodiment described above, the per-anchor weight computation unit 21 computes optimal weights for each anchor. However, if the object detection units have a separate binary classifier for each class, as in RetinaNet for example, the weights may be changed for each class rather than for each anchor. In this case, a per-class weight computation unit may be provided instead of the per-anchor weight computation unit 21, and a per-class parameter correction unit may be provided instead of the per-anchor parameter correction unit 22. Provided that Na is the number of anchors set in the image data and Nf is the number of object detection units, the number of dimensions of the weights outputted by the per-anchor weight computation unit 21 is Na×Nf; in contrast, provided that Nc is the number of classes, the number of dimensions of the weights outputted by the per-class weight computation unit is Nc×Nf. To learn the parameters of the per-class weight computation unit with the per-class parameter correction unit, it is sufficient to apply ordinary error backpropagation from the output layer side so as to minimize the loss. According to this configuration, in the case where the respective object detection units are good at detecting different classes, for example, it is possible to compute different optimal weights for each class.
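A minimal sketch of per-class merging under these assumptions is shown below; per_class_weights is an Nc×Nf array in which each class has its own weight vector over the Nf object detection units, and the function name is hypothetical.

```python
def merge_per_class(scores_list, per_class_weights):
    """Sketch of per-class merging: per_class_weights has shape (Nc, Nf), and
    scores_list[i] has shape (Na, Nc) for object detection unit i, so class c
    is merged with its own weight vector per_class_weights[c]."""
    Nf = len(scores_list)
    return sum(per_class_weights[:, i] * scores_list[i] for i in range(Nf))
```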
Next, a third example embodiment of the present invention will be described. The third example embodiment uses shooting environment information about the image data to compute weights for each object detection unit. Note that the object detection device 30 for learning and the object detection device 30x for inference described below are both achieved with the hardware configuration illustrated in
(Functional Configuration for Learning)
Shooting environment information is inputted into the prediction loss computation unit 32. The shooting environment information is information indicating the environment where the image data inputted into the image input unit 11 was shot. For example, the shooting environment information is information such as (a) an indication of the installation location (indoors or outdoors) of the camera used to acquire the image data, (b) the weather at the time (sunny, cloudy, rainy, or snowy), (c) the time (daytime or nighttime), and (d) the tilt angle of the camera (0-30 degrees, 30-60 degrees, or 60-90 degrees).
The weight computation/environment prediction unit 31 uses weight computation parameters to compute weights with respect to the first object detection unit 13 and the second object detection unit 14, and at the same time also uses parameters for predicting the shooting environment (hereinafter referred to as “shooting environment prediction parameters”) to predict the shooting environment of the inputted image data, and generate and output predicted environment information to the prediction loss computation unit 32. For example, if the four types of information (a) to (d) mentioned above are used as the shooting environment information, the weight computation/environment prediction unit 31 expresses an attribute value indicating the information of each type in one dimension, and outputs a four-dimensional value as the predicted environment information. The weight computation/environment prediction unit 31 uses some of the calculations in common when computing the weights and the predicted environment information. For example, in the case of computation using a deep neural network, the weight computation/environment prediction unit 31 uses the lower layers of the network in common, and only the upper layers are specialized for computing the weights and the predicted environment information. In other words, the weight computation/environment prediction unit 31 performs what is called multi-task learning. With this arrangement, the weight computation parameters and the environment prediction parameters have a portion shared in common.
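The multi-task structure described above may be sketched as follows, assuming a PyTorch implementation with a hypothetical class name. The lower (shared) layers are used in common, while separate upper heads output the merging weights and the four-dimensional predicted environment information corresponding to (a) to (d).

```python
import torch
import torch.nn as nn

class WeightAndEnvironmentNet(nn.Module):
    """Hypothetical sketch of the weight computation/environment prediction unit:
    shared lower layers feed two heads, one outputting the merging weights and
    one outputting the 4-dimensional predicted environment information (a)-(d)."""

    def __init__(self, num_detectors=2, num_env_attrs=4):
        super().__init__()
        self.shared = nn.Sequential(                     # lower layers used in common
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.weight_head = nn.Linear(32, num_detectors)  # specialized upper layers
        self.env_head = nn.Linear(32, num_env_attrs)

    def forward(self, image):
        h = self.shared(image)
        weights = torch.softmax(self.weight_head(h), dim=-1)
        predicted_env = self.env_head(h)
        return weights, predicted_env
```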
The prediction loss computation unit 32 calculates the difference between the shooting environment information and the predicted environment information computed by the weight computation/environment prediction unit 31, and outputs the difference to the parameter correction unit 16 as a prediction loss. The parameter correction unit 16 corrects the parameters of the network in the weight computation/environment prediction unit 31 so as to reduce the loss computed by the loss computation unit 17 and the prediction loss computed by the prediction loss computation unit 32.
In the third example embodiment, since a portion of the network is shared between the computation of the weights and the computation of the predicted environment information in the weight computation/environment prediction unit 31, models of similar shooting environments tend to have similar weights. As a result, an effect of making the learning in the weight computation/environment prediction unit 31 more consistent is obtained.
Note that in the third example embodiment described above, the weight computation/environment prediction unit 31 and the parameter correction unit 16 compute and correct weights that are uniform over the entire image, similarly to the first example embodiment. Instead, the weight computation/environment prediction unit 31 and the parameter correction unit 16 may be configured to compute weights for each anchor (each partial region), as in the second example embodiment.
Next, operations by the object detection device 30 for learning will be described.
In
(Functional Configuration for Inference)
Next, a configuration of an object detection device for inference according to the third example embodiment will be described.
During inference, the object detection device 30x according to the third example embodiment executes processing basically similar to the inference processing according to the first example embodiment illustrated in
(Modifications)
The modifications (1) to (5) of the first example embodiment described above can also be applied to the third example embodiment.
Next, a fourth example embodiment of the present invention will be described.
The object detection device 40 for learning is provided with a plurality of object detection units 41, a weight computation unit 42, a merging unit 43, a loss computation unit 44, and a parameter correction unit 45. Image data including ground truth labels is prepared as image data for learning. The plurality of object detection units 41 output a score indicating the probability that a predetermined object exists for each partial region set with respect to the inputted image data. On the basis of the image data, the weight computation unit 42 uses weight computation parameters to compute weights to be used when the scores outputted by the plurality of object detection units 41 are merged. The merging unit 43 merges the scores outputted by the plurality of object detection units 41 for each partial region according to the weights computed by the weight computation unit 42. The loss computation unit 44 computes the difference between the ground truth labels of the image data and the scores merged by the merging unit 43 as a loss. Then, the parameter correction unit 45 corrects the weight computation parameters so as to reduce the computed loss.
A part or all of the example embodiments described above may also be described as the following supplementary notes, but not limited thereto.
(Supplementary note 1)
An object detection device comprising:
a plurality of object detection units configured to output a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data;
a weight computation unit configured to use weight computation parameters to compute a weight for each of the plurality of object detection units on a basis of the image data, the weight being used when the scores outputted by the plurality of object detection units are merged;
a merging unit configured to merge the scores outputted by the plurality of object detection units for each partial region according to the weight computed by the weight computation unit;
a loss computation unit configured to compute a difference between a ground truth label of the image data and the score merged by the merging unit as a loss; and
a parameter correction unit configured to correct the weight computation parameters so as to reduce the loss.
(Supplementary note 2)
The object detection device according to supplementary note 1,
wherein the weight computation unit is configured to compute a single weight with respect to the image data as a whole, and
wherein the merging unit is configured to merge the scores outputted by the plurality of object detection units according to the single weight.
(Supplementary note 3)
The object detection device according to supplementary note 1,
wherein the weight computation unit is configured to compute the weight for each partial region of the image data, and
wherein the merging unit is configured to merge the scores outputted by the plurality of object detection units according to the weight computed for each partial region.
(Supplementary note 4)
The object detection device according to supplementary note 1,
wherein the weight computation unit is configured to compute the weight for each class indicating the object, and
wherein the merging unit is configured to merge the scores outputted by the plurality of object detection units according to the weight computed for each class.
(Supplementary note 5)
The object detection device according to any one of supplementary notes 1 to 4, wherein the merging unit is configured to multiply the scores outputted by the plurality of object detection units by the weight for each object detection unit computed by the weight computation unit, add the multiplied scores together, and calculate an average value.
(Supplementary note 6)
The object detection device according to any one of supplementary notes 1 to 4,
wherein the plurality of object detection units are each configured to output coordinate information about a rectangular region where the object exists for each partial region,
wherein the merging unit is configured to merge the coordinate information about the rectangular region where the object exists according to the weight computed by the weight computation unit, and
wherein the loss computation unit is configured to compute a loss including a difference between the ground truth label and the coordinate information merged by the merging unit.
(Supplementary note 7)
The object detection device according to supplementary note 6, wherein the merging unit is configured to multiply the coordinate information outputted by the plurality of object detection units by the weight for each object detection unit computed by the weight computation unit, add the multiplied coordinate information together, and calculate an average value.
(Supplementary note 8)
The object detection device according to any one of supplementary notes 1 to 7,
wherein the weight computation unit is configured to use shooting environment prediction parameters to predict a shooting environment of the image data, and output predicted environment information,
wherein the object detection device further comprises a prediction loss computation unit configured to compute a shooting environment prediction loss on a basis of shooting environment information about the image data prepared in advance and the predicted environment information, and
wherein the parameter correction unit is configured to correct the shooting environment prediction parameters so as to reduce the prediction loss.
(Supplementary note 9)
The object detection device according to supplementary note 8, wherein the weight computation unit is provided with a first network including the weight computation parameters and a second network including the shooting environment prediction parameters, and
wherein the first network and the second network have a portion shared in common.
(Supplementary note 10)
An object detection device learning method comprising:
outputting, from a plurality of object detection units, a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data;
using weight computation parameters to compute a weight for each of the plurality of object detection units on a basis of the image data, the weight being used when the scores outputted by the plurality of object detection units are merged;
merging the scores outputted by the plurality of object detection units for each partial region according to the computed weight;
computing a difference between a ground truth label of the image data and the merged score as a loss; and
correcting the weight computation parameters so as to reduce the loss.
(Supplementary note 11)
A recording medium storing a program causing a computer to execute an object detection device learning process comprising:
outputting, from a plurality of object detection units, a score indicating a probability that a predetermined object exists for each partial region set with respect to inputted image data;
using weight computation parameters to compute a weight for each of the plurality of object detection units on a basis of the image data, the weight being used when the scores outputted by the plurality of object detection units are merged;
merging the scores outputted by the plurality of object detection units for each partial region according to the computed weight;
computing a difference between a ground truth label of the image data and the merged score as a loss; and
correcting the weight computation parameters so as to reduce the loss.
The foregoing describes the present invention with reference to example embodiments and examples, but the present invention is not limited to the above example embodiments and examples. The configuration and details of the present invention may be subjected to various modifications that would occur to persons skilled in the art within the scope of the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2019/027481 | 7/11/2019 | WO |