The present invention relates to a learning device, an image processing device, a learning method, an image processing method, a learning program, and an image processing program.
In order to understand the damage caused by disasters, such as floods, forest fires, volcanic eruptions, earthquakes, tsunamis, droughts, and the like, or urban development, a change detection technique for detecting an area in which the condition of the ground surface has changed, on the basis of an image photographed from a high place, such as a satellite image, has been developed.
Examples of the above change detection technique are disclosed in Non Patent Literature (NPL) 1 and NPL 2. NPL 1 discloses a technique for individually correcting a photographed image as a preprocess. In addition, NPL 2 discloses a technique for masking (hiding), among detected areas in which the condition of the ground surface has changed, an area in which a change other than a change of a detection target has occurred.
In addition, NPL 3 discloses a method of computing a component of the sunlight spectrum from the solar zenith angle.
In addition, as networks usable for machine learning, a convolutional neural network (CNN) is disclosed in NPL 4, a sparse autoencoder (SAE) is disclosed in NPL 5, and a deep belief network (DBN) is disclosed in NPL 6.
NPL 1: R. Richter, and A. Muller, “De-shadowing of satellite/airborne imagery,” Intl. Journal of Remote Sens., Vol. 26, No. 15, Taylor & Francis, pp. 3137-3148, August 2005.
NPL 2: L. Bruzzone and F. Bovolo, “A Novel Framework for the Design of Change-Detection Systems for Very-High-Resolution Remote Sensing Images,” Proc. IEEE, Vol. 101, No. 3, pp. 609-630, March 2013.
NPL 3: Richard E. Bird and Carol Riordan, “Simple Solar Spectral Model for Direct and Diffuse Irradiance on Horizontal and Tilted Planes at the Earth's Surface for Cloudless Atmospheres,” Journal of Climate and Applied Meteorology, American Meteorological Society, pp. 87-97, January 1986.
NPL 4: A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Proc. Adv. Neural Inf. Process. Syst., pp. 1097-1105, 2012.
NPL 5: F. Zhang, B. Du, and L. Zhang, “Saliency-Guided Unsupervised Feature Learning for Scene Classification,” IEEE Trans. Geosci. Remote Sens., Vol. 53, No. 4, pp. 2175-2184, April 2015.
NPL 6: G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput., Vol. 18, No. 7, pp. 1527-1554, 2006.
However, the above change detection techniques have a problem that changes of non-detection targets that are not related to damage or urban development, for example, changes due to sunshine conditions such as the presence/absence of shadow, changes of atmospheric conditions such as clouds and fog, and seasonal changes of plants, are detected together with a change of a detection target.
The above problem will be described with reference to
As shown in the upper of
As shown in the upper of
Of the above differences, the only difference of the detection target is “there are buildings”. However, if no settings for detecting changes are made, the change detection means 99 reflects all the differences between the image It-1 and the image It in the change map.
In the change map shown in
As described above, the change of the position of the shadow, the seasonal change of plants, and the change of clouds are unnecessary changes that should not be reflected in the change map. The lower of
In the ideal change map shown in the lower of
As described above, a technique for detecting, from a plurality of images with different photographing times, a change only of a detection target without detecting changes of non-detection targets, such as changes due to sunshine conditions, changes of atmospheric conditions, seasonal changes of forest, and the like is desired. NPL 1 to NPL 6 do not disclose techniques capable of detecting a change only of a detection target.
In view of the above, a purpose of the present invention is to provide a learning device, an image processing device, a learning method, an image processing method, a learning program, and an image processing program that solve the above problem and are capable of detecting, among changes between a plurality of images with different photographing times, a change only of a detection target.
A learning device according to the present invention includes a learning means that, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, causes a detector to learn a process for detecting a change other than the periodic change among the changes between the plurality of images.
An image processing device according to the present invention includes a first generation means that generates change information indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images, and reliability information indicating, for each pixel, reliability of each of the plurality of feature values, an extraction means that extracts, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value and extracts, from the generated change information, a feature value equal to or greater than the predetermined value, and a second generation means that generates learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
An image processing device according to the present invention includes a parameter computation means that computes, on the basis of data indicating a photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images, a feature-value computation means that computes, using the computed parameter and the plurality of images, a feature value indicating the degree of a change in which the periodic change is removed from changes between the plurality of images, and a reliability computation means that computes reliability of the computed feature value.
A learning method according to the present invention includes causing, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, a detector to learn a process for detecting a change other than the periodic change among the changes between the plurality of images.
An image processing method according to the present invention includes generating change information indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images, and reliability information indicating, for each pixel, reliability of each of the plurality of feature values, extracting, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value, extracting, from the generated change information, a feature value equal to or greater than the predetermined value, and generating learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
An image processing method according to the present invention includes computing, on the basis of data indicating a photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images, computing, using the computed parameter and the plurality of images, a feature value indicating the degree of a change in which the periodic change is removed from changes between the plurality of images, and computing reliability of the computed feature value.
A learning program according to the present invention causes a computer to execute a learning process of causing, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, a detector to learn a process for detecting a change other than the periodic change among the changes between the plurality of images.
An image processing program according to the present invention causes a computer to execute a first generation process of generating change information indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images, and reliability information indicating, for each pixel, reliability of each of the plurality of feature values, a first extraction process of extracting, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value, a second extraction process of extracting, from the generated change information, a feature value equal to or greater than the predetermined value, and a second generation process of generating learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
An image processing program according to the present invention causes a computer to execute a first computation process of computing, on the basis of data indicating a photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images, a second computation process of computing, using the computed parameter and the plurality of images, a feature value indicating the degree of a change in which the periodic change is removed from changes between the plurality of images, and a third computation process of computing reliability of the computed feature value.
According to the present invention, it is possible to detect, among changes between a plurality of images with different photographing times, a change only of a detection target.
First, the reason why it is difficult for the technique disclosed in each of NPL 1 and NPL 2 to detect a change only of a detection target will be described with reference to the drawings.
The first correction means 911 has a function of correcting a shadow in an input observation image. The second correction means 912 has a function of correcting a shadow in an input reference image. The first correction means 911 and the second correction means 912 each correct a shadow in such a manner as to satisfy a hypothetical condition of “the reflectance of the shadow is 0, and there is no water area”.
The feature-value computation means 913 has a function of computing a feature value of a change. The feature value indicates the degree of a change between an observation image with corrected shadow and a reference image with corrected shadow. The change-pixel detection means 914 has a function of detecting a change pixel on the basis of the computed feature value of a change to generate a change map on the basis of the detected change pixel.
As shown in
In addition, as shown in
The feature-value computation means 913 computes a feature value of a change between the image It-1 with the correction error and the image It with the correction error. The change-pixel detection means 914 detects a change pixel on the basis of the computed feature value of the change to generate a change map on the basis of the detected change pixel.
In the change map generated through a change detection process by the image processing device 910, unnecessary changes, such as the change of the position of the shadow, the seasonal change of the plant, and the change of the cloud, caused by the correction error are reflected as shown in
As described above, the image processing device 910 has a problem that only limited conditions can be satisfied without causing a correction error in a correction process. Furthermore, some conditions cannot be satisfied in a correction process at all; consequently, each correction means of the image processing device 910 cannot always correct shadows properly, which is another problem.
The feature-value computation means 921 has a function of computing a feature value of a change between an observation image and a reference image. The change-pixel detection means 922 has a function of detecting a change pixel on the basis of the computed feature value of the change to generate a first change map on the basis of the detected change pixel.
The unnecessary-change-area detection means 923 has a function of detecting, as an unnecessary change area, an area in which a change of non-detection targets has occurred between the observation image and the reference image. The unnecessary-change-area detection means 923 generates an unnecessary-change map representing the detected unnecessary change area. The unnecessary-change removal means 924 has a function of detecting the difference between the first change map and the unnecessary-change map to generate a second change map.
The feature-value computation means 921 computes a feature value of a change between the image It-1 and the image It. The change-pixel detection means 922 detects a change pixel on the basis of the computed feature value of the change to generate a first change map on the basis of the detected change pixel.
As shown in
In addition, as shown in
The unnecessary-change removal means 924 performs an unnecessary change removal process to generate a second change map by subtracting the unnecessary-change map from the first change map.
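For illustration, the subtraction described above can be sketched as follows. This is a minimal sketch that assumes both maps are binary NumPy arrays in which 1 marks a changed pixel; it is not code taken from NPL 2 itself.

```python
import numpy as np

def remove_unnecessary_changes(first_change_map, unnecessary_change_map):
    """Subtract the unnecessary-change map from the first change map.

    Both maps are assumed (hypothetically) to be binary arrays in which 1
    marks a changed pixel; the second change map keeps only the changed
    pixels that are NOT flagged as unnecessary.
    """
    first = np.asarray(first_change_map, dtype=bool)
    unnecessary = np.asarray(unnecessary_change_map, dtype=bool)
    return (first & ~unnecessary).astype(np.uint8)
```

Because the operation is a plain per-pixel mask subtraction, any pixel flagged in the unnecessary-change map is removed unconditionally, which is exactly why a genuine change of a shadow cannot survive this step.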
Theoretically, a change only of the detection target is to be reflected in the second change map generated after the unnecessary change removal process by the unnecessary-change removal means 924. However, as shown in
This is because an algorithm that simply removes all the areas in which a change of a shadow occurs is applied to the image processing device 920. That is, the image processing device 920 has a problem that a change of a shadow cannot be detected.
As described above, it is difficult for the technique disclosed in each of NPL 1 and NPL 2 to detect a change only of a detection target. For the above reason, the present invention is to provide a learning device and an image processing device that cause a detector to detect a change only of a detection target with high accuracy and also to detect a change of a shadow.
[Description of Configuration]
Hereinafter, exemplary embodiments of the present invention will be described with reference to the drawings.
An image processing device 100 according to the present exemplary embodiment detects a change between images photographed at two different times and a change between metadata of the images. After detecting the change, the image processing device 100 generates a change map and a reliability map indicating the degree of reliability of each pixel.
Then, the image processing device 100 extracts, on the basis of the generated reliability map, an area corresponding to the periphery of a reliable pixel from each of the two images and the change map and combines the extracted areas with the metadata to generate a data set. The generated data set is used for learning to detect a change only of a detection target.
As shown in
The satellite image DB 110 stores a reference image photographed by an artificial satellite and metadata of the reference image. The satellite image DB 110 outputs an image photographed at a reference time and the metadata of the image photographed at the reference time.
The earth observation means 120 has a function of photographing the condition of the ground surface of an observation target. The earth observation means 120 outputs an image photographed at an arbitrary time and the metadata of the image photographed at the arbitrary time.
The metadata of an image indicates the photographing condition when the image is photographed. The metadata of the image includes, for example, data indicating the position of the artificial satellite at the photographing time and data indicating the direction of the antenna used for photographing.
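As a purely hypothetical illustration, such metadata might be held as a record like the following; every field name and value here is an assumption made for the sake of the example, not a format defined by any actual satellite product.

```python
# A hypothetical metadata record for one photographed image. The field
# names are illustrative only.
metadata_t = {
    "acquisition_time": "2017-06-01T10:30:00Z",  # date and time of photographing
    "satellite_position": (698.0, 35.1, 139.7),  # e.g. altitude [km], latitude, longitude
    "antenna_direction": (10.0, 170.0),          # e.g. off-nadir and azimuth angles [deg]
    "solar_zenith_angle": 35.2,                  # [deg], used later as the angle theta
}
```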
The change detection means 130 has a function of generating a change map and a reliability map on the basis of the image photographed at the reference time, the metadata of the image photographed at the reference time, the image photographed at the arbitrary time, and the metadata of the image photographed at the arbitrary time.
For example, the change detection means 130 limits, using model parameters, the range of the spectrum that changes in accordance with conditions causing unnecessary changes. The model parameters, which will be described later, are computed from the metadata indicating the solar zenith angle, the date and time, and the like.
The unnecessary changes include changes due to sunshine conditions, changes of atmospheric conditions, seasonal changes of forests, and the like as described above. That is, it can be said that the unnecessary changes in the present exemplary embodiment are periodic changes in accordance with the photographing environment.
By limiting the range of the spectrum, the change detection means 130 computes a feature value of a change indicating the degree of a change with no unnecessary changes. Then, the change detection means 130 detects a change pixel on the basis of the computed feature value of the change. The change detection means 130 classifies the detected change pixel and also computes the reliability of the detection for each pixel.
The metadata extraction means 140 has a function of extracting metadata required for a data set from the metadata of the image photographed at the reference time and the metadata of the image photographed at the arbitrary time.
The data-set generation means 150 has a function of generating a data set to be used for learning, on the basis of the generated change map and reliability map, the image photographed at the reference time, and the image photographed at the arbitrary time.
The model-parameter computation means 131 has a function of computing a model parameter at the arbitrary time on the basis of the metadata of the image photographed at the arbitrary time and computing a model parameter at the reference time on the basis of the metadata of the image photographed at the reference time.
The model parameters in the present exemplary embodiment are environment data indicating the state of a periodic change at a photographing time and data about an object. That is, the model-parameter computation means 131 computes a model parameter representing the state of a periodic change on the basis of the metadata of an image.
The feature-value computation means 132 has a function of computing a feature value of the change with no unnecessary changes, on the basis of the image photographed at the reference time, the image photographed at the arbitrary time, and the computed model parameters.
The change-pixel detection means 133 has a function of generating a change map on the basis of the computed feature value of the change with no unnecessary changes. The reliability computation means 134 has a function of generating a reliability map on the basis of the computed feature value of the change with no unnecessary changes.
In the following, an example in which the image processing device 100 generates a data set will be described with reference to the drawings.
As shown in
The model-parameter computation means 131 computes a model parameter at the time (t-1) on the basis of the metadata of the image It-1. The model-parameter computation means 131 further computes a model parameter at the time t on the basis of the metadata of the image It.
An example of computing model parameters by the model-parameter computation means 131 is described below. The model-parameter computation means 131 uses, for example, the solar zenith angle θ indicated by the metadata and the latitude and longitude of the point indicated by the image as input to compute, in accordance with a radiation transmission model of the atmosphere, a direct light component of the sunlight spectrum (hereinafter, also referred to as a direct component) sd and a scattered light component (hereinafter, also referred to as a scattered component) ss as follows.
[sd,t, ss,t] = fBird(θt), [sd,t-1, ss,t-1] = fBird(θt-1)   Expression (1)
The subscript t in Expression (1) indicates that the data is at the time t. Similarly, the subscript t-1 indicates that the data is at the time (t-1). The function fBird in Expression (1) is the function disclosed in NPL 3. In addition, the direct component sd and the scattered component ss are vectors.
The computed direct component sd and scattered component ss of the sunlight spectrum represent the state of sunlight at the photographing time. In addition, the direct component sd and scattered component ss of the sunlight spectrum suggest how the image changes due to shadows.
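The full spectral model of NPL 3 treats Rayleigh scattering, aerosols, ozone, and water vapour separately. As a very crude, purely illustrative stand-in for fBird, the dependence of the direct and scattered components on the solar zenith angle θ can be sketched with a single optical depth per band; this simplification is an assumption of this sketch, not the model of NPL 3.

```python
import numpy as np

def simplified_sun_spectrum(theta_deg, extraterrestrial, optical_depth):
    """Crude, illustrative stand-in for the fBird function of NPL 3.

    theta_deg        : solar zenith angle in degrees
    extraterrestrial : extraterrestrial solar irradiance per band (vector)
    optical_depth    : total atmospheric optical depth per band (vector)

    Returns a direct component s_d and a rough scattered component s_s.
    A single optical depth per band is an illustrative assumption only.
    """
    theta = np.radians(theta_deg)
    air_mass = 1.0 / np.cos(theta)  # plane-parallel atmosphere approximation
    transmittance = np.exp(-np.asarray(optical_depth) * air_mass)
    s_d = np.asarray(extraterrestrial) * transmittance * np.cos(theta)
    s_s = np.asarray(extraterrestrial) * (1.0 - transmittance) * 0.5  # half scattered downward
    return s_d, s_s
```

Even in this simplified form, a larger zenith angle lengthens the atmospheric path and attenuates the direct component, which is the qualitative behaviour the model parameters are meant to capture.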
The model-parameter computation means 131 may further compute the solar zenith angle θ from, for example, the date and time, indicated by the metadata, when the image was photographed and from the latitude and longitude of the point indicated by the image. The model-parameter computation means 131 may further compute the solar azimuth angle together with the solar zenith angle θ.
The model-parameter computation means 131 may further compute, for example, the zenith angle of the artificial satellite having photographed the image. The model-parameter computation means 131 may further compute the azimuth angle of the artificial satellite together with the zenith angle of the artificial satellite.
The model-parameter computation means 131 may further use, for example, the date and time, indicated by the metadata, when the image was photographed and the latitude and longitude of the point indicated by the image as input to compute, in accordance with a model of a seasonal change of plants, the spectrum of vegetation in the season when the image was photographed. The model-parameter computation means 131 may further compute, together with the spectrum of vegetation, a normalized difference vegetation index (NDVI), which is a kind of vegetation index, and the CO2 absorption amount.
Each piece of computed information represents the state of vegetation at the photographing time. In addition, each piece of computed information suggests how the forest changes seasonally. The model-parameter computation means 131 may compute each piece of information for each pixel together with a map showing the plant community.
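The NDVI mentioned above has a standard, well-known definition that can be stated directly; the code below is that textbook formula applied per pixel, not code from the invention itself.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: NDVI = (NIR - Red) / (NIR + Red).

    nir and red are per-pixel reflectance arrays; eps avoids division by
    zero over dark pixels. This is the standard definition of the index,
    computed element-wise for each pixel.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Healthy vegetation reflects strongly in the near-infrared band and absorbs red light, so its NDVI is close to 1, while bare soil and water are near 0 or negative; seasonal movement of this value per pixel is one way a seasonal change of plants manifests itself.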
The model-parameter computation means 131 may further use, for example, the solar azimuth angle and observation azimuth angle indicated by the metadata as input to compute the solar azimuth angle relative to the image in accordance with a geometric model.
The solar azimuth angle relative to the image is information indicating the direction in which a shadow is formed at the photographing time. The model-parameter computation means 131 may use the solar azimuth angle relative to the image and the solar zenith angle as information suggesting the direction in which a shadow is formed and the length of the shadow.
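Computing the solar zenith angle θ from the date and time and the latitude and longitude, as mentioned above, can be sketched with a textbook declination and hour-angle approximation. A production system would use a full ephemeris; the cosine declination model below is an illustrative assumption only.

```python
import math

def solar_zenith_angle(day_of_year, solar_hour, latitude_deg):
    """Textbook approximation of the solar zenith angle in degrees.

    day_of_year  : 1..365
    solar_hour   : local solar time in hours (12.0 = solar noon)
    latitude_deg : site latitude in degrees

    Uses a simple cosine model of the solar declination; treat this as an
    illustrative sketch, not an ephemeris-grade computation.
    """
    decl = math.radians(-23.44) * math.cos(2.0 * math.pi * (day_of_year + 10) / 365.0)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(latitude_deg)
    cos_zenith = (math.sin(lat) * math.sin(decl)
                  + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.acos(cos_zenith))
```

For example, near the March equinox at the equator the sun is almost overhead at solar noon (θ close to 0°), and θ grows toward 45° three hours earlier, which matches the intuition that shadows lengthen away from noon.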
The upper of
Alternatively, the model-parameter computation means 131 may directly compute a vector representing the intensity of each wavelength instead of the vector representing the state of a component of the sunlight spectrum.
The middle of
The lower of
As described above, the model-parameter computation means 131 computes, on the basis of the data indicating the photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images. The model-parameter computation means 131 inputs the computed model parameter at the time (t-1) and model parameter at the time t to the feature-value computation means 132.
The feature-value computation means 132 computes, on the basis of the image It-1, the image It, and the computed model parameter at the time (t-1) and model parameter at the time t, a feature value of a change with no unnecessary changes.
The feature-value computation means 132 computes, for each pixel, a feature value of a change with no unnecessary changes on the basis of, for example, a physical model. Then, the feature-value computation means 132 generates a change map indicating the feature value of the change with no unnecessary changes.
Regarding the areas of the change map shown in
The white dots encircled by the broken-line ellipse in the change map shown in
In the following, a computation example of a feature value of a change with no unnecessary changes will be described.
As shown in
The shortest distance from the origin to the change vector c is computed by the Expression shown in
As described above, the feature-value computation means 132 is capable of computing, using the model parameters computed by the model-parameter computation means 131 and a plurality of images, a feature value indicating the degree of a change in which a periodic change (for example, a change of the position of a shadow) is removed from changes between the plurality of images. Note that, the feature-value computation means 132 may compute a feature value of a change with no unnecessary changes by a method other than the method shown in
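One plausible way to realize such a computation, under the assumption that the model parameters (for example, the direct component sd and the scattered component ss) span the spectral directions in which an unnecessary change can move a pixel, is a least-squares projection: the part of the per-pixel change vector explainable by those directions is discarded, and the residual norm serves as the feature value. This is one reading of the physical model described in the text, not the exact expression of the invention.

```python
import numpy as np

def change_feature(c, unnecessary_directions):
    """Feature value of a change with no unnecessary changes (sketch).

    c                      : per-pixel change vector between the two images
    unnecessary_directions : matrix whose columns span the spectral
                             directions of unnecessary (periodic) changes,
                             e.g. the components s_d and s_s

    The residual of the least-squares projection of c onto that subspace
    is returned: any part of c explainable by the model parameters is
    removed before measuring the magnitude of the change.
    """
    A = np.atleast_2d(np.asarray(unnecessary_directions, dtype=float))
    if A.shape[0] != len(c):
        A = A.T  # ensure the directions are columns
    coeff, *_ = np.linalg.lstsq(A, np.asarray(c, dtype=float), rcond=None)
    residual = np.asarray(c, dtype=float) - A @ coeff
    return float(np.linalg.norm(residual))
```

With this formulation, a pixel darkened purely by a moving shadow yields a change vector lying inside the modeled subspace and thus a feature value near zero, while a genuinely new structure produces a residual that survives the projection.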
The change-pixel detection means 133 generates a change map by reflecting only a feature value of a change equal to or greater than a predetermined threshold among input feature values of changes. For example, in the change map shown in
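The thresholding performed by the change-pixel detection means 133 can be sketched as a one-line mask; the threshold value itself is a tunable assumption.

```python
import numpy as np

def make_change_map(feature_values, threshold):
    """Build a binary change map by keeping only feature values at or
    above a predetermined threshold (the value of which is tunable)."""
    features = np.asarray(feature_values, dtype=float)
    return (features >= threshold).astype(np.uint8)
```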
The reliability computation means 134 generates a reliability map by reflecting only the feature value of the change equal to or greater than the predetermined threshold among the input feature values of the changes. The reliability computation means 134 may further generate a reliability map by reflecting only a feature value of a change in which dispersion is equal to or less than a predetermined threshold among the input feature values of the changes. That is, the reliability computation means 134 computes the reliability of the feature value computed by the feature-value computation means 132.
In the reliability map shown in
In the example shown in
Note that, when the extracted value indicates “with no change”, the presence/absence of a change is represented by a black rectangle. Alternatively, instead of the presence/absence of a change, the value of the change map itself may be included in the data.
The data-set generation means 150 further extracts the area encircled by the broken-line rectangle in the image It-1 in association with the area encircled by the broken-line rectangle in the image It. Note that, the data-set generation means 150 may extract the center pixel of the rectangle instead of the area encircled by the rectangle.
In addition, the metadata extraction means 140 extracts the metadata about the extracted area of the image It-1 and the metadata about the extracted area of the image It. The data-set generation means 150 generates each data in the data set shown in
The change detection means 130 in the present exemplary embodiment generates change information (for example, a change map) indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images and reliability information (for example, a reliability map) indicating, for each pixel, reliability of each of the plurality of feature values.
Then, the data-set generation means 150 in the present exemplary embodiment extracts, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value and extracts, from the generated change information, a feature value equal to or greater than the predetermined value. The data-set generation means 150 further generates learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
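The extraction and association performed by the data-set generation means 150 can be sketched as follows. The patch size and the record layout are illustrative assumptions; the text does not fix either.

```python
import numpy as np

def generate_data_set(image_ref, image_obs, change_map, reliability_map,
                      metadata_ref, metadata_obs, reliability_threshold, half=1):
    """Sketch of data-set generation around reliable pixels.

    For each pixel whose reliability is at or above the threshold, a small
    (2*half+1)-square area centered on it is cut out of both images, and is
    combined with the presence/absence of a change at that pixel and with
    the two metadata records. Patches crossing the image border are skipped.
    """
    ref = np.asarray(image_ref, dtype=float)
    obs = np.asarray(image_obs, dtype=float)
    change = np.asarray(change_map)
    records = []
    rows, cols = np.where(np.asarray(reliability_map) >= reliability_threshold)
    for r, c in zip(rows, cols):
        if r - half < 0 or c - half < 0 or r + half >= ref.shape[0] or c + half >= ref.shape[1]:
            continue  # skip areas that would extend beyond the image
        records.append({
            "area_ref": ref[r - half:r + half + 1, c - half:c + half + 1],
            "area_obs": obs[r - half:r + half + 1, c - half:c + half + 1],
            "changed": bool(change[r, c]),
            "metadata_ref": metadata_ref,
            "metadata_obs": metadata_obs,
        })
    return records
```

Each record thus pairs corresponding areas of the two images with the change label and the photographing conditions, which is the association the learning data requires.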
[Description of Operation]
Hereinafter, the operation of generating a change map and a reliability map by the image processing device 100 according to the present exemplary embodiment will be described with reference to
First, the earth observation means 120 inputs an image photographed at an arbitrary time and the metadata of the image photographed at the arbitrary time to the change detection means 130 (step S101).
Then, the satellite image DB 110 inputs an image photographed at a reference time and the metadata of the image photographed at the reference time to the change detection means 130 (step S102).
Then, the model-parameter computation means 131 computes a model parameter at the arbitrary time on the basis of the metadata of the image photographed at the arbitrary time. The model-parameter computation means 131 further computes a model parameter at the reference time on the basis of the metadata of the image photographed at the reference time (step S103). The model-parameter computation means 131 inputs the computed model parameters to the feature-value computation means 132.
Then, the feature-value computation means 132 computes a feature value of a change with no unnecessary changes using the image photographed at the reference time, the image photographed at the arbitrary time, and the model parameters computed in step S103 (step S104). The feature-value computation means 132 inputs the computed feature value to the change-pixel detection means 133 and the reliability computation means 134.
Then, the change-pixel detection means 133 generates a change map representing the presence/absence of a change for each pixel using the computed feature value of the change with no unnecessary changes (step S105).
Then, the reliability computation means 134 generates a reliability map representing the reliability of the change map generated in step S105 for each pixel using the computed feature value of the change with no unnecessary changes (step S106). After generating the reliability map, the image processing device 100 terminates the change map and reliability map generation process.
Next, the operation of generating a data set by the image processing device 100 according to the present exemplary embodiment will be described with reference to
First, the earth observation means 120 inputs the image photographed at the arbitrary time to the data-set generation means 150. The earth observation means 120 further inputs the metadata of the image photographed at the arbitrary time to the metadata extraction means 140 (step S111).
Then, the satellite image DB 110 inputs the image photographed at the reference time to the data-set generation means 150. The satellite image DB 110 further inputs the metadata of the image photographed at the reference time to the metadata extraction means 140 (step S112).
Then, the change detection means 130 inputs the generated change map and reliability map to the data-set generation means 150 (step S113).
Then, the data-set generation means 150 extracts an area corresponding to the periphery of each reliable pixel in the reliability map from each of the image photographed at the reference time, the image photographed at the arbitrary time, and the change map (step S114). The data-set generation means 150 inputs each extracted area to the metadata extraction means 140.
Then, the metadata extraction means 140 extracts metadata about each area extracted in step S114 from the metadata of the image photographed at the reference time and the metadata of the image photographed at the arbitrary time (step S115). The metadata extraction means 140 inputs each extracted metadata to the data-set generation means 150.
Then, the data-set generation means 150 generates a data set constituted by data in which each extracted image area, each extracted metadata, and the presence/absence of the change corresponding to the value of the extracted area of the change map are associated with each other (step S116). After generating the data set, the image processing device 100 terminates the data set generation process.
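Steps S114 to S116 can be sketched as below. The neighborhood size, the reliability threshold, and the record layout are assumptions chosen only for illustration.

```python
import numpy as np

def generate_data_set(img_ref, img_t, change_map, reliability_map,
                      metadata_ref, metadata_t, rel_thresh=0.8, half=2):
    """Collect (image-area pair, metadata, change label) records around
    each pixel whose reliability is at least rel_thresh (steps S114-S116,
    simplified)."""
    records = []
    h, w = reliability_map.shape
    for y, x in zip(*np.nonzero(reliability_map >= rel_thresh)):
        if y < half or x < half or y >= h - half or x >= w - half:
            continue  # skip pixels whose neighborhood falls off the image
        area = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        records.append({
            "area_ref": img_ref[area],     # area from the reference-time image
            "area_t": img_t[area],         # area from the arbitrary-time image
            "metadata": (metadata_ref, metadata_t),
            "changed": bool(change_map[y, x]),  # value of the change map
        })
    return records
```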
[Description of Effects]
The image processing device 100 according to the present exemplary embodiment includes the change detection means 130 that detects a change from images photographed at two different times and the metadata of each of the images and generates a change map and a reliability map indicating the degree of reliability for each pixel.
The image processing device 100 further includes the data-set generation means 150 that extracts an area corresponding to the periphery of a reliable pixel in the reliability map from each of the images photographed at the two different times and the change map and combines them with the metadata to generate a data set.
The change detection means 130 includes the feature-value computation means 132 that computes a feature value of a change with no unnecessary changes by using model parameters computed from metadata, such as the solar zenith angle and the date and time, to limit the range of the spectrum that varies with the conditions causing unnecessary changes. The unnecessary changes include changes due to sunshine conditions, changes of atmospheric conditions, and seasonal changes of forests.
The change detection means 130 further includes the change-pixel detection means 133 that detects a change pixel on the basis of the computed feature value of the change and classifies the detected change pixel, and the reliability computation means 134 that computes the reliability of the detection for each pixel.
Thus, the image processing device 100 according to the present exemplary embodiment is capable of generating a data set required for learning a process of detecting a change only of a detection target without detecting unnecessary changes.
[Description of Configuration]
Next, a learning device according to a second exemplary embodiment of the present invention will be described with reference to the drawings.
A learning device 200 according to the present exemplary embodiment causes a device to learn a process of detecting only a change other than unnecessary changes using a data set constituted by a large number of data including a set of image areas photographed at the same point at two different times, the presence/absence of a change in the image areas, and metadata about each image area.
That is, a change detector that has learned the process of detecting only a change other than unnecessary changes does not detect the unnecessary changes. The unnecessary changes include changes due to sunshine conditions, changes of atmospheric conditions, and seasonal changes of forests.
As shown in the drawings, the learning device 200 includes a model-parameter computation means 210 and a machine learning means 220.
In addition, the learning device 200 is communicably connected to a change detector 300 as shown in the drawings.
The model-parameter computation means 210 has a function of computing a model parameter at an arbitrary time on the basis of the metadata of the image photographed at the arbitrary time in the data set and computing a model parameter at a reference time on the basis of the metadata of the image photographed at the reference time in the data set. The function of the model-parameter computation means 210 is similar to the function of the model-parameter computation means 131 in the first exemplary embodiment.
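The patent does not fix a formula for this computation. As one plausible sketch, a model parameter such as the solar zenith angle can be derived from the photographing date, time, and location recorded in the metadata using a standard low-accuracy astronomical approximation (Cooper's declination formula; the equation of time is ignored, and the assumed metadata fields are UTC time, latitude, and longitude).

```python
import math
from datetime import datetime

def solar_zenith_angle(when, lat_deg, lon_deg):
    """Approximate solar zenith angle (degrees) at a UTC time and
    location -- an illustration of deriving a model parameter from
    image metadata, not the patent's prescribed method."""
    day = when.timetuple().tm_yday
    # Solar declination (Cooper's approximation), in radians.
    decl = math.radians(23.45) * math.sin(math.radians(360 * (284 + day) / 365))
    # Hour angle from UTC time and longitude (equation of time ignored).
    solar_hour = when.hour + when.minute / 60 + lon_deg / 15
    hour_angle = math.radians(15 * (solar_hour - 12))
    lat = math.radians(lat_deg)
    cos_z = (math.sin(lat) * math.sin(decl)
             + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))
```

For example, near the March equinox the sun at noon stands almost directly above the equator, so the computed zenith angle there is close to zero.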
The machine learning means 220 has a function of causing a device to learn a process of detecting only a change other than unnecessary changes using a set of image areas photographed at the same point at two different times, the presence/absence of a change in the image areas, and the model parameter about each image area.
Hereinafter, an example in which the learning device 200 causes the change detector 300 to learn a change detection process will be described with reference to the drawings.
The data set generated by the image processing device 100 is input to the model-parameter computation means 210. The model-parameter computation means 210 computes model parameters from the metadata in the data set and inputs the data set including the computed model parameters to the machine learning means 220.
The machine learning means 220 having received the input data set including the model parameters causes the change detector 300 to learn a process of detecting only a change other than unnecessary changes.
In the example shown in the drawings, the model parameter at the time (t-1) and the model parameter at the time t are, for example, solar zenith angles computed from the metadata of the respective images.
In addition, the model parameter at the time (t-1) and the model parameter at the time t may be vectors representing the state of the direct light component sd and the state of the scattered light component ss of the sunlight spectrum shown in the upper part of the drawings.
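NPL 3 gives a full spectral model for these components. The fragment below is only a toy Beer-Lambert split, with the optical depth and the downward-scattering fraction as assumed constants, to illustrate how sd and ss vary with the solar zenith angle.

```python
import math

def sunlight_components(zenith_deg, optical_depth=0.3):
    """Toy split of solar irradiance into a direct component and a
    scattered (diffuse) component; NPL 3 describes the full model."""
    if zenith_deg >= 90:
        return 0.0, 0.0  # sun below the horizon
    airmass = 1.0 / math.cos(math.radians(zenith_deg))
    direct = math.exp(-optical_depth * airmass)   # transmitted fraction
    # Assume roughly half of the attenuated light is scattered downward.
    scattered = 0.5 * (1.0 - direct)
    return direct, scattered
```

As the sun lowers, the light path through the atmosphere lengthens, so the direct component weakens and the scattered share grows, which is the behavior the model parameters are meant to capture.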
The model parameter at the time (t-1) and the model parameter at the time t may be vectors representing the state of the normalized difference vegetation index (NDVI) of the plant shown in the middle part of the drawings.
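The NDVI is conventionally computed per pixel from the red and near-infrared reflectance bands; the sketch below uses that standard definition (the band layout of the satellite sensor is an assumption).

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index per pixel.
    red, nir: reflectances in the red and near-infrared bands."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # Dense vegetation reflects strongly in NIR, weakly in red.
    return (nir - red) / np.maximum(nir + red, 1e-9)
```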
In addition, the network constituting the change detector 300 may be any network usable for machine learning, such as the CNN disclosed in NPL 4, the SAE disclosed in NPL 5, or the DBN disclosed in NPL 6.
The change detector 300 having learned the process of detecting only a change other than unnecessary changes detects a change only of a detection target without detecting unnecessary changes from the images photographed at the same point at two different times.
The machine learning means 220 in the present exemplary embodiment causes a detector to learn, using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, a process of detecting a change other than the periodic change among the changes between the plurality of images.
The model-parameter computation means 210 in the present exemplary embodiment computes a parameter representing the periodic change of the predetermined object on the basis of data indicating photographing conditions of the image areas included in the learning data. The machine learning means 220 causes the detector to learn using the computed parameters and the learning data.
The advantage of using the model parameters is that machine learning is facilitated. For example, when machine learning is performed with a data set containing no model parameters, the data set is required to contain data covering many change patterns. However, it is difficult to prepare a data set constituted by so many types of data.
When machine learning is performed with the model parameters, the learning device 200 causes the change detector 300 to refer to data about similar changes on the basis of the model parameters, so that the change detector 300 can infer the pattern of a change even when the data set does not include that pattern. That is, the user can reduce the types of data included in the data set.
[Description of Operation]
Hereinafter, the operation of the learning device 200 according to the present exemplary embodiment causing the change detector 300 to learn the change detection process will be described with reference to the drawings.
First, the image processing device 100 inputs the generated data set to the learning device 200 (step S201).
Then, the model-parameter computation means 210 computes a model parameter at the arbitrary time on the basis of the metadata of the image photographed at the arbitrary time. The model-parameter computation means 210 further computes a model parameter at the reference time on the basis of the metadata of the image photographed at the reference time (step S202). The model-parameter computation means 210 inputs a data set including the computed model parameters to the machine learning means 220.
Then, the machine learning means 220 causes the change detector 300 to learn a process of detecting a change only of the detection target without detecting unnecessary changes using the input data set (step S203).
Specifically, the machine learning means 220 causes the change detector 300 to learn a process of detecting a change only of the detection target without detecting unnecessary changes, such as a change of the position of a shadow, a change of the state of clouds, a seasonal change of plants, and the like, using the data set. After the learning, the learning device 200 terminates the learning process.
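Step S203 can be sketched with a minimal stand-in learner. The patent permits any trainable network (CNN, SAE, DBN), so the logistic-regression "detector" and the record layout (`area_ref`, `area_t`, `params`, `changed` keys) below are assumptions made only to keep the example self-contained and runnable.

```python
import numpy as np

def train_change_detector(records, lr=0.1, epochs=200):
    """Fit a logistic-regression stand-in for the change detector 300 on
    (image-area pair, model parameters) -> change/no-change labels."""
    X = np.array([np.concatenate([np.ravel(r["area_ref"]),
                                  np.ravel(r["area_t"]),
                                  np.atleast_1d(r["params"])])
                  for r in records], dtype=float)
    y = np.array([r["changed"] for r in records], dtype=float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                            # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_change(record, w, b):
    """Apply the trained stand-in detector to one record."""
    x = np.concatenate([np.ravel(record["area_ref"]),
                        np.ravel(record["area_t"]),
                        np.atleast_1d(record["params"])])
    return 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5
```

A real deployment would replace this linear model with one of the networks cited in NPL 4 to 6 while keeping the same (areas, model parameters, label) training interface.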
[Description of Effects]
The learning device 200 according to the present exemplary embodiment includes the machine learning means 220 that causes a device to learn a process of detecting a change only of a detection target without detecting unnecessary changes using data including a set of image areas photographed at the same point at two different times, the presence/absence of a change in the image areas, and model parameters at the observation time of each image area. The unnecessary changes include changes due to sunshine conditions, changes of atmospheric conditions, and seasonal changes of forests.
Thus, the learning device 200 according to the present exemplary embodiment is capable of causing the change detector 300 to learn a process of detecting a change only of the detection target without detecting unnecessary changes. The change detector 300 having learned is capable of detecting a change only of the detection target among changes between a plurality of images with different photographing times.
Note that the image processing device 100 according to the first exemplary embodiment and the learning device 200 according to the second exemplary embodiment may be used independently or in the same system.
Hereinafter, a specific example of a hardware configuration of the image processing device 100 according to the first exemplary embodiment and a specific example of a hardware configuration of the learning device 200 according to the second exemplary embodiment will be described.
Each of the main storage unit 102 and the main storage unit 202 is used as a work region of data and a temporary save region of data. Each of the main storage unit 102 and the main storage unit 202 is, for example, a random access memory (RAM).
Each of the communication unit 103 and the communication unit 203 has a function of inputting and outputting data to and from peripheral devices via a wired network or a wireless network (information communication network).
Each of the auxiliary storage unit 104 and the auxiliary storage unit 204 is a non-transitory tangible storage medium. The non-transitory tangible storage medium is, for example, a magnetic disk, a magneto-optical disk, a compact disk read only memory (CD-ROM), a digital versatile disk read only memory (DVD-ROM), or a semiconductor memory.
Each of the input unit 105 and the input unit 205 has a function of inputting data and processing instructions. Each of the input unit 105 and the input unit 205 is an input device, such as a keyboard or a mouse.
Each of the output unit 106 and the output unit 206 has a function of outputting data. Each of the output unit 106 and the output unit 206 is, for example, a display device, such as a liquid crystal display device, or a printing device, such as a printer.
In addition, as shown in the drawings, the constituent elements of each device are connected to each other via the system bus 107 or the system bus 207.
The auxiliary storage unit 104 stores, for example, a program for implementing the earth observation means 120, the change detection means 130, the metadata extraction means 140, and the data-set generation means 150 shown in the drawings.
Note that the image processing device 100 may be implemented by hardware. For example, the image processing device 100 may have a circuit including a hardware component, such as a large scale integration (LSI), incorporating a program for implementing the functions shown in the drawings.
The image processing device 100 may be implemented by software by the CPU 101 shown in the drawings executing a program that provides the functions of the constituent elements.
In the case of implementation by software, the CPU 101 loads the program stored in the auxiliary storage unit 104 into the main storage unit 102 and executes it to control the operation of the image processing device 100, whereby the functions are implemented by software.
The auxiliary storage unit 204 stores, for example, a program for implementing the model-parameter computation means 210 and the machine learning means 220 shown in the drawings.
Note that the learning device 200 may be implemented by hardware. For example, the learning device 200 may have a circuit including a hardware component, such as an LSI, incorporating a program for implementing the functions shown in the drawings.
The learning device 200 may be implemented by software by the CPU 201 shown in the drawings executing a program that provides the functions of the constituent elements.
In the case of implementation by software, the CPU 201 loads the program stored in the auxiliary storage unit 204 into the main storage unit 202 and executes it to control the operation of the learning device 200, whereby the functions are implemented by software.
In addition, a part or all of the constituent elements may be implemented by general-purpose circuitry, dedicated circuitry, a processor, or the like, or a combination thereof. These may be constituted by a single chip or by a plurality of chips connected via a bus. A part or all of the constituent elements may be implemented by a combination of the above circuitry or the like and a program.
In the case in which a part or all of the constituent elements are implemented by a plurality of information processing devices, circuitries, or the like, the information processing devices, circuitries, or the like may be arranged in a centralized manner or in a distributed manner. For example, the information processing devices, circuitries, or the like may be connected to one another via a communication network, as in a client-server system or a cloud computing system.
Next, an outline of the present invention will be described. A learning device 10 according to the present invention includes a learning means 11 that causes a detector to learn, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, a process of detecting a change other than the periodic change among the changes between the plurality of images.
When a learning device having such a configuration is used, a change only of a detection target is detected among changes between a plurality of images with different photographing times.
The learning device 10 may further include a computation means (for example, the model-parameter computation means 210) that computes a parameter representing the periodic change of the predetermined object on the basis of data indicating photographing conditions of the image areas included in the learning data, and the learning means 11 may cause the detector to learn using the computed parameter and the learning data.
When a learning device having such a configuration is used, a periodic change of a predetermined object is expressed more concretely.
Alternatively, the parameter may be a solar zenith angle. Alternatively, the parameter may be a direct light component of the sunlight spectrum and a scattered light component of the sunlight spectrum.
When a learning device having such a configuration is used, a change of the length of a shadow is excluded from a detection target among changes between a plurality of images.
Alternatively, the parameter may be a vegetation index.
When a learning device having such a configuration is used, a seasonal change of plants is excluded from a detection target among changes between a plurality of images.
Alternatively, the parameter may be a solar azimuth angle.
When a learning device having such a configuration is used, a change of the direction in which a shadow is formed is excluded from a detection target among changes between a plurality of images.
When an image processing device having such a configuration is used, a change only of a detection target is detected among changes between a plurality of images with different photographing times.
When an image processing device having such a configuration is used, a change only of a detection target is detected among changes between a plurality of images with different photographing times.
The present invention has been described with reference to the exemplary embodiments and examples, but is not limited to the above exemplary embodiments and examples. Various changes that can be understood by those skilled in the art within the scope of the present invention can be made to the configurations and details of the present invention.
In addition, a part or all of the above exemplary embodiments can also be described as the following supplementary notes, but are not limited to the following.
(Supplementary note 1)
A learning device including: a learning means configured to, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, cause a detector to learn a process for detecting a change other than the periodic change among the changes between the plurality of images.
(Supplementary note 2)
The learning device according to supplementary note 1 further including: a computation means configured to compute a parameter representing the periodic change of the predetermined object on the basis of data indicating photographing conditions of the image areas included in the learning data, in which the learning means is configured to cause the detector to learn using the computed parameter and the learning data.
(Supplementary note 3)
The learning device according to supplementary note 2, in which the parameter is a solar zenith angle.
(Supplementary note 4)
The learning device according to supplementary note 2 or 3, in which the parameter is a direct light component of a sunlight spectrum and a scattered light component of the sunlight spectrum.
(Supplementary note 5)
The learning device according to any one of supplementary notes 2 to 4, in which the parameter is a vegetation index.
(Supplementary note 6)
The learning device according to any one of supplementary notes 2 to 5, in which the parameter is a solar azimuth angle.
(Supplementary note 7)
An image processing device including: a first generation means configured to generate change information indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images, and reliability information indicating, for each pixel, reliability of each of the plurality of feature values; an extraction means configured to extract, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value and to extract, from the generated change information, a feature value equal to or greater than the predetermined value; and a second generation means configured to generate learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
(Supplementary note 8)
An image processing device including: a parameter computation means configured to compute, on the basis of data indicating a photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images; a feature-value computation means configured to compute, using the computed parameter and the plurality of images, a feature value indicating the degree of a change in which the periodic change is removed from changes between the plurality of images; and a reliability computation means configured to compute reliability of the computed feature value.
(Supplementary note 9)
A learning method including: causing, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, a detector to learn a process for detecting a change other than the periodic change among the changes between the plurality of images.
(Supplementary note 10)
An image processing method including: generating change information indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images, and reliability information indicating, for each pixel, reliability of each of the plurality of feature values; extracting, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value; extracting, from the generated change information, a feature value equal to or greater than the predetermined value; and generating learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
(Supplementary note 11)
An image processing method including: computing, on the basis of data indicating a photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images; computing, using the computed parameter and the plurality of images, a feature value indicating the degree of a change in which the periodic change is removed from changes between the plurality of images; and computing reliability of the computed feature value.
(Supplementary note 12)
A learning program causing a computer to execute: a learning process causing, by using learning data including at least a set of image areas representing a periodic change of a predetermined object among changes between a plurality of images and a set of image areas representing a change other than the periodic change among the changes between the plurality of images, a detector to learn a process for detecting a change other than the periodic change among the changes between the plurality of images.
(Supplementary note 13)
An image processing program causing a computer to execute: a first generation process of generating change information indicating, for each pixel constituting an image, a plurality of feature values indicating the degree of a change in which a periodic change of a predetermined object is removed from changes between a plurality of images, and reliability information indicating, for each pixel, reliability of each of the plurality of feature values; a first extraction process of extracting, from the plurality of images, an area including a pixel corresponding to a feature value whose reliability indicated by the generated reliability information is equal to or greater than a predetermined value; a second extraction process of extracting, from the generated change information, a feature value equal to or greater than the predetermined value; and a second generation process of generating learning data including each extracted area, the extracted feature value equal to or greater than the predetermined value, and data indicating a photographing condition of each of the plurality of images associated with each other.
(Supplementary note 14)
An image processing program causing a computer to execute: a first computation process of computing, on the basis of data indicating a photographing condition of each of a plurality of images, a parameter representing a periodic change of a predetermined object displayed in the plurality of images; a second computation process of computing, using the computed parameter and the plurality of images, a feature value indicating the degree of a change in which the periodic change is removed from changes between the plurality of images; and a third computation process of computing reliability of the computed feature value.
10, 200 Learning device
11 Learning means
20, 30, 100, 910, 920 Image processing device
21 First generation means
22 Extraction means
23 Second generation means
31 Parameter computation means
32, 132, 913, 921 Feature-value computation means
33, 134 Reliability computation means
99, 130 Change detection means
101, 201 CPU
102, 202 Main storage unit
103, 203 Communication unit
104, 204 Auxiliary storage unit
105, 205 Input unit
106, 206 Output unit
107, 207 System bus
110 Satellite image database
120 Earth observation means
131, 210 Model-parameter computation means
133, 914, 922 Change-pixel detection means
140 Metadata extraction means
150 Data-set generation means
220 Machine learning means
300 Change detector
911 First correction means
912 Second correction means
923 Unnecessary-change-area detection means
924 Unnecessary-change removal means
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2018/037181 | 10/4/2018 | WO | 00 |