This invention relates to an image analysis device and an image analysis method applicable to an image analysis system that performs image analysis based on phases of received electromagnetic waves from a synthetic aperture radar.
Synthetic aperture radar (SAR) technology is a technology for obtaining an image equivalent to one taken by an antenna having a large aperture, by having a flying object such as an artificial satellite or aircraft transmit and receive radio waves while it moves. The synthetic aperture radar is utilized, for example, for analyzing an elevation or a ground surface deformation by signal-processing reflected waves from the ground surface. In particular, when accuracy is required, the analyzer takes time-series SAR images (SAR data) obtained by a synthetic aperture radar as input and performs time-series analysis of the input SAR images.
Interferometric SAR analysis is an effective method for analyzing an elevation or a ground surface deformation. In the interferometric SAR analysis, the phase difference between radio signals of plural (for example, two) SAR images taken at different times is calculated. Based on the phase difference, a change in distance between the flying object and the ground that occurred during the shooting time period is detected.
Patent literature 1 describes an analysis method that uses a coherence matrix. A coherence matrix represents correlation of pixel values at the same position in multiple complex images.
The coherence is calculated by complex correlation of pixel values at the same position in plural SAR images among S (S≥2) SAR images. Suppose that (p, q) is a pair of SAR images and cp,q is a component of the coherence matrix. Each of p and q is less than or equal to S and indicates one of the S SAR images. The phase θp,q (specifically, the phase difference) is calculated for each pair of SAR images. Then, the absolute value of the value obtained by averaging exp(−jθp,q) over a plurality of pixels in a predetermined area including the pixel for which the coherence is calculated is the component cp,q of the coherence matrix.
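For illustration, writing Ω for the predetermined averaging area and n for a pixel in it (notation introduced here only for this restatement, not taken from the cited literature), the component described above can be written as

c_{p,q} = \left| \frac{1}{|\Omega|} \sum_{n \in \Omega} \exp\bigl(-j\,\theta_{p,q}(n)\bigr) \right|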
The magnitude of the variance of the phase θp,q can be determined from the absolute value of cp,q, i.e., |cp,q|.
The coherence matrix includes information, such as variance, that allows the degree of phase noise to be estimated.
The fact that phase θp, q correlates with a displacement velocity and a shooting time difference is used for displacement analysis of the ground surface and other objects. For example, the displacement is estimated based on the average value of the phase difference. It is possible to verify the accuracy of the displacement analysis using the amount of phase noise. Thus, the coherence matrix can be used for the displacement analysis.
For elevation analysis, the fact that the phase θp, q correlates with an elevation of the object being analyzed and a distance between the flying objects (for example, the distance between two shooting positions of the flying objects) is used. For example, the elevation is estimated based on the average value of the phase difference. It is possible to verify the accuracy of the elevation analysis using the amount of phase noise. Thus, the coherence matrix can be used for the elevation analysis.
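For reference, these correlations are commonly written in the interferometric SAR literature in the following approximate forms; the quantities involved (radar wavelength λ, line-of-sight displacement velocity v, shooting time difference Δt, perpendicular baseline B⊥, slant range R, incidence angle θinc, and elevation h) are standard symbols from that literature and are not defined in this disclosure:

\theta^{\mathrm{disp}}_{p,q} \approx \frac{4\pi}{\lambda}\, v\, \Delta t,
\qquad
\theta^{\mathrm{topo}}_{p,q} \approx \frac{4\pi\, B_{\perp}}{\lambda\, R\, \sin\theta_{\mathrm{inc}}}\, h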
Patent literature 1 describes a method of fitting a model such as displacement to a coherence matrix and recovering the phase excluding the effect of noise. A similar method is also disclosed in non-patent literature 1.
According to the method described in patent literature 1 and the method described in non-patent literature 1, noise included in the phase difference can be reduced. However, when displacement analysis and elevation analysis are performed, it is desirable that the phase difference contain as little noise as possible, and thus it is desired to remove even more of the noise.
It is an object of the present invention to provide an image analysis device and an image analysis method that can achieve a greater degree of phase noise reduction.
An image analysis device according to the present invention includes pixel selection means for selecting multiple pixels at a plurality of positions in one image among multiple images in which the same area is recorded; dimension reduction means for compressing a complex vector, as an evaluation value when an evaluation function is optimized, into a low-dimensional space; expanding means for returning a compression result by the dimension reduction means to an original pixel space and calculating a spatial correlation phase estimate; and optimization means for optimizing the evaluation function by bringing the evaluation value closer to the spatial correlation phase estimate and to a pixel value at the position selected by the pixel selection means.
An image analysis method according to the present invention includes selecting multiple pixels at a plurality of positions in one image among multiple images in which the same area is recorded; compressing a complex vector, as an evaluation value when an evaluation function is optimized, into a low-dimensional space; returning the compression result to an original pixel space and calculating a spatial correlation phase estimate; and optimizing the evaluation function by bringing the evaluation value closer to the spatial correlation phase estimate and to a pixel value at the selected position.
An image analysis program according to the present invention causes a computer to execute a process of selecting multiple pixels at a plurality of positions in one image among multiple images in which the same area is recorded, a process of compressing a complex vector, as an evaluation value when an evaluation function is optimized, into a low-dimensional space, a process of returning the compression result to an original pixel space and calculating a spatial correlation phase estimate, and a process of optimizing the evaluation function by bringing the evaluation value closer to the spatial correlation phase estimate and to a pixel value at the selected position.
According to the present invention, a greater degree of phase noise reduction can be achieved.
Hereinafter, example embodiments of the present invention will be explained with reference to the drawings.
In the following example embodiment, the image analysis device uses a first evaluation function (observed signal evaluation function) based on weighted observation signals (observed pixel values) and noise-free pixel values, and a second evaluation function (spatial correlation evaluation function) representing a smooth distribution of phase differences in an image. The image analysis device calculates a phase that is estimated to be noise-free using an evaluation function obtained by merging the observed signal evaluation function and the spatial correlation evaluation function. "Merging" means, for example, taking the sum. Hereinafter, when simply expressed as "evaluation function," it means the evaluation function obtained by merging the observed signal evaluation function and the spatial correlation evaluation function.
The image analysis device of this example embodiment uses, as the weight, the inverse matrix of the absolute value of the coherence matrix with a negative sign. However, this is only one example, and the image analysis device may use other parameters as the weight. When the inverse matrix of the absolute value of the coherence matrix with a negative sign is used as the weight, the image analysis device may include a coherence matrix calculation unit. The coherence matrix calculation unit calculates the coherence matrix as follows, for example.
That is, the coherence matrix calculation unit calculates a coherence matrix C for S SAR images (complex images including amplitude and phase information) stored in the SAR image storage, for example. For example, suppose that a pair of SAR images is (p, q) and the components of the coherence matrix C are cp,q. p and q are values less than or equal to S, each indicating one of the SAR images. The coherence matrix calculation unit calculates the phase θp,q (specifically, the phase difference) for the pair of SAR images. Then, the coherence matrix calculation unit sets, as the component cp,q of the coherence matrix C, the value obtained by averaging exp(−jθp,q) over a plurality of pixels in a predetermined area including the pixel for which the coherence is calculated.
The coherence matrix calculation unit may also average Ap·Aq·exp(−jθp,q), with Ap as the intensity in SAR image p and Aq as the intensity in SAR image q. The coherence matrix calculation unit may divide each element of the matrix obtained as the average of Ap·Aq·exp(−jθp,q) by the value obtained by dividing the sum of the diagonal components of that matrix by S. Alternatively, the coherence matrix calculation unit may multiply that matrix from the left and from the right by a diagonal matrix whose diagonal components are the −½ power of the diagonal components of the matrix obtained as the average of Ap·Aq·exp(−jθp,q).
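As a non-authoritative illustration of such a calculation, the following Python sketch computes an S×S coherence matrix from the complex pixel values of S SAR images in a small averaging window and then normalizes it by the diagonal, as described above; the array layout and the function name are assumptions made only for this example.

import numpy as np

def coherence_matrix(stack):
    # stack: complex array of shape (S, P); S SAR images, P pixels in the
    # averaging window around the pixel of interest (an assumed layout).
    S, P = stack.shape
    # Average Ap*Aq*exp(j*theta_{p,q}) over the window (up to the sign
    # convention of the phase difference theta_{p,q}).
    R = stack @ stack.conj().T / P
    # Multiply from the left and the right by a diagonal matrix whose
    # diagonal components are the -1/2 power of the diagonal of R.
    d = 1.0 / np.sqrt(np.real(np.diag(R)))
    return (d[:, None] * R) * d[None, :]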
This method of calculating the coherence matrix is just one example, and the coherence matrix calculation unit can calculate the coherence matrix using various methods.
Various functions may be used as the observed signal evaluation function. In the following example embodiments, the observed signal evaluation function is, as an example, an evaluation function (for example, a product-sum operation formula) including at least a noise-free pixel value x, i.e., an estimated pixel value x, as an evaluation value, and, as parameters, a weight W (in the following example embodiments, the inverse matrix |C|−1 of the absolute value |C| of the coherence matrix C with a negative sign, i.e., −|C|−1) and an observed pixel value y.
The image analysis device may include a spatial correlation prediction unit. The spatial correlation prediction unit predicts the correlation coefficients of pixels in an image and generates a spatial correlation matrix K, which is a matrix whose elements are those correlation coefficients. However, the spatial correlation matrix K is only one example of an expression of spatial correlation, and other expressions may be used.
For example, the spatial correlation prediction unit generates (calculates) spatial correlation (correlation of pixels in the area to be analyzed) based on prior information (known data) regarding the area to be analyzed (which may be the entire area) in the SAR image.
The prior information is given to the spatial correlation prediction unit in advance. The following are examples of the prior information.
The schematic shape of the object to be analyzed (for example, a structure that is the object of displacement analysis) in the image analysis system to which the image analysis device 10 is applied. As an example, when the object is a long and narrow shaped structure such as a steel tower, the spatial correlation is generated so that the pixels are smoothly correlated in the extending direction of the structure.
Weights in the graph (nodes: pixels, edges (weights): correlations) for which the correlations are known. When using this prior information, a spatial correlation matrix K whose elements are values based on the weights is generated.
Information obtained from SAR images obtained in the past. That is, the correlation of pixels in known images of the area to be analyzed.
The spatial correlation evaluation function, which includes the spatial correlation matrix K generated based on the prior information, corresponds to a function representing a smooth distribution of phase differences in an image.
The prior information is not limited to the above examples; other types of information can be used as long as the spatial correlation of the object to be analyzed can be predicted from them.
Various functions may be used as the spatial correlation evaluation function. In the following example embodiments, the spatial correlation evaluation function is, as an example, an evaluation function (for example, a product-sum operation formula) including at least a pixel value x as an evaluation value, and the spatial correlation matrix K (actually, the inverse matrix K−1 of the spatial correlation matrix K) as a parameter, which corresponds to a kind of weight.
In the following example embodiments, the image analysis device does not perform an optimizing process using the observed signal evaluation function for all pixels (pixel 1 to pixel N) in a predetermined area, but rather performs an optimizing process using the observed signal evaluation function for M (M<N) pixels selected from all pixels.
As mentioned above, the evaluation function includes the observed signal evaluation function and the spatial correlation evaluation function, and the image analysis device performs the optimizing process after performing a reduced-dimensional decomposition (low-rank approximation) of the spatial correlation matrix K in the spatial correlation evaluation function. In detail, the image analysis device, in effect, performs the reduced-dimensional decomposition on the inverse matrix K−1 of the spatial correlation matrix K. The image analysis device can, for example, perform the reduced-dimensional decomposition using a method called the Nyström approximation. In that case, the image analysis device decomposes the inverse matrix K−1 of the spatial correlation matrix K as in equation (1).
[Math. 1]
K−1 ≃ Λ − VGV^T   (1)
In equation (1), G is a d×d matrix (d<N). V is an N×d matrix (V = [v1, v2, . . . , vd]). Λ is an N×N diagonal matrix.
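A minimal sketch of one way such a reduced-dimensional decomposition could be carried out is shown below. It assumes a standard Nyström construction with d landmark pixels chosen from the N pixels, a real symmetric positive-definite K, and a Woodbury-style step to obtain the factors Λ, V, and G of equation (1); these choices, and the variable names, are assumptions made for illustration rather than the construction prescribed by this embodiment.

import numpy as np

def nystrom_inverse_factors(K, d, jitter=1e-6, seed=0):
    # Return (Lam, V, G) with inv(K) approximated by Lam - V @ G @ V.T
    # (equation (1)).  K: N x N spatial correlation matrix, d < N.
    rng = np.random.default_rng(seed)
    N = K.shape[0]
    idx = rng.choice(N, size=d, replace=False)          # landmark pixels
    # Nystrom low-rank factor: K ~ U @ U.T with U = K[:, idx] @ inv(L).T,
    # where K[idx, idx] = L @ L.T (Cholesky factorization).
    L = np.linalg.cholesky(K[np.ix_(idx, idx)] + jitter * np.eye(d))
    U = np.linalg.solve(L, K[:, idx].T).T               # N x d
    # Model K ~ D + U @ U.T with a diagonal remainder D, then apply the
    # Woodbury identity: inv(D + U U^T) = Lam - V G V^T.
    D = np.maximum(np.diag(K) - np.sum(U * U, axis=1), jitter)
    lam = 1.0 / D
    Lam = np.diag(lam)                                  # N x N diagonal matrix
    G = np.linalg.inv(np.eye(d) + (U.T * lam) @ U)      # d x d
    V = lam[:, None] * U                                # N x d
    return Lam, V, G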
A SAR image storage 200 stores SAR images in advance. The weights used in the optimizing process are stored in advance in a weight matrix storage 300. In this example embodiment, a weight matrix as a weight is stored in the weight matrix storage 300. The SAR image storage 200 and the weight matrix storage 300 may be external to or included in the image analysis device 10.
The pixel selection unit 110 selects multiple pixels in one SAR image from multiple SAR images in which the same area is recorded. Then, the pixel selection unit 110 identifies pixels in other SAR images that are in the same position as the pixel in the above one SAR image. The pixel selection unit 110 specifies the position of the identified pixel. The dimension reduction unit 120 maps the pixel values at the position specified by the pixel selection unit 110 to a low-dimensional space. The expanding unit 130 maps the pixel values at the position specified by the pixel selection unit 110 from the low-dimensional space to the original pixel space.
The optimization unit 140 uses the weight matrix for a pair of the above one SAR image and another SAR image. The pixel selection unit 110 specifies a common position in the one SAR image and the other SAR images. Then, the optimization unit 140 derives a pixel value (represented by a complex vector and corresponding to xsamp,s described below) having a phase that is close to both the calculation result output from the expanding unit 130 and the pixel value (observed pixel value) of the observed pixel at the position specified by the pixel selection unit 110.
The optimization unit 140 brings the phase of the complex vector closer to the calculation result output from the expanding unit 130 by optimizing (for example, maximizing) the spatial correlation evaluation function. The optimization unit 140 brings the complex vector closer to the pixel value of the observed pixel at the position specified by the pixel selection unit 110 by optimizing (for example, maximizing) the observed signal evaluation function. When the observed signal evaluation function is optimized, the complex vector approaches the pixel value of the observed pixel in areas where the noise is relatively small.
Although the pixel selection unit 110 specifies the position of the pixel in this example embodiment, the pixel selection unit 110 may read the corresponding pixel (specifically, the pixel value) from the SAR image storage 200 and supply the read pixel value to the dimension reduction unit 120, the expanding unit 130 and the optimization unit 140.
Next, the operation of the image analysis device 10 in this example embodiment will be explained with reference to the flowchart of
The pixel selection unit 110 generates M (M<N) random numbers (step S101). N is the total number of pixels in the SAR image. M is a value greater than or equal to 2.
Hereinafter, nm is the pixel number of the m-th selected pixel (m: 1 to M). It should be noted that nm is one of 1 to N. S is the total number of SAR images. Let s be the image number (s: 1 to S). In addition, nm may be expressed as samp. The pixel selection unit 110 uses each of the generated random numbers as the pixel number of a pixel to be selected. In other words, the pixel selection unit 110 selects pixels randomly.
The expanding unit 130 multiplies the previous fs by Vsamp (step S102). The previous fs means the result of the operation of step S104 performed immediately before. Each Vsamp (samp: 1 to M) is a row vector obtained by taking the n1-th, n2-th, . . . , nM-th rows from the matrix V. That is, for each pixel corresponding to pixel numbers n1, n2, . . . , nM, the expanding unit 130 calculates Vsamp fs using the corresponding row vector in the matrix V. Hereinafter, Vsamp fs is sometimes referred to as the spatial correlation phase estimate.
As described below, fs corresponds to a value (complex data) obtained by multiplying GV^T by xsamp,s in the above equation (1). Thus, Vsamp fs corresponds to a value (complex data) obtained by multiplying VGV^T by xsamp,s in equation (1). Since "s" in "samp,s" is the image number, for example, "samp,1" is an index that specifies the selected pixel in the SAR image with image number 1.
The optimization unit 140 optimizes (for example, maximizes) the evaluation function (step S103). By optimizing the evaluation function, xsamp,s approaches the optimal value. As mentioned above, the evaluation function merges the observed signal evaluation function, which includes at least the weight W and the observed pixel value y as parameters in addition to the pixel value x assumed to be noise-free as the evaluation value, and the spatial correlation evaluation function, whose parameter is the spatial correlation matrix K (actually, the inverse matrix K−1 of the spatial correlation matrix K).
In the process of step S103, the optimization unit 140 performs a process to bring xsamp,s closer to ysamp,s weighted by Wsamp (in this example embodiment, the inverse matrix of the S×S coherence matrix corresponding to the selected pixels) with respect to the observed signal evaluation function and to the spatial correlation phase estimate Vsamp fs with respect to the spatial correlation evaluation function.
The dimension reduction unit 120 compresses xsamp,s optimized by the optimization unit 140 using GVsamp^T (step S104). Namely, the dimension reduction unit 120 multiplies xsamp,s by GVsamp^T. The dimension reduction unit 120 further multiplies the multiplication result by (N/M). The value obtained by multiplying the multiplication result by (N/M) is expressed as fs. Since steps S102 to S104 are executed for M pixels selected from all N pixels, the multiplication result of xsamp,s and GVsamp^T is (M/N) times the original value. The dimension reduction unit 120 multiplies the multiplication result by (N/M) to restore the original scale.
For example, the dimension reduction unit 120 incorporates a memory. The dimension reduction unit 120 is configured to temporarily store fs in the memory (not shown in the drawings).
When the termination condition is satisfied, the process is terminated (step S106). When the termination condition is not satisfied, the process returns to step S101. In the process of next step S101, the pixel selection unit 110 selects a different pixel group from the previously selected pixel group (which may partially overlap).
The termination condition is, for example, that the amount of change in Vsamp fs, the spatial correlation phase estimate output from the expanding unit 130, from the value obtained in the previous step S102 is less than a predetermined value. Namely, the termination condition is that the amount of variation in Vsamp fs is small. In other words, the termination condition is that Vsamp fs is determined to have converged to the optimal value. The termination condition may also be that the amount of change in xsamp,s output from the optimization unit 140 from the value obtained in the previous step S103 is less than a predetermined value. In other words, the termination condition may be that the amount of variation in xsamp,s is small. That is, the termination condition may be that xsamp,s is determined to have converged to the optimal value. Such termination conditions are examples, and other conditions, for example, the condition that the process of steps S101 to S104 has been executed a predetermined number of times, may be used.
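The overall loop of steps S101 to S104 can be summarized by the following sketch. The optimization of step S103 is left as a placeholder callable because the concrete evaluation function differs between the example embodiments; the array shapes, the random selection, and the convergence test on Vsamp fs are assumptions made only for this illustration.

import numpy as np

def analysis_loop(y, V, G, optimize_step, M, max_iter=100, tol=1e-6, seed=0):
    # y: S x N observed complex pixel values (one row per SAR image).
    # V: N x d, G: d x d, the factors of equation (1).
    # optimize_step(y_samp, v_est) is a placeholder for step S103 and must
    # return the optimized x_samp with the same shape as v_est (M x S).
    S, N = y.shape
    f = np.zeros((V.shape[1], S), dtype=complex)     # f_s for every image s
    prev_est = None
    rng = np.random.default_rng(seed)
    for _ in range(max_iter):
        samp = rng.choice(N, size=M, replace=False)  # step S101: random pixels
        V_samp = V[samp, :]                          # M x d
        v_est = V_samp @ f                           # step S102: spatial correlation
                                                     # phase estimate V_samp f_s
        x_samp = optimize_step(y[:, samp].T, v_est)  # step S103: optimize (M x S)
        f = (N / M) * (G @ V_samp.T @ x_samp)        # step S104: compress and rescale
        # Terminate when the estimate changes little between iterations
        # (one of the conditions described above; the norm is an assumption).
        if prev_est is not None and np.linalg.norm(v_est - prev_est) < tol:
            break
        prev_est = v_est
    return f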
As explained above, in this example embodiment, the dimension reduction unit 120 calculates GVsamp^T xsamp,s. The expanding unit 130 expands fs, i.e., GVsamp^T xsamp,s multiplied by (N/M), by V. The optimization unit 140 optimizes the evaluation function, using, for the observed signal evaluation function, a pixel group different from the previously selected pixel group. As the dimension reduction unit 120, the expanding unit 130, and the optimization unit 140 repeat the process, xsamp,s converges to the optimal value. Namely, the image analysis device 10 obtains a phase close to the phase of the observed signal by optimizing the observed signal evaluation function, while the phase difference is smoothly distributed by optimizing the spatial correlation evaluation function. Thus, the degree of phase noise reduction can be increased.
In addition, since the inverse matrix K−1 of the spatial correlation matrix K is reduced-dimensional decomposed, the amount of memory used can be reduced compared to the case where it is not reduced-dimensional decomposed. Specifically, the amount of memory used is (number of dimensions of fs × number of images) + (number of selected pixels × number of images).
Further, the optimization unit 140 performs the optimizing process on the pixels randomly selected from all pixels with respect to the observation signal evaluation function. Thus, the amount of calculation in the optimization unit 140 is reduced.
In this example embodiment, the pixel selection unit 110 selects pixels randomly, but it does not have to select randomly. For example, the pixel selection unit 110 may select M pixels from N pixels in a predetermined area according to a predetermined rule.
However, as the processing by the dimension reduction unit 120 and the expanding unit 130 is repeated, the spatial correlation phase estimate converges to an optimal phase where the phase changes smoothly.
This example embodiment is suitable for applications in which, for example, spatially low-frequency phase components are extracted in a manner robust to noise. A spatially low-frequency phase component is, for example, a phase delay due to moisture in the atmosphere. Since this phase delay is unrelated to displacement, the performance of displacement analysis can be improved by estimating it with this example embodiment and excluding it from the observed phase when displacement analysis is performed, for example.
In the image analysis device 10A, the expanding unit 130 outputs an absolute value or phase or both of the spatial correlation phase estimate Vsamp fs to the storage unit 131. The storage unit 131 stores the absolute value or phase or both of the spatial correlation phase estimate Vsamp fs. The expanding unit 130 may output the absolute value or phase or both of the spatial correlation phase estimate Vsamp fs to the storage unit 131 each time one loop process (process of steps S101 to S104) is executed, and the storage unit 131 may store, for example, the latest absolute value or phase or both of the spatial correlation phase estimate Vsamp fs. The expanding unit 130 may output the absolute value or phase or both of the spatial correlation phase estimate Vsamp fs when the loop process has been executed a predetermined number of times. In that case, the storage unit 131 stores data when the data is output from the expanding unit 130.
The optimization unit 140 outputs an absolute value or phase or both of xsamp,s to the storage unit 141. The storage unit 141 stores the absolute value or phase or both of xsamp,s. The optimization unit 140 may output the absolute value or phase or both of xsamp,s to the storage unit 141 each time one loop process is executed, and the storage unit 141 may, for example, store the latest absolute value or phase or both of the xsamp,s. The optimization unit 140 may output the absolute value or phase or both of xsamp,s when the loop process has been executed a predetermined number of times. In that case, the storage unit 141 stores data when the data is output from the optimization unit 140.
The absolute value or phase stored in the storage unit 131 or the storage unit 141 can be used for various uses. For example, the phase can be utilized as a phase from which noise has been removed. The absolute value can be used as a value (reliability) representing reliability of noise removal.
Although
Next, the operation of the image analysis device 20 in this example embodiment will be explained with reference to the flowchart of
The smoothing unit 150 takes, as fs, the value obtained by smoothing fnew using the following equation (2), for example. In equation (2), fnew is the output of the dimension reduction unit 120. That is, fnew corresponds to fs in the first example embodiment. fold is the output of the dimension reduction unit 120 in the immediately preceding loop process (steps S101 to S104). That is, fold corresponds to fs calculated by the dimension reduction unit 120 in the previous process. fold is stored in the dimension reduction unit 120. ρ (0<ρ<1) is a predefined value that determines the blending ratio of fold and fnew.
fs = (1−ρ) fold + ρ fnew   (2)
In the first example embodiment, in each loop process, the pixel selection unit 110 randomly selects pixels. As a result, the value of fs calculated in step S104 in one loop process may have changed significantly from the value of fs calculated in step S104 in the previous loop process.
In this example embodiment, it is expected that the value of fs passed to the expanding unit 130 will be closer to the value of fs passed last time. Thus, it is more likely to converge to the appropriate spatial correlation phase estimate earlier.
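A minimal sketch of the smoothing unit's update of equation (2) is shown below, under the assumption that fold is simply the value kept from the previous loop iteration; the handling of the first iteration is an assumption for this example.

def smooth_f(f_new, f_old, rho):
    # Equation (2): blend the newly compressed f_new with the previous f_old.
    # rho (0 < rho < 1) is the weight given to f_new, as written in equation (2).
    if f_old is None:            # first iteration: no previous value to blend
        return f_new
    return (1.0 - rho) * f_old + rho * f_new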
As shown in
[Math. 2]
xtemp = Σs′≠s wn…   (3)
In equation (3), the second term on the right side (χmeans,nm,s) corresponds to the output of the expanding unit 130. That is, χ corresponds to Vsamp fs as the spatial correlation phase estimate in the first example embodiment.
In equation (3), the first term on the right side corresponds to a specific example of the observation signal evaluation function in the first example embodiment. In the first term, s and s′ correspond to the image numbers of the images selected from S SAR images. As mentioned above, nm is synonymous with “samp” in the first example embodiment.
Next, the operation of the optimization unit 140 in this example embodiment will be explained with reference to the flowchart of
The optimization unit 140 initializes the variable i to 0 (step S131). Then, the optimization unit 140 increments the variable i by 1 (step S132). The phase averaging unit 142 in the optimization unit 140 performs a phase averaging process using equation (3) to obtain a temporary value xtemp (step S133).
The activation unit 143 applies an activation process to xtemp to obtain xnm,s (step S134). xnm,s is synonymous with xsamp,s in the first example embodiment. In equation (3), the "nm" in xnm,s is written as a subscript; xnm,s here denotes that same variable.
In the process of step S134, the activation unit 143 performs a nonlinear transformation of xtemp using the nonlinear function g(a), as in equation (4) below.
The following function can be used as the nonlinear function g(a).
g(a)=I1(2a)/I0(2a) (5)
g(a)=tanh(a) (7)
I0 and I1 in equation (5) are modified Bessel functions of the first kind of order 0 and 1, respectively. When using equation (5), the value of g(a) approaches 1 as the value of a increases. Even when using equation (6) or equation (7), the value of g(a) approaches 1 as the value of a increases. Namely, in this example embodiment, the maximum value (in absolute value) of xnm,s is limited to 1.
Not limited to the functions illustrated in equations (5) to (7), the activation unit 143 can use other functions whose output value asymptotically approaches a specific positive value from 0 as a increases.
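As an illustration, the activations of equations (5) and (7) can be implemented as in the following sketch. Applying g to the magnitude of xtemp while keeping its phase is an assumed reading of the nonlinear transformation of equation (4), since that equation is not reproduced in this text.

import numpy as np
from scipy.special import i0e, i1e  # exponentially scaled modified Bessel functions

def g_bessel(a):
    # Equation (5): g(a) = I1(2a) / I0(2a).  The exponentially scaled i1e/i0e
    # give the same ratio while avoiding overflow for large a.
    return i1e(2.0 * a) / i0e(2.0 * a)

def g_tanh(a):
    # Equation (7): g(a) = tanh(a).
    return np.tanh(a)

def activate(x_temp, g=g_bessel):
    # Limit the magnitude of x_temp to at most 1 while keeping its phase
    # (an assumed form of the activation step S134).
    return g(np.abs(x_temp)) * np.exp(1j * np.angle(x_temp))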
After the process of step S134 is executed, the optimization unit 140 checks whether i=S (step S135). That is, the optimization unit 140 checks whether steps S133 and S134 have been executed for all SAR images. When i=S, the process is terminated. Otherwise, the process returns to step S132.
The phase averaging unit 142 and the activation unit 143 may execute the process of steps S133, S134 sequentially, or they may execute the process of steps S133, S134 simultaneously (in parallel) for multiple i. The method of parallel execution is suitable for parallel computation by a GPU (Graphics Processing Unit) or CPU (Central Processing Unit), etc., and can speed up the optimizing process.
The storage unit 131 or the storage unit 141 or both, as shown in
The absolute value of the output (xnm,s) of the optimization unit 140 in this example embodiment can be utilized as a value (reliability) representing the reliability of noise removal. This is because the image analysis device 30 is configured so that the absolute value of xtemp increases when both the degree of spatial smoothness and the degree of match between x (xnm,s) as the evaluation value and the observed pixel value y (ynm,s) are high. Namely, in equation (3), the value of the first term becomes larger the closer the phase of the observed pixel value y is to the phase of x. In addition, the value of the second term is the smoothed value of x calculated in the previous loop process (steps S101 to S104). When the first and second terms are added, the value of xtemp becomes larger when the value based on the first term and the value based on the second term have close phases. Thus, the absolute value of xtemp is larger the higher the reliability, and therefore the absolute value of xnm,s is also larger the higher the reliability.
In particular, when the storage unit 141 is installed, the absolute value of xnm,s stored in the storage unit 141 can be effectively used as the reliability.
Further, the reliability in this example embodiment can also be applied to an application in which areas where the analysis results cannot be trusted are displayed visually after displacement analysis or the like has been performed based on the estimated value of the phase.
The pre-filter unit 160 calculates a matrix Γnm according to the following equation (8), for example. In equation (8), "Wfilter" is a predetermined weight, and n∈Ω(nm) denotes a pixel in the neighborhood Ω(nm) of one of the M selected pixels. Since the numerator of equation (8) includes the product of yn,s and the complex conjugate of yn,s′, equation (8) means that the phase differences between neighboring observed pixels are averaged with the weight Wfilter. The matrix shown in equation (8) (hereinafter expressed as the matrix Γ, without a subscript) represents a noise feature of the observed image. The matrix Γ is reflected in the evaluation function used by the phase averaging unit 142 in this example embodiment.
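The following sketch shows one way such a neighborhood-averaged phase-difference matrix could be formed for one selected pixel nm; the choice of neighborhood Ω(nm), the weights Wfilter, and the normalization are assumptions made for illustration, since equation (8) itself is not reproduced in this text.

import numpy as np

def prefilter_gamma(y, neigh_idx, w_filter):
    # y: S x N observed complex pixel values; neigh_idx: indices of the pixels
    # in the neighborhood Omega(n_m); w_filter: weights of the same length.
    # Returns an S x S matrix whose (s, s') element is the weighted average of
    # y[s, n] * conj(y[s', n]) over the neighborhood.
    yn = y[:, neigh_idx]                      # S x |Omega|
    num = (yn * w_filter) @ yn.conj().T       # weighted sum of the products
    return num / np.sum(w_filter)             # simple normalization (assumed)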
The phase averaging unit 142 performs the same process as step S133 in
[Math. 6]
xtemp = Σs′≠s wn…   (9)
In this example embodiment, since the pre-filter unit 160 determines a weight matrix that reflects the average of the phase differences between neighboring observed pixels, an optimal weight can be determined from the information in the observed image. In addition, since the phase averaging unit 142 calculates xtemp using an evaluation function that includes the matrix Γ reflecting the average of the phase differences between neighboring observed pixels, the degree of phase noise reduction can be further increased.
The weight calculation unit 170 calculates a weight Wnm as shown in the following equation (10). That is, the weight calculation unit 170 calculates the weight Wnm based on the matrix Γ.
[Math. 7]
wnm = −|Γn…   (10)
Other operations in the image analysis device 50 are the same as those in the fourth example embodiment.
The storage device 1001 is, for example, a non-transitory computer readable media. The non-transitory computer readable medium is one of various types of tangible storage media. Specific examples of the non-transitory computer readable media include a magnetic storage medium (for example, hard disk), a magneto-optical storage medium (for example, magneto-optical disk), a compact disc-read only memory (CD-ROM), a compact disc-recordable (CD-R), a compact disc-rewritable (CD-R/W), and a semiconductor memory (for example, a mask ROM, a PROM (programmable ROM), an EPROM (erasable PROM), a flash ROM). When a rewritable data storage medium is used as the storage device 1001, the storage device 1001 can also be used as the SAR image storage 200 and the weight matrix storage 300. The storage device 1001 can also be used as the storage units 131, 141.
The program may be stored in various types of transitory computer readable media. The transitory computer readable medium is supplied with the program through, for example, a wired or wireless communication channel, i.e., through electric signals, optical signals, or electromagnetic waves.
A memory 1002 is a storage means implemented by a RAM (Random Access Memory), for example, and temporarily stores data when the CPU 1000 executes processing. It can be assumed that a program held in the storage device 1001 or a temporary computer readable medium is transferred to the memory 1002 and the CPU 1000 executes processing based on the program in the memory 1002.
A part of or all of the above example embodiments may also be described as, but not limited to, the following supplementary notes.
(Supplementary note 1) An image analysis device comprising:
(Supplementary note 2) The image analysis device according to Supplementary note 1, wherein
(Supplementary note 3) The image analysis device according to Supplementary note 1 or 2, wherein
(Supplementary note 4) The image analysis device according to any one of Supplementary notes 1 to 3, further comprising
(Supplementary note 5) The image analysis device according to any one of Supplementary notes 1 to 4, further comprising
(Supplementary note 6) The image analysis device according to any one of Supplementary notes 1 to 5, further comprising
(Supplementary note 7) The image analysis device according to any one of Supplementary notes 1 to 4, further comprising
(Supplementary note 8) The image analysis device according to any one of Supplementary notes 1 to 4 or Supplementary note 7, further comprising
(Supplementary note 9) An image analysis method comprising:
(Supplementary note 10) The image analysis method according to Supplementary note 9,
(Supplementary note 11) A computer readable recording medium storing an image analysis program, wherein
(Supplementary note 12) The recording medium according to Supplementary note 11,
(Supplementary note 13) An image analysis program causing a computer to execute:
(Supplementary note 14) The image analysis program according to Supplementary note 13,
Although the invention of the present application has been described above with reference to example embodiments, the present invention is not limited to the above example embodiments. Various changes can be made to the configuration and details of the present invention that can be understood by those skilled in the art within the scope of the present invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2020/036843 | 9/29/2020 | WO |