Semantic segmentation method and system for high-resolution remote sensing image based on random blocks

Information

  • Patent Grant
  • Patent Number
    11,189,034
  • Date Filed
    Friday, September 4, 2020
  • Date Issued
    Tuesday, November 30, 2021
Abstract
A semantic segmentation method and system for a high-resolution remote sensing image based on random blocks. In the semantic segmentation method, the high-resolution remote sensing image is divided into random blocks, and semantic segmentation is performed for each individual random block separately, thus avoiding overflow of GPU memory during semantic segmentation of the high-resolution remote sensing image. In addition, feature data in random blocks neighboring each random block is incorporated into the process of semantic segmentation, overcoming the technical shortcoming that the existing segmentation method for the remote sensing image weakens the correlation within the image. Moreover, in the semantic segmentation method, semantic segmentation is separately performed on mono-spectral feature data in each band of the high-resolution remote sensing image, thus enhancing the accuracy of semantic segmentation of the high-resolution remote sensing image.
Description
CROSS REFERENCE TO RELATED APPLICATION

This application is a U.S. patent application, which claims priority to Chinese Patent Application No. 202010708331.6, filed on Jul. 22, 2020, the contents of which are herein incorporated by reference.


TECHNICAL FIELD

The present disclosure relates to the field of image processing technologies, and in particular, to a semantic segmentation method and system for a high-resolution remote sensing image based on random blocks.


BACKGROUND

Semantic segmentation is currently one of the hottest topics in the field of computer vision; it aims to classify each pixel of an image into a predefined category. Various semantic segmentation models for different tasks have been proposed in succession and have attained remarkable results. Likewise, in the field of high-resolution remote sensing, semantic segmentation plays an important role in processing remote sensing images. For example, the proportion of water resources in the ecosystem is monitored in real time by detecting water resources such as rivers, lakes, and glaciers; and the degree of urban development is assessed by detecting the distribution of city lights, providing guidance for further development of a city.


Over the past ten years, more and more high-resolution remote sensing satellites have been launched into space and have continuously fed back numerous high-resolution remote sensing images. These remote sensing images are cheap to acquire and rich in content, and can further be dynamically updated. However, different from conventional computer images, these high-resolution remote sensing images are multispectral, and different spectral ranges show different sensitivities to the corresponding ground objects in the image. Therefore, high-resolution remote sensing images can be used for detailed detection of objects.


At present, semantic segmentation models for remote sensing images are constructed basically on conventional image semantic segmentation networks, among which the Fully Convolutional Network (FCN) is mainly used. In these methods, an image of arbitrary size can be input into the FCN and is reduced by half in size after each pass through a convolutional layer and a pooling layer until the image reaches its minimum size, to generate a heat map. Finally, the size of the image is restored by means of upsampling, and a pixel-level probability plot is output to achieve the purpose of predicting each pixel. The well-known U-Net network is improved from such a framework (namely, the FCN). However, it is not difficult to find that, although having made great progress in the task of semantic segmentation for common images, such semantic segmentation networks are unsatisfactory in processing multispectral remote sensing images with a large data amount. Common high-resolution remote sensing images have a huge data amount compared to common images, and a single remote sensing image can usually be gigabyte-sized. If such an image is directly fed into an existing network, overflow of GPU memory is likely to occur. On the other hand, if remote sensing images are directly segmented into blocks, the correlation within the image will be lost. Moreover, because high-resolution remote sensing images are taken from long distances in the sky, existing networks often fail to distinguish specific categories of ground objects with the same colors, for example, grasslands and forests. How to overcome the technical shortcoming that the existing semantic segmentation method for the high-resolution remote sensing image causes the overflow of GPU memory and is unable to identify objects with the same or similar colors, so as to improve the accuracy of semantic segmentation of the high-resolution remote sensing image, has therefore become a technical problem to be solved.


SUMMARY

The objective of the present disclosure is to provide a semantic segmentation method and system for a high-resolution remote sensing image based on random blocks, so as to overcome the technical shortcoming that the existing semantic segmentation method for the high-resolution remote sensing image causes the overflow of GPU memory and is unable to identify objects with the same or similar colors, thus improving the accuracy of semantic segmentation of the high-resolution remote sensing image.


To achieve the above purpose, the present disclosure provides the following technical solutions.


A semantic segmentation method for a high-resolution remote sensing image based on random blocks includes:


partitioning a high-resolution remote sensing image into a plurality of random blocks;


extracting mono-spectral feature data in each band from each random block;


performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and


fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.


Optionally, the partitioning the high-resolution remote sensing image into random blocks, to obtain a plurality of random blocks specifically includes:


randomly selecting a pixel point d0 in a central area of the high-resolution remote sensing image;


cropping a square from the high-resolution remote sensing image to obtain a random block p0, where the square is centered at the pixel point d0 and has a randomly generated side length of len(p0);


further cropping squares which are respectively centered at the four vertices d01, d02, d03, and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03 and p04 which neighbor the random block p0, where the side length of each square is in a range of 512≤len(·)≤1024; and


repeating the step of “further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.


Optionally, the supervised semantic segmentation network includes an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module;


the encoder, the RNN network, and the decoder are successively connected; and


the first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.


Optionally, the performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block specifically includes:


extracting abstract features from mono-spectral feature data in the jth bands of the ith random block pi and a random block pim neighboring the ith random block pi by using the encoder and according to a formula







F_i^j = En(P_i^j)
F_{im}^j = En(P_{im}^j),  m = 1, 2, 3, 4,





to obtain abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, where Fij denotes an abstract feature map regarding the jth band of the random block pi, En(·) denotes the encoder, and Fimj denotes an abstract feature map regarding the jth band of the mth random block pim neighboring the random block pi;


based on the abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, establishing neighborhood association between abstract feature maps regarding the jth bands of the ith random block pi and four random blocks neighboring the ith random block pi via the RNN network and by using the formula







h_{F_{im}^j} = ϕ(uF_{im}^j + b),  m = 1
h_{F_{im}^j} = ϕ(uF_{im}^j + wh_{F_{i(m-1)}^j} + b),  m = 2, 3, 4
h_{F_i^j} = ϕ(uF_i^j + wh_{F_{i4}^j} + b)
y_{F_{im}^j} = σ(Vh_{F_{im}^j} + c)
y_{F_i^j} = σ(Vh_{F_i^j} + c),





to obtain abstract features of the jth band of the ith random block pi after the neighborhood association, where hFimj, hFi(m-1)j, and hFi4j respectively denote outputs of abstract feature maps Fimj regarding the jth bands of the mth random block pim, the (m−1)th random block pi(m-1), and the fourth random block pi4 which neighbor the ith random block pi at the hidden layer in the RNN network; hFij denotes an output of the abstract feature map Fij regarding the jth band of the ith random block pi at the hidden layer in the RNN network; yFimj denotes the abstract features of the jth band of the mth random block pim neighboring the ith random block pi after the neighborhood association; yFij denotes the abstract features of the jth band of the ith random block pi after the neighborhood association; ϕ(·) denotes a first nonlinear function and σ(·) denotes a second nonlinear function; u denotes a first transposed matrix, V denotes a second transposed matrix, and w denotes a third transposed matrix; and b denotes a first bias term and c denotes a second bias term;


decoding, with the decoder and by using a formula pij=De(yFij), the abstract features of the jth band of the ith random block that are obtained after the neighborhood association, to obtain a mono-spectral semantic segmentation probability plot pij regarding the jth band of the ith random block pi; and


supervising feature data output by the encoder, feature data output by the RNN network, and feature data output by the decoder respectively with the first, second, and third supervision modules.


Optionally, before the fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block, the method further includes:


constructing a weight training network for the plurality of mono-spectral semantic segmentation probability plots, where the weight training network includes a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and


based on the multiple pieces of mono-spectral feature data of the random block, performing weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain a trained weight.


A semantic segmentation system for a high-resolution remote sensing image based on random blocks includes:


an image partitioning module, configured to partition a high-resolution remote sensing image into a plurality of random blocks;


a mono-spectral feature data extraction module, configured to extract mono-spectral feature data in each band from each random block;


a semantic segmentation module, configured to: perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and


a fusion module, configured to fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.


Optionally, the image partitioning module specifically includes:


a pixel point selection submodule, configured to randomly select a pixel point d0 in a central area of the high-resolution remote sensing image;


a first image partitioning submodule, configured to crop a square from the high-resolution remote sensing image to obtain a random block p0, where the square is centered at the pixel point d0 and has a randomly generated side length of len(p0);


a second image partitioning submodule, configured to further crop squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len (p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0, where the side length of each square is in a range of 512≤len(·)≤1024; and


a third image partitioning submodule, configured to repeat the step of “further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.


Optionally, the supervised semantic segmentation network includes an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module;


the encoder, the RNN network, and the decoder are successively connected; and


the first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.


Optionally, the semantic segmentation module specifically includes:


an encoding submodule, configured to extract abstract features from mono-spectral feature data in the jth bands of the ith random block pi and a random block pim neighboring the ith random block pi by using the encoder and according to a formula







F_i^j = En(P_i^j)
F_{im}^j = En(P_{im}^j),  m = 1, 2, 3, 4,





to obtain abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, where Fij denotes an abstract feature map regarding the jth band of the random block pi, En(·) denotes the encoder, and Fimj denotes an abstract feature map regarding the jth band of the mth random block pim neighboring the random block pi;


a neighborhood feature association submodule, configured to: based on the abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, establish neighborhood association between abstract feature maps regarding the jth bands of the ith random block pi and four random blocks neighboring the ith random block pi via the RNN network and by using the formula







h_{F_{im}^j} = ϕ(uF_{im}^j + b),  m = 1
h_{F_{im}^j} = ϕ(uF_{im}^j + wh_{F_{i(m-1)}^j} + b),  m = 2, 3, 4
h_{F_i^j} = ϕ(uF_i^j + wh_{F_{i4}^j} + b)
y_{F_{im}^j} = σ(Vh_{F_{im}^j} + c)
y_{F_i^j} = σ(Vh_{F_i^j} + c),





to obtain abstract features of the jth band of the ith random block pi after the neighborhood association, where hFimj, hFi(m-1)j, and hFi4j respectively denote outputs of abstract feature maps Fimj regarding the jth bands of the mth random block pim, the (m−1)th random block pi(m-1), and the fourth random block pi4 which neighbor the ith random block pi at the hidden layer in the RNN network; hFij denotes an output of the abstract feature map Fij regarding the jth band of the ith random block pi at the hidden layer in the RNN network; yFimj denotes abstract features of the jth band of the mth random block pim, neighboring the ith random block pi after the neighborhood association; yFij denotes the abstract features of the jth band of the ith random block pi after the neighborhood association; ϕ(·) denotes a first nonlinear function and σ(·) denotes a second nonlinear function; u denotes a first transposed matrix, V denotes a second transposed matrix, and w denotes a third transposed matrix; and b denotes a first bias term and c denotes a second bias term;


a decoding submodule, configured to decode, with the decoder and by using a formula pij=De(yFij), the abstract features of the jth band of the ith random block that are obtained after the neighborhood association, to obtain a mono-spectral semantic segmentation probability plot pij regarding the jth band of the ith random block pi; and


a supervision submodule, configured to supervise feature data output by the encoder, feature data output by the RNN network, and feature data output by the decoder respectively with the first, second, and third supervision modules.


Optionally, the semantic segmentation system for a high-resolution remote sensing image based on random blocks further includes:


a weight training network construction module, configured to construct a weight training network for the plurality of mono-spectral semantic segmentation probability plots, where the weight training network includes a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and


a weight training module, configured to: based on the multiple pieces of mono-spectral feature data of the random block, perform weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain the trained weight.


According to specific embodiments provided by the present disclosure, the present disclosure discloses the following technical effects.


The present disclosure discloses a semantic segmentation method for a high-resolution remote sensing image based on random blocks, where the semantic segmentation method includes: partitioning a high-resolution remote sensing image into a plurality of random blocks; extracting mono-spectral feature data in each band from each random block; performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block. The present disclosure divides the high-resolution remote sensing image into random blocks, and performs semantic segmentation for each individual random block separately, thus avoiding overflow of GPU memory during semantic segmentation of the high-resolution remote sensing image. In addition, the present disclosure incorporates feature data in random blocks neighboring each random block into the process of semantic segmentation, overcoming the technical shortcoming that the existing segmentation method for the remote sensing image weakens the correlation within the image. Moreover, the present disclosure performs semantic segmentation separately on mono-spectral feature data in each band of the high-resolution remote sensing image, so that objects with the same or similar colors can be accurately identified according to the characteristic that different ground objects have different sensitivities to light with different spectrums, thus enhancing the accuracy of semantic segmentation of the high-resolution remote sensing image.





BRIEF DESCRIPTION OF DRAWINGS

To describe the technical solutions in the embodiments of the present disclosure or in the related art more clearly, the following briefly describes the accompanying drawings required for the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present disclosure, and those of ordinary skill in the art may still derive other accompanying drawings from these accompanying drawings without creative efforts.



FIG. 1 is a flowchart of a semantic segmentation method for a high-resolution remote sensing image based on random blocks provided by the present disclosure;



FIG. 2 is a principle diagram of image partitioning into random blocks provided by the present disclosure;



FIG. 3 is a schematic structural diagram of a supervised semantic segmentation network provided by the present disclosure; and



FIG. 4 is a schematic structural diagram of a weight training network provided by the present disclosure.





DETAILED DESCRIPTION

Certain embodiments herein provide a semantic segmentation method and system for a high-resolution remote sensing image based on random blocks, so as to overcome the technical shortcoming that the existing semantic segmentation method for the high-resolution remote sensing image causes the overflow of GPU memory and is unable to identify objects with the same or similar colors, thus improving the accuracy of semantic segmentation of the high-resolution remote sensing image.


To make the foregoing objective, features, and advantages of the present disclosure clearer and more comprehensible, the present disclosure is further described in detail below with reference to the accompanying drawings and specific embodiments.


To achieve the above objective, the present disclosure provides the following solutions: Generally, a high-resolution remote sensing image covers a very wide geographic area and has a huge data amount which may be gigabyte-sized. In addition, the high-resolution remote sensing image usually contains four or more spectral bands, among which a blue band from 0.45-0.52 μm, a green band from 0.52-0.60 μm, a red band from 0.62-0.69 μm, and a near-infrared band from 0.76-0.96 μm are the most common spectral bands. However, existing semantic segmentation networks seldom consider the effects of the different bands on semantic segmentation. In addition, limited by the receptive field, most convolutional neural networks (CNNs) for semantic segmentation can only acquire limited context information, easily resulting in divergence in classification of visually similar pixels. Therefore, certain embodiments herein focus on the effects of different spectral bands on semantic segmentation and employ a recurrent neural network (RNN) to enhance dependency between pixels.


As shown in FIG. 1, a semantic segmentation method for a high-resolution remote sensing image based on random blocks includes the following steps:


Step 101: Partition a high-resolution remote sensing image into a plurality of random blocks.


The step of partitioning a high-resolution remote sensing image into a plurality of random blocks specifically includes: randomly selecting a pixel point d0 in a central area of the high-resolution remote sensing image; cropping a square from the high-resolution remote sensing image to obtain a random block p0, where the square is centered at the pixel point d0 and has a randomly generated side length of len(p0); further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0, where the side length of each square is in a range of 512≤len(·)≤1024; and repeating the step of “further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.


Specifically, as shown in FIG. 2, it is assumed that the height and the width of an input high-resolution remote sensing image are respectively denoted as H and W. First, a pixel point d0 is randomly selected from the high-resolution remote sensing image, and the position of d0 may be denoted as a vector (x0, y0). A square is randomly cropped from the image at the point d0, to generate a random block p0, whose side length is denoted as len(p0). The four vertices of the block p0 from the upper left corner to the lower right corner in a clockwise direction are d01, d02, d03, and d04 respectively:







d_{01}: (x_0 - (1/2)len(p_0), y_0 + (1/2)len(p_0))
d_{02}: (x_0 + (1/2)len(p_0), y_0 + (1/2)len(p_0))
d_{03}: (x_0 + (1/2)len(p_0), y_0 - (1/2)len(p_0))
d_{04}: (x_0 - (1/2)len(p_0), y_0 - (1/2)len(p_0))





To realize proliferation of random blocks from p0, four square images (generated according to the same rule as the random block p0), respectively centered at the four vertices d01, d02, d03, and d04 of the block p0, are randomly cropped to generate new random blocks pi, i=1, 2, 3, 4. Likewise, the four vertices of each newly generated random block are named di1, di2, di3, di4, i=1, 2, 3, 4. The foregoing process is repeated N times, until the captured random blocks reach the edges of the image (if a random block reaches an edge of the image, proliferation from that block is stopped), thus guaranteeing that the random blocks are spread over the whole high-resolution remote sensing image.


After N proliferations (N is an integer), the total number of the random blocks reaches num_p, which is calculated as follows:







num_p = 1,  N = 0
num_p = 1 + Σ_{i=1}^{N} 4^N,  N > 0





In order that a combination of all the random blocks can cover all pixels of the remote sensing image, the side length of each random block is limited as follows:


The side length of each square is limited in a range of 512≤len(·)≤1024.
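The proliferation rule and the side-length constraint above can be summarized procedurally. The following Python sketch is illustrative only: the function name, the breadth-first traversal, the bounds used for the "central area", and the coordinate conventions are assumptions rather than details taken from the disclosure; only the vertex rule, the random side length in [512, 1024], and the edge-based stopping condition follow the description.

import random

def generate_random_blocks(width, height, n_rounds, min_len=512, max_len=1024):
    # pixel point d0 is taken from a central area of the image (the bounds below
    # are an assumption; the disclosure does not define the central area precisely)
    cx0 = random.randint(width // 4, 3 * width // 4)
    cy0 = random.randint(height // 4, 3 * height // 4)
    blocks = []                      # each block is recorded as (center_x, center_y, side)
    frontier = [(cx0, cy0)]
    for _ in range(n_rounds + 1):    # the initial block p0 plus n_rounds proliferations
        next_frontier = []
        for cx, cy in frontier:
            side = random.randint(min_len, max_len)   # 512 <= len(.) <= 1024
            blocks.append((cx, cy, side))
            half = side // 2
            # once a block reaches an edge of the image, proliferation from it stops
            if cx - half <= 0 or cx + half >= width or cy - half <= 0 or cy + half >= height:
                continue
            # the four vertices of the block become the centers of four new blocks
            next_frontier.extend([(cx - half, cy + half), (cx + half, cy + half),
                                  (cx + half, cy - half), (cx - half, cy - half)])
        frontier = next_frontier
        if not frontier:
            break
    return blocks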


Step 102: Extract mono-spectral feature data in each band from each random block. Each random block and its neighboring random blocks are composed of multiple bands, and specific ground objects have different sensitivities to different bands; therefore, it is required to perform extraction for these bands separately, to acquire multiple pieces of mono-spectral feature data from the random block and from its neighboring random blocks. Generally, a remote sensing image is composed of four spectral bands, which are a blue band from 0.45 μm to 0.52 μm, a green band from 0.52 μm to 0.60 μm, a red band from 0.62 μm to 0.69 μm, and a near-infrared band from 0.76 μm to 0.96 μm. The remote sensing image is usually represented by a computer as four-channel data, and these bands may be read directly by using the Geospatial Data Abstraction Library (GDAL) in Python, as illustrated by the sketch below.
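As a concrete illustration, the band-wise reading mentioned above can be done with GDAL's Python bindings. This is a minimal sketch assuming a multi-band GeoTIFF; the file name, offsets, and patch size in the usage comment are placeholders, not values from the disclosure.

from osgeo import gdal

def read_band_patch(path, x_off, y_off, size):
    # read each spectral band of a square patch separately, returning one
    # 2-D array of mono-spectral feature data per band
    dataset = gdal.Open(path)
    bands = []
    for b in range(1, dataset.RasterCount + 1):   # GDAL band indices start at 1
        band = dataset.GetRasterBand(b)
        bands.append(band.ReadAsArray(x_off, y_off, size, size))
    return bands

# hypothetical usage: read a 512 x 512 random block from a four-band image
# blue, green, red, nir = read_band_patch("scene.tif", 1024, 2048, 512)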


Step 103: Perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block.


As shown in FIG. 3, Conv denotes a convolutional layer, Pooling denotes downsampling by a pooling layer, Upsampling denotes an upsampling layer, Bi denotes a bilinear interpolation operation, hFim denotes an output of a feature map Fim at a hidden layer in the RNN network, hFi denotes an output of a feature map Fi at the hidden layer in the RNN network, yFim denotes an output of hFim, yFi denotes an output of hFi, u, V, and w denote transposed matrices, Fi denotes an advanced abstract feature generated after a random block pi is processed by an encoder En(·), and Fim denotes an advanced abstract feature generated after one neighboring random block pim of the random block pi is processed by the encoder En(·), where m is a subscript. As shown in FIG. 3, the supervised semantic segmentation network includes an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module. The encoder, the RNN network, and the decoder are successively connected. The first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.
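The disclosure does not give layer-by-layer code for the network in FIG. 3; the PyTorch sketch below is only one possible reading of its structure. The channel widths, the two-stage encoder and decoder, the use of torch.nn.RNNCell applied per pixel location to stand in for the u, w, V recurrence, the exact placement of the supervision heads, and the assumption that neighboring blocks are resized to the same spatial size as the center block are all assumptions, not features fixed by the disclosure.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisionHead(nn.Module):
    # pixel-wise classification followed by bilinear upsampling to the input size,
    # used here as a stand-in for the supervision modules
    def __init__(self, in_ch, n_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, n_classes, kernel_size=1)

    def forward(self, feat, out_size):
        return F.interpolate(self.conv1(feat), size=out_size,
                             mode="bilinear", align_corners=False)

class SupervisedSegNet(nn.Module):
    # encoder -> RNN over the four neighboring blocks and the block itself -> decoder,
    # with supervision applied to the encoder output, the RNN output, and a decoder layer
    def __init__(self, n_classes=2, ch=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.rnn = nn.RNNCell(2 * ch, 2 * ch)
        self.dec1 = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                                  nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(ch, n_classes, kernel_size=1)
        self.sup1 = SupervisionHead(2 * ch, n_classes)   # on the encoder output
        self.sup2 = SupervisionHead(2 * ch, n_classes)   # on the RNN output
        self.sup3 = SupervisionHead(ch, n_classes)       # on the first decoder layer

    def forward(self, center, neighbors):
        # center: (B, 1, H, W) mono-spectral block; neighbors: list of four (B, 1, H, W)
        # blocks assumed to be resized to the same H x W as the center block
        feats = [self.enc2(self.enc1(x)) for x in neighbors + [center]]
        b, c, h, w = feats[-1].shape
        hidden = feats[-1].new_zeros(b * h * w, c)
        for f in feats:                                  # four neighbors, then the block itself
            hidden = self.rnn(f.permute(0, 2, 3, 1).reshape(-1, c), hidden)
        assoc = hidden.reshape(b, h, w, c).permute(0, 3, 1, 2)
        d1 = self.dec1(assoc)
        d2 = self.dec2(d1)
        out_size = center.shape[-2:]
        logits = F.interpolate(self.out(d2), size=out_size, mode="bilinear", align_corners=False)
        sups = (self.sup1(feats[-1], out_size), self.sup2(assoc, out_size), self.sup3(d1, out_size))
        return logits, sups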


The step 103 of performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block specifically includes the following process:


Abstract features are extracted from mono-spectral feature data in the jth bands of the ith random block pi and a random block pim neighboring the ith random block pi by using the encoder and according to a formula







F_i^j = En(P_i^j)
F_{im}^j = En(P_{im}^j),  m = 1, 2, 3, 4,





to obtain abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, where Fij denotes an abstract feature map regarding the jth band of the random block pi, En(·) denotes the encoder, and Fimj denotes an abstract feature map regarding the jth band of the mth random block pim neighboring the random block pi. Specifically, by using the random block pi as an image unit, the neighborhood of the random block pi covers four random blocks randomly captured at the four vertices di1, di2, di3, di4 of pi, which are denoted as pi1, pi2, pi3, pi4 herein for convenience. The four random blocks are nearest to the random block pi, and there are overlapping image regions. Therefore, they are highly correlated in content. Based on a dependence relationship between images, semantic segmentation subnetworks can output semantic segmentation probability plots with the same size as an input image, for ease of fusion.


To achieve a semantic segmentation function, certain embodiments herein use a typical framework U-Net for semantic segmentation. First, advanced abstract features are extracted from the image by using the encoder.


Afterwards, Fimj, (m=1, 2, 3, 4) and Fij are sequentially input into the RNN network, to establish a dependence relationship between the four neighboring random blocks and the random block pi. Based on the abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, neighborhood association is established between abstract feature maps regarding the jth bands of the ith random block pi and four random blocks neighboring the ith random block pi via the RNN network and by using the formula







h_{F_{im}^j} = ϕ(uF_{im}^j + b),  m = 1
h_{F_{im}^j} = ϕ(uF_{im}^j + wh_{F_{i(m-1)}^j} + b),  m = 2, 3, 4
h_{F_i^j} = ϕ(uF_i^j + wh_{F_{i4}^j} + b)
y_{F_{im}^j} = σ(Vh_{F_{im}^j} + c)
y_{F_i^j} = σ(Vh_{F_i^j} + c),





to obtain abstract features of the jth band of the ith random block pi after the neighborhood association, where hFimj, hFi(m-1)j, and hFi4j respectively denote outputs of abstract feature maps Fimj regarding the jth bands of the mth random block pim, the (m−1)th random block pi(m-1), and the fourth random block pi4 which neighbor the ith random block pi at the hidden layer in the RNN network; hFij denotes an output of the abstract feature map Fij regarding the jth band of the ith random block pi at the hidden layer in the RNN network; yFimj denotes the abstract features of the jth band of the mth random block pim neighboring the ith random block pi after the neighborhood association; yFij denotes the abstract features of the jth band of the ith random block pi after the neighborhood association; ϕ(·) denotes a first nonlinear function and σ(·) denotes a second nonlinear function; u denotes a first transposed matrix, V denotes a second transposed matrix, and w denotes a third transposed matrix; and b denotes a first bias term and c denotes a second bias term. The abstract features of the jth band of the ith random block that are obtained after the neighborhood association are decoded with the decoder and by using a formula pij=De(yFij), to obtain a mono-spectral semantic segmentation probability plot pij regarding the jth band of the ith random block pi.
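Read literally, the recurrence above is a plain recurrent pass over the sequence Fi1j, Fi2j, Fi3j, Fi4j, Fij. The NumPy transcription below is for illustration only: the flattening of feature maps into vectors and the choice of tanh and sigmoid for ϕ(·) and σ(·) are assumptions; the weights u, w, V and biases b, c would be learned during training.

import numpy as np

def neighborhood_association(F_neighbors, F_center, u, w, V, b, c):
    # F_neighbors: the four abstract features Fi1..Fi4, each flattened to shape (d,);
    # F_center: the abstract feature Fi of the block itself, shape (d,)
    phi = np.tanh                                   # first nonlinear function (assumed)
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))      # second nonlinear function (assumed)
    hs, ys = [], []
    h_prev = None
    for m, F in enumerate(F_neighbors, start=1):
        if m == 1:
            h = phi(u @ F + b)                      # h_{Fi1} = phi(u F + b)
        else:
            h = phi(u @ F + w @ h_prev + b)         # h_{Fim} = phi(u F + w h_{Fi(m-1)} + b)
        hs.append(h)
        ys.append(sigma(V @ h + c))                 # y_{Fim} = sigma(V h + c)
        h_prev = h
    h_center = phi(u @ F_center + w @ hs[3] + b)    # uses h_{Fi4}, the fourth neighbor
    y_center = sigma(V @ h_center + c)              # y_{Fi} = sigma(V h + c)
    return ys, y_center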


Feature data output by the encoder, feature data output by the RNN network, and feature data output by the decoder are supervised respectively with the first, second, and third supervision modules. Specifically, in order to improve the performance of semantic segmentation, classification is made pixel by pixel and upsampling is subsequently performed by means of a bilinear interpolation to restore the image to the original size; this is performed respectively in a convolutional layer in the last layer of the encoder, in the first layer of the decoder, and in the second layer of the decoder. Finally, a cross-entropy loss function is used to evaluate the performance of the encoder, the RNN network, and the decoder, thus supervising the network from these three aspects. The calculation equations are as follows:

y_pre = Bi(conv1(F))
L = -Σ[y_true log y_pre + (1 - y_true) log(1 - y_pre)]


where y_pre denotes a predicted probability, which is a semantic segmentation probability plot, obtained from the output features F of a supervised layer after processing by the convolutional layer and a bilinear interpolation layer; conv1(·) denotes a convolution for classification; Bi(·) denotes a bilinear interpolation operation; and L denotes the loss difference, calculated by using the cross-entropy loss function, between the predicted probability y_pre and a true label y_true.
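A compact PyTorch rendering of this supervision step is given below. The binary (two-class) setting and the sigmoid that turns the classification output into a probability are assumptions made so that the cross-entropy expression above applies directly; they are not mandated by the disclosure.

import torch
import torch.nn.functional as F

def supervision_loss(feature, conv1, label):
    # feature: (B, C, h, w) output of a supervised layer; conv1: a 1x1 convolution with one
    # output channel (the "convolution for classification"); label: (B, 1, H, W) in {0, 1}
    y_pre = torch.sigmoid(F.interpolate(conv1(feature), size=label.shape[-2:],
                                        mode="bilinear", align_corners=False))
    # cross-entropy between the predicted probability plot y_pre and the true label y_true
    return F.binary_cross_entropy(y_pre, label.float())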


A weight training network is constructed for the plurality of mono-spectral semantic segmentation probability plots. As shown in FIG. 4, the weight training network includes a plurality of parallel supervised semantic segmentation networks and a convolution fusion module. In the mode of multiple parallel semantic segmentation subnetworks, the weight training network trains each semantic segmentation subnetwork separately by using the mono-spectral feature data of each random block and its neighboring random blocks, to obtain semantic segmentation probability plots of the mono-spectral feature data. Finally, the semantic segmentation probability plots of multiple pieces of mono-spectral data of the random blocks are fused by using a convolutional layer, to obtain a fused probability plot.


Different spectral bands show different sensitivities to different ground objects, and therefore weight training can be performed according to the target to be identified. Specifically, based on the multiple pieces of mono-spectral feature data of the random block, weight training is performed on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain a trained weight. By continuously inputting mono-spectral feature data of new random blocks and their neighboring random blocks, outputs from an input layer to the hidden layer and from the hidden layer to an output layer are calculated by means of forward propagation, and the network is optimized by means of back propagation, such that weight parameters in the weight training network are continuously updated till convergence.
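As an illustration of this training loop, the sketch below updates the weight training network with forward and back propagation in PyTorch; the optimizer choice, learning rate, loss criterion, and the fusion_net and loader placeholders are assumptions, not details from the disclosure.

import torch
import torch.nn as nn

def train_fusion(fusion_net, loader, epochs=10):
    # fusion_net: stands in for the weight training network (parallel subnetworks plus a
    # convolution fusion module); loader yields (bands, neighbors, label) for random blocks
    optimizer = torch.optim.Adam(fusion_net.parameters(), lr=1e-4)
    criterion = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for bands, neighbors, label in loader:
            optimizer.zero_grad()
            out = fusion_net(bands, neighbors)        # forward propagation
            loss = criterion(out, label.float())
            loss.backward()                           # back propagation
            optimizer.step()                          # update the weight parameters
    return fusion_net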


Step 104: Fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.


After the mono-spectral feature data in each band of each random block is processed by the semantic segmentation subnetworks, a semantic segmentation probability plot pij is generated. These semantic segmentation probability plots are fused to obtain a fused semantic segmentation probability plot, which may be specifically expressed as follows:

out = conv2(p_i^1, ..., p_i^j),  j = 1, 2, ..., max(j)


where out denotes the fused semantic segmentation probability plot, conv2 denotes an operation of using a convolutional layer for spectral fusion, and max(j) denotes the number of bands contained in the high-resolution remote sensing image.
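Concretely, the fusion amounts to stacking the per-band probability plots along the channel dimension and applying a convolutional layer whose trained weights act as the learned per-band weighting. A minimal PyTorch sketch follows; the 1x1 kernel, the four-band setting, and the tensor shapes are assumptions.

import torch
import torch.nn as nn

num_bands = 4                                     # max(j) for a typical four-band image
conv2 = nn.Conv2d(in_channels=num_bands, out_channels=1, kernel_size=1)

# p_i: the mono-spectral probability plots p_i^1 .. p_i^max(j), each of shape (B, 1, H, W)
p_i = [torch.rand(1, 1, 256, 256) for _ in range(num_bands)]
out = conv2(torch.cat(p_i, dim=1))                # fused semantic segmentation probability plot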


Road information, bridge information, and the like to be detected are acquired according to the fused semantic segmentation probability plot from the high-resolution remote sensing image.


Certain embodiments herein further provide a semantic segmentation system for a high-resolution remote sensing image based on random blocks, where the semantic segmentation system includes the following modules:


An image partitioning module is configured to partition a high-resolution remote sensing image into a plurality of random blocks.


The image partitioning module specifically includes: a pixel point selection submodule, configured to randomly select a pixel point d0 in a central area of the high-resolution remote sensing image; a first image partitioning submodule, configured to crop a square from the high-resolution remote sensing image to obtain a random block p0, where the square is centered at the pixel point d0 and has a randomly generated side length of len(p0); a second image partitioning submodule, configured to further crop squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0, where the side length of each square is in a range of 512≤len(·)≤1024; and a third image partitioning submodule, configured to repeat the step of “further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.


A mono-spectral feature data extraction module is configured to extract mono-spectral feature data in each band from each random block.


A semantic segmentation module is configured to: perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block.


The supervised semantic segmentation network includes an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module. The encoder, the RNN network, and the decoder are successively connected. The first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.


The semantic segmentation module specifically includes: an encoding submodule, configured to extract abstract features from mono-spectral feature data in the jth bands of the ith random block pi and a random block pim neighboring the ith random block pi by using the encoder and according to a formula







F
i
j

=

En


(

P
i
j

)










F
im
j

=

En


(

P
im
j

)



,

m
=
1

,
2
,
3
,
4
,





to obtain abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, where Fij denotes an abstract feature map regarding the jth band of the random block pi, En(·) denotes the encoder, and Fimj denotes an abstract feature map regarding the jth band of the mth random block pim neighboring the random block pi; a neighborhood feature association submodule, configured to: based on the abstract feature maps regarding the jth bands of the ith random block pi and the random block pim neighboring the ith random block pi, establish neighborhood association between abstract feature maps regarding the jth bands of the ith random block pi and four random blocks neighboring the ith random block pi via the RNN network and by using the formula







h_{F_{im}^j} = ϕ(uF_{im}^j + b),  m = 1
h_{F_{im}^j} = ϕ(uF_{im}^j + wh_{F_{i(m-1)}^j} + b),  m = 2, 3, 4
h_{F_i^j} = ϕ(uF_i^j + wh_{F_{i4}^j} + b)
y_{F_{im}^j} = σ(Vh_{F_{im}^j} + c)
y_{F_i^j} = σ(Vh_{F_i^j} + c),





to obtain abstract features of the jth band of the ith random block pi after the neighborhood association, where hFimj, hFi(m-1)j, and hFi4j respectively denote outputs of abstract feature maps Fimj regarding the jth bands of the mth random block pim, the (m−1)th random block pi(m−1), and the fourth random block pi4 which neighbor the ith random block pi at the hidden layer in the RNN network; hFij denotes an output of the abstract feature map Fij regarding the jth band of the ith random block pi at the hidden layer in the RNN network; yFimj denotes the abstract features of the jth band of the mth random block pim neighboring the ith random block pi after the neighborhood association; yFij denotes the abstract features of the jth band of the ith random block pi after the neighborhood association; ϕ(·) denotes a first nonlinear function and σ(·) denotes a second nonlinear function; u denotes a first transposed matrix, V denotes a second transposed matrix, and w denotes a third transposed matrix; and b denotes a first bias term and c denotes a second bias term; a decoding submodule, configured to decode, with the decoder and by using a formula pij=De(yFij), the abstract features of the jth band of the ith random block that are obtained after the neighborhood association, to obtain a mono-spectral semantic segmentation probability plot pij regarding the jth band of the ith random block pi; and a supervision submodule, configured to supervise feature data output by the encoder, feature data output by the RNN network, and feature data output by the decoder respectively with the first, second, and third supervision modules.


A fusion module is configured to fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.


The semantic segmentation system further includes: a weight training network construction module, configured to construct a weight training network for the plurality of mono-spectral semantic segmentation probability plots, where the weight training network includes a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and a weight training module, configured to: based on the multiple pieces of mono-spectral feature data of the random block, perform weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain the trained weight.


The technical solutions of certain embodiments herein achieve the following advantages: Because a high-resolution remote sensing image is multispectral and has a large data amount, certain embodiments herein partition the remote sensing image into a plurality of small sections by means of random blocks, thereby further achieving data enhancement. Moreover, different spectral bands of the remote sensing image show different sensitivities to different ground objects, and therefore the convolutional layer used in certain embodiments herein is equivalent to subjecting the predicted images of the different spectral bands to weighted summation. Certain embodiments herein divide the high-resolution remote sensing image into random blocks and perform semantic segmentation for each individual random block separately, thus avoiding overflow of GPU memory during semantic segmentation of the high-resolution remote sensing image. In addition, certain embodiments herein incorporate feature data in random blocks neighboring each random block into the process of semantic segmentation, overcoming the technical shortcoming that the existing segmentation method for the remote sensing image weakens the correlation within the image. Moreover, certain embodiments herein perform semantic segmentation separately on mono-spectral feature data in each band of the high-resolution remote sensing image, so that objects with the same or similar colors can be accurately identified according to the characteristic that different ground objects have different sensitivities to light with different spectrums, thus enhancing the accuracy of semantic segmentation of the high-resolution remote sensing image.


Each embodiment of the present specification is described in a progressive manner, each embodiment focuses on the difference from other embodiments, and identical or similar parts of the embodiments may be obtained with reference to each other.


The principles and implementations of the present disclosure have been described with reference to specific embodiments. The description of the above examples is only for facilitating understanding of the method and the core idea of the present disclosure, and the described embodiments are only a part of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without departing from the inventive scope shall fall within the scope of the present disclosure.

Claims
  • 1. A semantic segmentation method for a high-resolution remote sensing image based on random blocks, comprising: partitioning a high-resolution remote sensing image into a plurality of random blocks; extracting mono-spectral feature data in each band from each random block; performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.
  • 2. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 1, wherein the partitioning the high-resolution remote sensing image into random blocks, to obtain a plurality of random blocks specifically comprises: randomly selecting a pixel point d0 in a central area of the high-resolution remote sensing image; cropping a square from the high-resolution remote sensing image to obtain a random block p0, wherein the square is centered at the pixel point d0 and has a randomly generated side length of len(p0); further cropping squares which are respectively centered at the four vertices d01, d02, d03, and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03 and p04 which neighbor the random block p0, wherein the side length of each square is in a range of 512≤len(·)≤1024; and repeating the step of “further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.
  • 3. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 1, wherein the supervised semantic segmentation network comprises an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module; the encoder, the RNN network, and the decoder are successively connected; and the first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.
  • 4. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 3, wherein the performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block specifically comprises: extracting abstract features from mono-spectral feature data in the jth bands of the ith random block pi and a random block pim neighboring the ith random block pi by using the encoder and according to a formula
  • 5. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 1, wherein before the step of fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block, the method further comprises: constructing a weight training network for the plurality of mono-spectral semantic segmentation probability plots, wherein the weight training network comprises a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and based on the multiple pieces of mono-spectral feature data of the random block, performing weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain a trained weight.
  • 6. A semantic segmentation system for a high-resolution remote sensing image based on random blocks, comprising: an image partitioning module, configured to partition a high-resolution remote sensing image into a plurality of random blocks; a mono-spectral feature data extraction module, configured to extract mono-spectral feature data in each band from each random block; a semantic segmentation module, configured to: perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and a fusion module, configured to fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.
  • 7. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 6, wherein the image partitioning module specifically comprises: a pixel point selection submodule, configured to randomly select a pixel point d0 in a central area of the high-resolution remote sensing image; a first image partitioning submodule, configured to crop a square from the high-resolution remote sensing image to obtain a random block p0, wherein the square is centered at the pixel point d0 and has a randomly generated side length of len(p0); a second image partitioning submodule, configured to further crop squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0, wherein the side length of each square is in a range of 512≤len(·)≤1024; and a third image partitioning submodule, configured to repeat the step of “further cropping squares which are respectively centered at the four vertices d01, d02, d03 and d04 of the random block p0 and have randomly generated side lengths len(p01), len(p02), len(p03), len(p04) from the high-resolution remote sensing image, to generate random blocks p01, p02, p03, and p04 which neighbor the random block p0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.
  • 8. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 6, wherein the supervised semantic segmentation network comprises an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module; the encoder, the RNN network, and the decoder are successively connected; and the first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.
  • 9. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 8, wherein the semantic segmentation module specifically comprises: an encoding submodule, configured to extract abstract features from mono-spectral feature data in the jth bands of the ith random block pi and a random block pim neighboring the ith random block pi by using the encoder and according to a formula
  • 10. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 6, further comprising: a weight training network construction module, configured to construct a weight training network for the plurality of mono-spectral semantic segmentation probability plots, wherein the weight training network comprises a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and a weight training module, configured to: based on the multiple pieces of mono-spectral feature data of the random block, perform weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain the trained weight.
Priority Claims (1)
Number Date Country Kind
202010708331.6 Jul 2020 CN national
US Referenced Citations (25)
Number Name Date Kind
10067509 Wang Sep 2018 B1
10147193 Huang Dec 2018 B2
10452947 Ahmed Oct 2019 B1
10671083 Zhu Jun 2020 B2
10671873 Wang Jun 2020 B2
10984225 Ghosh Apr 2021 B1
20130163829 Kim Jun 2013 A1
20150140522 Bose May 2015 A1
20150268058 Samarasekera Sep 2015 A1
20170200260 Bhaskar Jul 2017 A1
20180218497 Golden Aug 2018 A1
20190236342 Madden Aug 2019 A1
20190304098 Chen Oct 2019 A1
20190347498 Herman Nov 2019 A1
20190355103 Baek Nov 2019 A1
20190384963 Kim Dec 2019 A1
20200005074 Wang Jan 2020 A1
20200026928 Rhodes Jan 2020 A1
20200117906 Lee Apr 2020 A1
20200151497 Kojima May 2020 A1
20200218961 Kanazawa Jul 2020 A1
20200327309 Cheng Oct 2020 A1
20200349711 Duke Nov 2020 A1
20210073959 Elmalem Mar 2021 A1
20210118112 Huang Apr 2021 A1
Non-Patent Literature Citations (4)
Entry
Aytaylan et al. “Semantic Segmentation of Hyperspectral Images with the Fusion of Lidar Data;” IEEE 2016 IGARSS pp. 1-4.
Huang et al. “Weakly-supervised Semantic Segmentation in Cityscape via Hyperspectral Image” Computer Vision and Pattern Recognition (cs.CV) Dec. 18, 2020.
Lin et al. “RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Year: 2017, vol. 1, pp. 5168-5177.
Zhao et al. “ICNet for Real-Time Semantic Segmentation on High-Resolution Images” European Conference on Computer Vision ECCV 2018: Computer Vision—ECCV 2018 pp. 418-434.