IMAGE RESTORATION METHOD BASED ON PHYSICAL SCATTERING MODEL

Information

  • Patent Application
  • Publication Number: 20240046421
  • Date Filed: January 17, 2022
  • Date Published: February 08, 2024
  • Inventors
    • TIAN; Jiandong
    • ZHOU; Shijun
    • ZHANG; Zhen
Abstract
A new image formation model for descattering, different from previous works based on the atmospheric scattering model, is established. It provides a corresponding physical explanation for each scattering environment, illustrates the relationship between the light source position and the scattering conditions, and solves the problem of scattering removal under non-uniform illumination. Firstly, the vignetting of a camera is removed by means of an integrating sphere; then, a pure scattering part is found by an improved dark-channel-like prior method; and finally, the zenith angle, the azimuth angle, the extinction coefficient and other related physical quantities are calculated according to the light attenuation and backscattering distribution of the scattering medium, to eliminate the backscattering and light attenuation. Theoretically, the model can be used in any uniform scattering medium, such as underwater and fog environments, and works well in high-density media.
Description
TECHNICAL FIELD

The present invention relates to a scattering model, and particularly to an image restoration method based on physical parameters, which explains the relationship between the light source position and backscattering.


BACKGROUND

In recent years, advances in technology have allowed people to collect images and videos in strongly scattering environments. However, in practical engineering environments, removing backscattering has been an ongoing problem. Backscattering causes color distortion and blurring in the image, which severely affects the performance of standard computer vision algorithms, because such algorithms are generally developed to work in clear air. Therefore, in order for these algorithms to produce reasonable results, the image must first be de-scattered. The well-known atmospheric scattering model is based on the Beer-Lambert law:






$$I(x) = J(x)\,t(x) + A\,(1 - t(x)) \tag{1}$$






$$t(x) = e^{-\sigma z} \tag{2}$$

    • where A is the atmospheric light; t(x), which depends on the depth map, is determined by the attenuation coefficient σ and the depth z; the input image is I(x); and the clear image is J(x). The model is relatively simple and can remove scattering from air and water with only a few parameters. This simplicity also brings disadvantages: not all complex situations can be covered.


When the midday light shines from the south, the zenith angle is close to its daily minimum, and the overall brightness of the image is higher than at other times of day. The zenith angle is the angle between the incident direction of the light and the zenith direction, and its variation period can be a whole year. When shooting toward the light source, the image brightness is symmetrical about the central axis of the photo. Similarly, when the angle between the shooting direction and the lighting direction is 90°, the image brightness is monotonic along the long axis of the image. Within an image, a part farther from the light source has lower brightness. Similarly, the farther the lens is from the light source, the lower the overall brightness of the image. Underwater, this is reflected in the distance of the water surface from the lens. Camera settings, such as lens aperture, focal length and shutter speed, also affect the overall brightness and pixel distribution of the image. These light-induced phenomena affect the removal of backscatter. Thus, it is both necessary and physically meaningful to consider all these factors.


It can be seen from the above that a change in the position of the light source changes both the pixel values of the image and the distribution of the backscattering, while the atmospheric scattering model cannot take the light source information into account. Considering the relationship between the position of the light source and the scattering is the core idea of the method proposed by the present invention.


SUMMARY

In view of the above technical deficiencies, the present invention provides an image restoration method based on a physical parameter scattering model. 1) A new model based on physical parameters is proposed to eliminate the image degradation caused by backscattering and light attenuation in images captured in scattering media. 2) The limitations of previous works based on the well-known atmospheric scattering model are identified, and the problem of image contrast loss caused by non-uniform illumination in scattering media is solved by proposing a completely different model. The relationship between the scattering environment and the position of the light source is explained, and the model can theoretically be used for images captured in any homogeneous scattering medium (such as underwater and smoky environments). 3) The model is especially effective for images with high-density scattering.


The technical solution adopted by the present invention to solve the technical problems is: an image restoration method based on a physical scattering model, comprising: establishing a model mapping relationship between the physical position of a light source and scattering, and dividing an attenuated image into a backscattering component and a direct radiation component for calculation; taking image sampling points and approximating the most realistic physical parameters through multiple fittings; and substituting the calculated physical parameters back into the model to calculate the attenuation value and the backscattering of the entire image, to restore a clear image.


The method comprises the following steps:

    • step 1. calibrating the vignetting of a camera by an integrating sphere for the attenuated image;
    • step 2. converting image pixel coordinates into coordinates in a world coordinate system;
    • step 3. establishing a de-scattering mathematical model, dividing the attenuated image into two parts of direct radiance and backscatter, and deriving the model mapping relationship respectively;
    • step 4. sampling a corrected image from which vignetting is removed, conducting Otsu binarization processing, and obtaining a partial pixel region of pure scattering for fitting and parameter calculation;
    • step 5. removing backscattering and attenuation from the entire image by the calculated parameters.


Calibrating the vignetting of a camera by an integrating sphere comprises:

    • establishing a vignetting table;
    • taking an original attenuated image by the integrating sphere, obtaining a correction factor at each pixel position, and using this factor to eliminate vignetting from the original attenuated image pixel by pixel by a look-up table method to obtain the corrected image.


A direct radiation item can be expressed by the following formula:








$$I_d^c(x) = J^c(x)\, e^{-\sigma_c\left[\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z) + r\right]}$$








A backscattering item can be expressed by the following formula:









$$L_b^c(x,y) = E^c e^{-\sigma_c Y_u \sec(\theta_Z)}\, \beta_c\, P(g,\alpha)\, \eta \int_0^{Z_s} e^{-\sigma_c\left[\eta - \frac{y}{f}\sec(\theta_Z)\right]Z}\, dZ$$







The clear image is represented by the following formula:








$$J^c(x) = \frac{I^c(x) - L_b^c(x,y)}{e^{-\sigma_c\left[\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z) + r\right]}}$$










    • where I^c(x) is an input image to be restored, L_b^c(x, y) is the backscattering item, and $e^{-\sigma_c\left[\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z) + r\right]}$ is the direct radiation item.





Obtaining a partial pixel region of pure scattering comprises extracting pixels that only contain scattering components and have similar depths for estimating physical parameters:

    • a. converting the input image from a RGB space to an HSV space, and extracting a saturation channel and an intensity channel;
    • b. calculating an optimal global threshold by the Otsu maximum between-class variance method to obtain image regions with high intensity and saturation;
    • c. further sampling at intervals in x and y directions of the image to generate a point set V;
    • d. applying an improved DCP-like rule within the point set V of the image:









$$J^c(x) = \min_{y \in \Omega(x)} \left( I^c(y) \right), \quad x \in V$$







    • obtaining the image region where J^c(x) ≈ 0.





The physical parameters comprise the zenith angle of the sun, the azimuth angle, the focal length of the camera, illuminance, extinction coefficient, depth from an object to the camera, an anisotropy constant, and a distance from the object in water to a water surface.


During fitting and parameter calculation, since the influences of different parameters on functions are different, fitting and solving are performed in sequence according to the fitting priority of the parameters.


The method is used for restoring images under non-uniform illumination according to the relationship between the light source position and scattering distribution.


The present invention has the following beneficial effects and advantages:


1. The method of the present invention establishes a new image formation model for de-scattering, which is different from the currently popular image enhancement methods. The model reversely deduces the physical parameters of the shooting scene from the backscattering in the image, gives a physical explanation of the backscatter removal process, and solves the problem by inverting the physical process, making the image restoration effect more natural.


2. The method of the present invention innovatively explains the relationship between the position of the light source and the backscatter, and solves the problem of image de-scattering under non-uniform illumination.


3. The method of the present invention is superior to existing algorithms in de-scattering, with stable performance; the effect is especially prominent when the image has strong scattering.





DESCRIPTION OF DRAWINGS


FIG. 1 is a flow chart of the image restoration method based on the physical parameter scattering model;



FIGS. 2A-2H show images shot in pure scattering conditions without objects;



FIGS. 3A-3C show the cause of non-uniform illumination;



FIG. 4 illustrates the zenith angle and the azimuth angle of a light source;



FIG. 5 shows relevant calculation parameters of backscattering;



FIG. 6 shows the experimental environment;



FIGS. 7A-7F show the changes of pixel values with respect to θA;



FIG. 8 shows the changes of pixel values with respect to θZ;



FIGS. 9A-9B show the changes of pixel values with respect to g and σ;



FIG. 10 shows the change of pixel values with respect to f;



FIG. 11 shows the changes of pixel values with respect to Zt, M, Yu;



FIGS. 12A-12B show a high-scattering image and its restored result, corresponding to Table 2; and



FIG. 13 shows comparative experiment results of the present invention and other methods.





DETAILED DESCRIPTION

The present invention will be further described in detail below in combination with the embodiments. The method steps are described with reference to the drawings.


A scattering model based on physical parameters comprises the following steps:

    • 1. calibrating camera vignetting and removing image noise by using a uniform light source (such as an integrating sphere);
    • 2. establishing a mathematical model of de-scattering, dividing the attenuated image into two parts: direct radiance and backscatter, and calculating the two parts respectively;
    • 3. performing point sampling on the collected images, and performing partial pixel fitting and parameter calculation for pure scattering;
    • 4. removing backscattering and attenuation from the entire image by using the calculated parameters.


Calibrating vignetting comprises:


Establishing a de-vignetting table corresponding to settings including all focal lengths of the camera; and using a look-up table method, with the corresponding de-vignetting table, to remove vignetting from the image to be processed. Vignetting is a position-dependent illumination attenuation phenomenon that is unavoidable for all camera lenses. In general, vignetting arises because the light beam that passes through the aperture from an object point on the optical axis is larger than the imaging beam of an off-axis object point. In practice, vignetting does not follow the ideal cosine-fourth law exactly, so there has been much work on approximating similar curves and on improving the cosine-fourth law. Herein, a look-up table (LUT) method is used to eliminate vignetting. This operation should be done in a dark room, with a uniform light source of low specular reflection and known input intensity. An integrating sphere with a diameter of 1000 mm and 12 built-in lamps is used, and the diameter of the light outlet is 100 mm, to ensure that the light source is uniform. The correction factor at each pixel position is:











$$I_{LUT}(i,j) = \frac{I_{ref,\max}}{I_{ref}(i,j)} \tag{3}$$









    • where I_ref(i, j) represents the pixel value at position (i, j); I_ref,max represents the maximum value of I_ref(i, j); and I_LUT(i, j) represents the correction factor at coordinate (i, j). The correction factors at all positions are calculated and combined into the de-vignetting table. Then, images shot with the same camera settings can be corrected by using the look-up table method:









$$I'(i,j) = I(i,j)\, I_{LUT}(i,j) \tag{4}$$

    • where I(i, j) represents the input intensity value at pixel (i, j), and I′(i, j) represents the output intensity value after vignetting correction of the pixel.
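
As an illustration, formulas (3) and (4) can be implemented with a few lines of Python/numpy. This is a minimal sketch: the function names and the epsilon guard against division by zero are assumptions for illustration, not part of the disclosure.

    import numpy as np

    def build_devignetting_table(I_ref):
        # Formula (3): per-pixel correction factor from a flat-field image
        # I_ref shot against the integrating sphere (grayscale float array).
        return I_ref.max() / np.maximum(I_ref, 1e-6)  # epsilon avoids /0

    def remove_vignetting(I, I_lut):
        # Formula (4): scale each pixel of the attenuated image by its factor.
        return I * I_lut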


Backscatter is an optical effect caused by light that is randomly scattered by the suspended particles in the scattering medium and reaches the camera lens directly, without ever being reflected by an object. The direct radiation component is the light that is reflected by the surface of the object and then attenuated by the suspended particles in the scattering medium on its way to the camera lens.


The de-scattering mathematical model comprises:


1. Overview of Non-Uniform Illumination



FIGS. 2A and 2E show two pure scattering, object-free images taken underwater. Under the atmospheric light scattering assumption, the curves of the two marked lines should tend to constants. However, as shown in FIGS. 2A to 2H, the shapes of the marked pixel-value curves are strongly related to the position of the light source. For the vertical lines in FIGS. 2D and 2H, as marked in FIGS. 2B and 2F, the pixel values decrease as the ordinate increases, because positions with larger ordinates are farther from the light source. For the horizontal lines in FIGS. 2C and 2G, as marked in FIGS. 2B and 2F, the conditions are more complicated. We found that when the angle between the shooting direction on the earth plane and the light source direction is kπ (k=1, 2, 3 . . . ), the curve remains symmetrical about a central axis. When this angle is kπ+π/2 (k=1, 2, 3 . . . ), the curve remains monotonically increasing or monotonically decreasing. When the shooting angle lies between these two situations, the pixel-value curve has a maximum or a minimum. The above observations can be summarized simply: the closer to the light source, the higher the illuminance and the larger the pixel value. In addition, because light decays faster in denser media, this phenomenon becomes more severe as the turbidity of the medium increases. These problems are non-negligible, and may cause errors when depth estimation and image restoration are performed in denser media.


As shown in FIGS. 3A-3C, light propagation results in non-uniformity of illumination and intensity. FIG. 3A shows that the light received by the camera has two parts: the direct attenuation part from the object and the scattering part from the light source. However, image formation models based on the atmospheric light scattering model do not consider the light transmission from the light source to the object. As shown in FIG. 3B, light D2 travels a longer distance than light D1, which means that light D2 is attenuated more severely. The atmospheric light scattering model also assumes that the scattered light is parallel and travels equal distances to the scattering point. However, as shown in FIG. 3C, the scattered rays from the light source to the scattering points (L1 and L2) are also different at different camera viewing angles (O1 and O2). The two real factors illustrated in FIGS. 3B and 3C cause the illumination to be uneven in practice.


2. Model Establishment


A charge-coupled device (CCD) of the camera is used as the origin of a world coordinate system. x axis and y axis on the image plane are parallel to X axis and Y axis of the world coordinate system, and Z axis is aligned with the optical axis of the camera. The point x=(x0, y0) on the image plane is made to correspond to the point X=(X0, Y0, Z0) on the surface of the object, and the relationship therebetween is








$$(X_0, Y_0, Z_0) = \left( \frac{Z_0}{f} x_0,\ \frac{Z_0}{f} y_0,\ Z_0 \right),$$




where f represents the focal length of the camera.


Then, for a point (x, y) on an image coordinate system, its corresponding coordinates on the world coordinate system can be expressed as:










$$(X, Y, Z) = \left[ \frac{(x - x_0)\,w_x}{f}\, Z,\ \frac{(y - y_0)\,w_y}{f}\, Z,\ Z \right] \tag{5}$$









    • where x_0 and y_0 are the center coordinates of the image, w_x and w_y are the physical dimensions of a pixel calculated from the CCD width, and f is the focal length.
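
A minimal sketch of the coordinate transformation of formula (5); pixel coordinates, depth and the camera constants are assumed known, and the names are illustrative.

    def pixel_to_world(x, y, Z, x0, y0, wx, wy, f):
        # Formula (5): map pixel (x, y) at depth Z to world coordinates.
        # x0, y0: image center; wx, wy: physical pixel sizes; f: focal length.
        X = (x - x0) * wx / f * Z
        Y = (y - y0) * wy / f * Z
        return X, Y, Z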





A single scattering model is used, and forward scattering is ignored. Thus, the radiation amount which reaches the camera can be divided into direct radiation and backscatter:






$$I^c(x) = I_d^c(x) + I_b^c(x), \quad c \in \{R, G, B\} \tag{6}$$

    • where I_b is the backscatter and I_d is the direct radiation component. The backscatter intensity and the attenuation at each pixel are therefore calculated accurately to obtain a clear image.


According to the Beer-Lambert law, light decays with the distance from the exit point of the light to the lens, and its propagation is governed by an extinction coefficient. The extinction coefficient varies with the wavelength of light. In water, long-wavelength visible light decays fastest, so water finally appears blue to the human eye; red light tends to decay faster than green and blue light, so the extinction coefficients of the different channels should not be the same. In addition, when sunlight illuminates from a distance, the part scattered in each direction should also be considered, which is determined by a phase function and a scattering coefficient.


3. Direct Radiation Component


For a point x=(x0, y0) on the image plane, it is assumed that this point corresponds to the coordinate X=(X0, Y0, Z0) on the object in the world coordinate system. Then for this point, the direct radiation component is:










$$E^c = \frac{I_0}{\left\|\tau D(X)\right\|^2} \tag{7}$$









    • where ∥D(X)∥ represents the distance the light travels to the object, I_0 represents the radiation intensity of the light source before entering the scattering medium, and c represents a color channel. The distance ∥D(X)∥ comprises a distance ∥τD(X)∥ (0≤τ≤1) in water and a distance ∥(1−τ)D(X)∥ in air. After that, the light is reflected by the object with a reflectivity of ρ^c(X) and reaches the camera lens over a distance ∥X∥. After all these processes, the direct irradiance reaching the camera lens can be written as:









$$I_d(x) = E^c\, e^{-\sigma\left(\|\tau D(X)\| + \|X\|\right)} \tag{8}$$

    • Yu is set as a distance from the camera to the horizontal plane. As shown in FIG. 5, considering a zenith angle θZ shown in FIG. 4, the propagation distance of light before being reflected by an object in water is equal to:










$$\left\|\tau D(X)\right\| = \left(Y_u - \frac{y}{f} Z\right) \sec(\theta_Z) \tag{9}$$









    • r(x, y) is made to be equal to the distance ∥X∥ from the object to the camera lens, to obtain:













$$r(x,y) = \sqrt{\left(\frac{x}{f} Z\right)^2 + \left(\frac{y}{f} Z\right)^2 + Z^2} \tag{10}$$

$$L_d^c(x,y) = E^c\, e^{-\sigma\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z)}\, \rho^c(x,y)\, e^{-\sigma\, r(x,y)} \tag{11}$$







Finally, the direct radiation component can be represented by physical parameters:











$$I_d^c(x) = J^c(x)\, e^{-\sigma_c\left[\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z) + r\right]} \tag{12}$$









    • where J^c(x) represents the clear image, and r is shorthand for the distance r(x, y) from the pixel's object point to the origin.





When the light source is in a scattering medium, Yu refers to the distance from the object to the light source; otherwise, Yu refers to the distance from the object to the horizontal plane or to the top of the atmosphere.
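
For illustration, the attenuation factor of formula (12), with r from formula (10), can be evaluated per pixel as follows. This is a sketch under the stated geometry: angles are in radians and pixel coordinates are centered on the image center.

    import numpy as np

    def direct_attenuation(x, y, Z, f, Yu, sigma_c, theta_Z):
        # Formula (10): distance from the object point to the camera origin.
        r = np.sqrt((x / f * Z) ** 2 + (y / f * Z) ** 2 + Z ** 2)
        # Exponent of formula (12): in-water light path plus object-to-camera path.
        path = (Yu - y / f * Z) / np.cos(theta_Z) + r  # sec = 1/cos
        return np.exp(-sigma_c * path)

    # The direct radiation item is then J^c(x) multiplied by this factor.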


4. Backscattering Component


Backscatter is the optical response of light that is never reflected by the surface of an object but is scattered toward the camera lens by the medium, undergoing small-angle random scattering by the suspended particles in the water. A phase function is used to describe the degree of scattering of light in a particular direction, and a scattering coefficient β is used to represent the degree of scattering of light. The most classic phase function is the Henyey-Greenstein phase function, which is expressed by two parameters, the molecular anisotropy g and the scattering angle α; the relative amounts of forward scattering and backscattering are described by g∈(−1,1):










$$P(g,\alpha) = \frac{1}{4\pi} \cdot \frac{1 - g^2}{\left[1 + g^2 - 2g\cos\alpha\right]^{3/2}} \tag{13}$$









    • where the value range of the scattering angle α is [0, π], which is the angle between the direction of an object point and the direction of the light source:













$$\cos\alpha = u \cdot \hat{X} \tag{14}$$

$$X = \left( \frac{Z}{f} x,\ \frac{Z}{f} y,\ Z \right) \tag{15}$$

$$u = \left[ \sin(\theta_Z)\sin(\theta_A),\ \cos(\theta_Z),\ \sin(\theta_Z)\cos(\theta_A) \right] \tag{16}$$









    • where θZ represents a zenith angle and θA represents an azimuth angle.





According to the principle u·X=∥u∥∥X∥cos α, the scattering angle can be expressed as a real physical quantity:










$$\cos(\alpha) = \frac{\sin(\theta_Z)\sin(\theta_A)\,X + \cos(\theta_Z)\,Y + \sin(\theta_Z)\cos(\theta_A)\,Z}{\sqrt{X^2 + Y^2 + Z^2}} = \frac{\sin(\theta_Z)\sin(\theta_A)\,x + \cos(\theta_Z)\,y + \sin(\theta_Z)\cos(\theta_A)\,f}{\sqrt{x^2 + y^2 + f^2}} \tag{17}$$
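
As a sketch, formulas (13) and (17) translate directly into numpy (angles in radians, vectorized over pixel coordinates):

    import numpy as np

    def cos_alpha(x, y, f, theta_Z, theta_A):
        # Formula (17): cosine of the scattering angle between the viewing ray
        # through pixel (x, y) and the light direction (theta_Z, theta_A).
        num = (np.sin(theta_Z) * np.sin(theta_A) * x
               + np.cos(theta_Z) * y
               + np.sin(theta_Z) * np.cos(theta_A) * f)
        return num / np.sqrt(x ** 2 + y ** 2 + f ** 2)

    def hg_phase(g, cos_a):
        # Formula (13): Henyey-Greenstein phase function, g in (-1, 1).
        return (1 - g ** 2) / (4 * np.pi * (1 + g ** 2 - 2 * g * cos_a) ** 1.5)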







According to formula (10), the formula is obtained:












$$dr = \sqrt{(dX)^2 + (dY)^2 + (dZ)^2} = \sqrt{\left(\frac{dX}{dZ}\right)^2 + \left(\frac{dY}{dZ}\right)^2 + 1}\; dZ = \frac{1}{f}\sqrt{x^2 + y^2 + f^2}\; dZ \tag{18}$$







Backscatter is obtained by integrating along the depth from the camera to the scattering point:






$$L_b^c(x) = \int_0^{Z_s} E^c e^{-\sigma_c d_x}\, \beta_c\, P(g,\alpha)\, e^{-\sigma_c r}\, dr \tag{19}$$

    • where Z_s represents the depth, d_x denotes the light path to the scattering point (cf. formula (9)), β_c represents the scattering coefficient, and σ_c represents the extinction coefficient.


The formula is simplified by setting

$$\eta = \frac{1}{f}\sqrt{x^2 + y^2 + f^2}, \qquad M_c = E^c \beta_c$$

to obtain the backscattering component below, which is determined by the focal length, the scattering angle, the depth, and the extinction coefficient.












$$L_b^c(x,y) = E^c e^{-\sigma_c Y_u \sec(\theta_Z)}\, \beta_c\, P(g,\alpha)\, \eta \int_0^{Z_s} e^{-\sigma_c\left[\eta - \frac{y}{f}\sec(\theta_Z)\right]Z}\, dZ = \frac{M_c\, e^{-\sigma_c Y_u \sec(\theta_Z)}\, f\, \eta\, P(g,\alpha)\left(1 - e^{-\frac{\sigma_c}{f}\left[f\eta - y\sec(\theta_Z)\right]Z_s}\right)}{\sigma_c\left[f\eta - y\sec(\theta_Z)\right]} \tag{20}$$
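
Because the integral has the closed form above, the backscattering item can be evaluated directly. The following sketch reuses cos_alpha and hg_phase from the earlier snippet; coordinates centered on the image center and a non-zero denominator are assumed.

    import numpy as np

    def backscatter(x, y, f, Mc, sigma_c, Yu, g, theta_Z, theta_A, Zs):
        # Formula (20), closed form, with eta = sqrt(x^2 + y^2 + f^2) / f.
        eta = np.sqrt(x ** 2 + y ** 2 + f ** 2) / f
        P = hg_phase(g, cos_alpha(x, y, f, theta_Z, theta_A))
        sec_tz = 1.0 / np.cos(theta_Z)
        k = f * eta - y * sec_tz               # assumed non-zero per pixel
        integral = f * (1 - np.exp(-sigma_c / f * k * Zs)) / (sigma_c * k)
        return Mc * np.exp(-sigma_c * Yu * sec_tz) * P * eta * integral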







The pixel fitting and parameter calculation comprise:


According to formulas (6), (12) and (20), the formula for a clear image can be obtained:











$$J^c(x) = \frac{I^c(x) - L_b^c(x,y)}{e^{-\sigma_c\left[\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z) + r\right]}} \tag{21}$$









    • where I^c(x) is the input image to be restored, L_b^c(x, y) is the backscattering item, and $e^{-\sigma_c\left[\left(Y_u - \frac{y}{f}Z\right)\sec(\theta_Z) + r\right]}$ is the direct radiation item.





After the vignetting in the image is removed, the image plane coordinates are transformed into the real coordinate system for the de-scattering calculation. The image is sampled with a step size of 50 pixels, and a method similar to the dark channel prior is applied to the sampling points to find the part where the direct radiance is close to 0, which is collected into a set V. For the set V, the unknown parameters Mc, σ, Yu, g, f, θZ, θA, Zs in the formula are calculated by an optimal fitting method. Finally, the backscattering over the entire image is calculated from the obtained parameters, the backscattering item is subtracted, and the result is divided by the attenuation item to obtain the final result.
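
Given the fitted parameters, the final restoration of formula (21) is a per-pixel subtraction and division. A minimal sketch, reusing the backscatter and direct_attenuation helpers from the earlier snippets (array shapes and names are assumptions):

    def restore(I, x, y, Z, f, Mc, sigma_c, Yu, g, theta_Z, theta_A, Zs):
        # Formula (21): subtract the backscattering item, then divide by
        # the direct attenuation item, per pixel and per channel.
        Lb = backscatter(x, y, f, Mc, sigma_c, Yu, g, theta_Z, theta_A, Zs)
        t = direct_attenuation(x, y, Z, f, Yu, sigma_c, theta_Z)
        return (I - Lb) / t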


1. Experimental Preparation


To verify the accuracy of the model, an experimental environment can be arranged as shown in FIG. 6. A water tank filled with a scattering medium (diluted milk) is used for the experiment. The light source is the sun, and the tank, with a size of 90×90×45 cm, is initially filled with clean water; milk is then added several times to increase the turbidity of the scattering environment. Experiments are designed to test and verify the image restoration model. Since the equations involve actual physical values, the following settings can be made to avoid interference of objective factors with the calculation of the physical parameters:


Except for the side where a lens is placed, a black shading cloth is used to block all the inside and outside surfaces of the water tank, including the bottom surface of the water tank, to prevent calculation interference caused by the reflection of sunlight in the glass.


A compass is used to calibrate the orientation of the water tank so that the north-south direction of the tank is parallel to the earth's north-south axis; and the water tank is put on a level roof to prevent inaccuracy in the calculation of the zenith angle.


After the milk is added, the solution is stirred evenly and left to stand; images are shot after the water surface is still.


The functions of the camera, such as automatic light supplement, automatic adjustment and peripheral light correction are turned off, and the most original raw data are used for processing.


When pictures are shot, water droplets that are adsorbed on the glass surface on the shooting side are removed or avoided as much as possible.


When pictures are shot, the camera is held horizontally and prevented from tilting as much as possible; otherwise, the calculation of parameters such as the direction angle may be inaccurate.


2. Experimental Process


2.1 Software Environment Configuration


The computer platform in this experiment is configured with an Intel® Core™ i9-10900K CPU @ 3.70 GHz, 32 GB of memory, and an Nvidia GeForce RTX 2080 SUPER graphics card. The operating system is Windows 10 Professional Edition, and the software development environment is Matlab 2020b.


2.2 Parameter Calculation


To calculate the backscattering component I_b, some regions whose pixel values are close to the backscattering values, i.e., whose direct radiation component equals 0, need to be found.






$$I_d^c(x) = 0 \tag{22}$$

$$I^c(x) = I_b^c(x), \quad c \in \{R, G, B\} \tag{23}$$


Therefore, some pixels that contain only scattering components and have similar depths need to be extracted to estimate the physical parameters. Firstly, the input image is converted from RGB space to HSV space, and the saturation and intensity channels are extracted. Then, the optimal global threshold is calculated by the graythresh function in Matlab, i.e., the Otsu maximum between-class variance method, and regions with high intensity and saturation are extracted. In these regions, further sampling at 50-pixel intervals in the x and y directions of the image is conducted to generate a point set V. For each point belonging to V, the minimum value over five adjacent pixels is taken as follows.












$$J^c(x) = \min_{y \in \Omega(x)} \left( I^c(y) \right), \quad x \in V \tag{24}$$







Formula (24) is inspired by the dark channel prior method proposed by He et al. The difference is that the present invention does not take the minimum over the different color channels, so the calculation can be adapted to more light source types. In addition, to ensure that the points used in the calculation contain only the backscattering item, i.e., J^c(x) ≈ 0, the modified DCP-like rule is applied on the point set V rather than on the entire image.
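
A sketch of this extraction step in Python. It substitutes scikit-image's Otsu threshold for Matlab's graythresh and takes the neighborhood minimum on the intensity channel, so the library choices and the window handling are assumptions rather than the exact implementation:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv
    from scipy.ndimage import minimum_filter
    from skimage.filters import threshold_otsu

    def pure_scatter_points(rgb, step=50, window=5):
        # rgb: float image in [0, 1]; extract saturation and intensity.
        hsv = rgb_to_hsv(rgb)
        s, v = hsv[..., 1], hsv[..., 2]
        # Otsu threshold on both channels (counterpart of graythresh).
        mask = (s > threshold_otsu(s)) & (v > threshold_otsu(v))
        # Minimum over a small neighborhood Ω(x), as in formula (24).
        local_min = minimum_filter(v, size=window)
        ys, xs = np.nonzero(mask)
        keep = (ys % step == 0) & (xs % step == 0)  # 50-pixel sampling grid
        return xs[keep], ys[keep], local_min[ys[keep], xs[keep]]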


2.3 Curve Fitting


The corresponding physical parameters are calculated by using the extracted regions that contain only backscattering. The backscattering item in formula (20) contains eight parameters Mc, σc, Yu, g, f, θZ, θA, Zt. Theoretically, for an equation with eight parameters, the analytical solutions of all parameters can be obtained by taking ten or more points. However, solving them one by one is difficult and time-consuming, and the point selection is strongly affected by noise, so fitting is a more time-saving method. Because underwater disturbances introduce a lot of noise and there are many parameters, the solution of the equation easily drifts during fitting, and the parameters must be constrained. In the fitting calculation, each parameter has three corresponding values: an initial value, an upper fitting limit and a lower fitting limit. Choosing the right starting point is especially important.


In the experiment, milk is used as the scattering medium. According to calculations of the anisotropy parameter of milk in previous work, the value range of g can be roughly reduced from [−1, 1] to [0, 1]. Because the amount of added milk is much smaller than the volume of the water tank, the initial value of g can be set to a small value of 0.1. As shown in the figure, g mainly affects the slope along the vertical axis in three-dimensional coordinates, which is reflected in a brightness difference that gradually changes with increasing water depth.




















TABLE 1

Parameter   M       Yu    σ           f       g        θA       θZ      Zt      x         y         z
Value       64.53   120   0.0008056   95.49   0.2837   0.3787   40.04   978.8
Range       +       +     [0, 1]      105              30       45      800     [0, Xs]   [0, Xs]   +









By using a control variable method, the mathematical relationships obtained by substituting the parameters shown in Table 1 into the backscattering function can be examined. θA represents the azimuth angle from the light source to the camera, which mainly controls the tilting of the pixel values in the x-axis direction. As shown in FIGS. 7A-7C, when the θA values of two functions are symmetric about 180°, the pixel value z is symmetric about x=x_s/2, and the tilting directions also differ between [0, π] and [π, 2π]. x_s is the maximum x value in the image coordinate system. The mathematical derivation is as follows.


The function is defined:










$$f(x, \theta_A) = \cos(\alpha) = \frac{\sin(\theta_Z)\sin(\theta_A)\,x + \cos(\theta_Z)\,y + \sin(\theta_Z)\cos(\theta_A)\,f}{\sqrt{x^2 + y^2 + f^2}} \tag{25}$$








Then









$$f(-x) = f(2\pi - \theta_A) \tag{26}$$







A phase function is:










$$P(\alpha) = \frac{1}{4\pi} \cdot \frac{1 - g^2}{\left[1 + g^2 - 2g\,f(x)\right]^{3/2}} \tag{27}$$







Obviously,






$$P(-x) = P(2\pi - \theta_A) \tag{28}$$


In formula (20), the phase function is the only factor that contains x and θA, so the same relation holds for the backscattering:






$$L_b(-x) = L_b(2\pi - \theta_A) \tag{29}$$


Because formula (20) takes the image center as the coordinate origin, while the first pixel of the image is usually used as the coordinate origin in calculation, the above formula can be changed to






$$L_b(x_s - x) = L_b(2\pi - \theta_A) \tag{30}$$


In addition, as shown in FIGS. 7D-7F, when θA=kπ, the overall image pixel values are symmetric about x=x_s/2. Thus, theoretically, for the same image, θA has only one solution. In fitting, the value of θA should be set between 0° and 360°; the initial value is traversed from 10° to 350° in steps of 10°; the lower limit from 0° to 340° in steps of 10°; and the upper limit from 20° to 360° in steps of 10°. All fitted values obtained during the traversal are compared to determine the interval and initial value of θA.


θZ represents the zenith angle of the light source. As shown in FIG. 8, the change of θZ mainly controls the change of the function amplitude. However, when θZ increases, both the numerator and the denominator of the fraction in formula (20) increase, so there is no simple monotonic relationship between θZ and the overall pixel value. In fitting, the value of θZ lies between 0° and 90°; the initial value is traversed from 5° to 85° in steps of 5°; the lower limit from 0° to 80°; and the upper limit from 10° to 90°. All fitted values obtained during the traversal are compared to determine the interval and initial value of θZ.


The parameters σc and g are related to the scattering medium. Frisvad et al. proposed a theoretical model to calculate the scattering properties of participating media, and Narasimhan et al. measured the scattering parameters of 40 common materials, including g, by increasing the concentration of diluted media. As shown in FIG. 9A, g mainly affects the slope along the z axis on the y-z coordinate plane. In fitting, the step size for traversing g is set to 0.1, and [0, 1] is traversed to find the most suitable initial value. As shown in FIG. 9B, σc represents the attenuation coefficient, which varies the overall amplitude of the function and hardly affects the gradient of L_b with respect to x, y, z. According to the research of Narasimhan and Jerlov, this value is generally on the order of 10^-5/mm to 10^-3/mm, so its initial value can be set on the order of 10^-4, the lower and upper limits are 10^-5 and 1 respectively, and the traversing step size is 0.0001.


The focal length f mainly controls the curvature of the function along the x axis and y axis, as shown in FIG. 10, which is reflected in the size of the gradient with respect to y or z, and in the difference between the maximum and minimum values of the backscattering. This can be explained by the mechanism of camera shooting: when the camera does not move, the larger the focal length, the narrower the field of view, and the brightness changes along a curve. Thus, a change of the focal length causes a change of the brightness difference in the image. For the Canon 5D Mark II camera, the limits [24, 105] and a step size of 10 can be used directly in all fitting calculations. In addition, reading the focal length directly from the EXIF tag of the pictures can be considered.


As shown in FIG. 11, the parameters Zt, Yu, M mainly affect the overall amplitude and rarely change the slope. Zt represents the depth from the object to the camera: the larger Zt is, the more turbid the entire image is, and the stronger the backscattering. M is usually not a large value, and its product with f is close to the value of the background light. In addition, the gap between the upper and lower limits needs to be kept large enough to avoid excluding solutions with a high goodness of fit.


In fitting, because different parameters affect the function differently, fitting priorities should be set. For the parameters Mc, Yu, σc, f, g, θA, θZ, Zt, the priorities are set as [5 8 1 4 7 6 2 3] respectively, and fitting is conducted in priority order, so as to select the result with the highest goodness of fit.
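
The priority-ordered fitting can be sketched by fitting one parameter at a time with scipy.optimize.curve_fit while holding the others at their current estimates. The interpretation that priority 1 is fitted first, the bounds dictionary, and the model wrapper are assumptions for illustration:

    from scipy.optimize import curve_fit

    PARAMS = ["Mc", "Yu", "sigma", "f", "g", "thetaA", "thetaZ", "Zt"]
    PRIORITY = [5, 8, 1, 4, 7, 6, 2, 3]  # as given above

    def fit_in_priority_order(xdata, zdata, init, bounds, model):
        # init: dict of starting values; bounds: dict name -> (lo, hi);
        # model(xdata, **params) evaluates formula (20) at the sample points.
        est = dict(init)
        for _, name in sorted(zip(PRIORITY, PARAMS)):
            def single(x, value, _name=name):
                return model(x, **{**est, _name: value})
            lo, hi = bounds[name]
            popt, _ = curve_fit(single, xdata, zdata,
                                p0=[est[name]], bounds=([lo], [hi]))
            est[name] = popt[0]
        return est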


The overall algorithm process is provided in algorithm 1.












Algorithm 1: Image restoration algorithm based on scattering model

Input: the image I(x) to be restored.
Output: the clear image J(x).

1. Removing vignetting;
2. Conducting coordinate transformation;
3. Finding the point set V that only contains the backscattering part;
4. For the physical parameters Mc, Yu, σc, f, g, θA, θZ, Zt in each channel:
5. Conducting parameter fitting according to formula (20) and estimating the parameter values;
6. Substituting the estimated parameter values back into formula (20) to calculate the backscattering value and the attenuation value of each point on the image;
7. Calculating and outputting the clear image.










3 Fitting Results and Comparative Experiments


For an image with high scattering as shown in FIG. 12A, the contents of the objects inside are almost invisible to the naked eye. The actual physical parameters corresponding to the image are calculated as shown in Table 2:

















TABLE 2

Parameter         M       Yu    σ          f       g        θA      θZ      Zt
Estimated value   182.1   100   0.001477   95.49   0.3307   27.87   48.68   753.7
Actual value              100              105              30      45      800









The restored image is shown in FIG. 12B. It can be clearly seen that the present invention removes scattering of considerable density, and the estimated physical parameters are very close to the actual values, which verifies the correctness of the model.



FIG. 13 shows comparative experimental results between this method and other methods. The comparison methods are Peng's method (IBLA), Huang's method (RGHS), the UDCP method, and Song's method (ULAP). The superiority of the present invention in improving image vision is visible to the naked eye, and the present invention is more effective than the other methods.


4 Conclusion


The present invention proposes a new physics-based method to restore densely scattered images. The method can better handle non-uniform illumination, which is ignored in common atmospheric light scattering models. The present invention uses completely real physical parameters to describe the distribution of backscattering and direct radiation, and uses curve fitting to remove the backscattering. It is believed that this work will open up a new way for research on de-scattering, underwater enhancement and image restoration in turbid media.

Claims
  • 1. An image restoration method based on a physical image formation model for descattering, characterized by comprising: establishing a model mapping relationship between a physical position of a light source and scattering, and dividing an attenuated image into a backscattering component and a direct radiation component for calculation; taking image sampling points and approximating a most real physical parameter through multiple fittings; and substituting the calculated physical parameter into a model to calculate an attenuation value and backscattering of the entire image, to restore a clear image.
  • 2. The image restoration method based on the physical scattering model according to claim 1, characterized in that the method comprises the following steps: step 1. calibrating the vignetting of a camera by an integrating sphere for the attenuated image; step 2. converting image pixel coordinates into coordinates in a world coordinate system; step 3. establishing a de-scattering mathematical model, dividing the attenuated image into two parts of direct radiation and backscattering, and deriving the model mapping relationship respectively; step 4. sampling a corrected image from which vignetting is removed, conducting Otsu binarization processing, and obtaining a partial pixel region of pure scattering for fitting and parameter calculation; step 5. removing backscattering and attenuation from the entire image by the calculated parameters.
  • 3. The image restoration method based on the physical scattering model according to claim 1, characterized in that calibrating the vignetting of a camera by an integrating sphere comprises: establishing a vignetting table; taking an original attenuated image by the integrating sphere, obtaining a correction factor at each pixel position, and using this factor to eliminate vignetting from the original attenuated image pixel by pixel by a look-up table method to obtain the corrected image.
  • 4. The image restoration method based on the physical scattering model according to claim 1, characterized in that a direct radiation item can be expressed by the following formula:
  • 5. The image restoration method based on the physical scattering model according to claim 1, characterized in that a backscattering item can be expressed by the following formula:
  • 6. The image restoration method based on the physical scattering model according to claim 1, characterized in that the clear image is represented by the following formula:
  • 7. The image restoration method based on the physical scattering model according to claim 1, characterized in that obtaining a partial pixel region of pure scattering comprises extracting pixels that only contain scattering components and have similar depths for estimating physical parameters: a. converting the input image from the RGB space to the HSV space, and extracting a saturation channel and an intensity channel; b. calculating an optimal global threshold by the Otsu maximum between-class variance method to obtain image regions with high intensity and saturation; c. further sampling at intervals in the x and y directions of the image to generate a point set V; d. applying an improved DCP-like rule within the point set V of the image:
  • 8. The image restoration method based on the physical scattering model according to claim 5, characterized in that the physical parameters comprise an incident zenith angle of the sun, a direction angle, a focal length of the camera, illuminance, an extinction coefficient, a depth from an object to the camera, an anisotropy constant, and a distance from the object in water to a water surface.
  • 9. The image restoration method based on the physical scattering model according to claim 8, characterized in that during fitting and parameter calculation, since the influences of different parameters on functions are different, fitting and solving are performed in sequence according to the fitting priority of the parameters.
  • 10. The image restoration method based on the physical scattering model according to claim 1, characterized in that the method is used for restoring images under non-uniform illumination according to the relationship between the light source position and scattering distribution.
Priority Claims (1)
Number Date Country Kind
202111261649.5 Oct 2021 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2022/072313 1/17/2022 WO