A Soft Measurement Method For Dioxin Emission Of Grate Furnace MSWI Process Based On Simplified Deep Forest Regression Of Residual Fitting Mechanism

Information

  • Patent Application
  • Publication Number
    20240419872
  • Date Filed
    April 26, 2023
  • Date Published
    December 19, 2024
  • CPC
    • G06F30/27
    • G06F2111/10
  • International Classifications
    • G06F30/27
    • G06F111/10
Abstract
The invention provides a soft measurement method for dioxin emission of the grate furnace MSWI process based on simplified deep forest regression with a residual fitting mechanism. The highly toxic pollutant dioxin (DXN) generated in the solid waste incineration process is a key environmental indicator that must be controlled. Rapid and accurate soft measurement of the DXN emission concentration is urgently needed for emission reduction and control of this pollutant. The method comprises the following steps: firstly, carrying out feature selection on high-dimensional process variables by adopting mutual information and a significance test; then, constructing a simplified deep forest regression (SDFR) algorithm to learn the nonlinear relationship between the selected process variables and the DXN emission concentration; and finally, designing a gradient enhancement strategy based on a residual error fitting (REF) mechanism to improve the generalization performance of the layer-by-layer learning process. The method is superior to other methods in terms of prediction precision and time consumption.
Description
TECHNICAL FIELD

The invention belongs to the field of solid waste incineration.


BACKGROUND

Municipal solid waste (MSW) treatment aims to achieve harmlessness, reduction and resource utilization, of which MSW incineration (MSWI) is currently the main method. However, the MSWI process is also one of the main industrial sources of dioxins (DXN), a highly toxic organic pollutant, accounting for approximately 9% of total emissions. MSWI mainly uses technologies such as grate furnaces, fluidized beds and rotary kilns, among which grate furnace technology accounts for the largest proportion. The optimized operation of the grate-furnace-based MSWI process makes an important contribution to the reduction of DXN emissions. Therefore, it is necessary to conduct high-precision real-time detection of the DXN emission concentration.


Data-driven soft measurement technology can effectively solve the above problems, that is, using machine learning or deep learning methods to characterize the correlation between easily measurable process variables and DXN emission concentrations. This usually requires determining a mapping function to predict DXN emission concentrations. For example, genetic programming has been combined with a neural network (NN) to model DXN emissions, but it is not suitable for different types of incineration plants; a design based on a back-propagation NN (BPNN) has poor portability, and BPNN suffers from serious over-fitting when facing small-sample problems; selective ensemble and variable projection importance evaluation strategies, together with support vector machines and the kernel latent structure mapping algorithm, have been adopted to select valuable process variables for constructing a DXN soft sensor model, but such a model cannot represent deep features.


Based on 12 years of DXN data from an 800-ton grate furnace, a simplified deep forest regression method with a residual fitting mechanism (SDFR-ref), offering high accuracy and short training time, is proposed. The main innovations include: using decision trees to replace complex forest algorithms, thereby reducing the size of the deep forest ensemble model; using a residual fitting strategy with a learning factor between cascade layers to give the model higher predictive performance; and using mutual information (MI) and a significance test (ST) for feature selection to simplify the input of the soft sensor model. In China, incineration is the main MSW treatment method, and its typical process is shown in FIG. 1.


As shown in FIG. 1, the MSWI process flow based on the grate furnace includes six stages: solid waste storage and transportation, solid waste incineration, waste heat boiler, steam power generation, flue gas purification and flue gas emission. At present, MSWI factories are mainly concentrated in coastal areas, and more than 90% of them use grate furnaces. The grate-type MSWI process has the advantages of large daily processing capacity, stable operation, and low DXN emission concentration. The detection of DXN emission concentration in the invention is aimed at the “smoke G3” position in the flue gas emission stage.


The MSWI plant studied in this article was ignited and put into operation in 2009.


From 2009 to 2013, the emission level of DXN was not higher than China's environmental emission standard (GB18485-2001), which is 1 ng I-TEQ/Nm3 (at 11% oxygen content). Correspondingly, the number of DXN detections increased year by year, and finally stabilized at 4 times/year. Since 2014, China has revised the emission limit of DXN (updated from 1 ng I-TEQ/Nm3 to 0.1 ng I-TEQ/Nm3). Obviously, increasingly stringent emission restrictions have led to a gradual increase in the number of DXN tests by enterprises and governments, and the operating costs of enterprises have also increased accordingly.


SUMMARY

The invention aims to explore how to use MSWI process data and limited DXN detection data to establish a DXN soft measurement model that provides key indicator data for MSWI companies' DXN emission reduction optimization control and cost reduction.


The invention proposes a modeling strategy based on feature selection and SDFR-ref. The structure is shown in FIG. 2.


As can be seen from FIG. 2, the proposed modeling strategy includes a feature selection module based on MI and ST and an SDFR module based on the residual fitting mechanism. The feature selection module selects the corresponding features by calculating the MI value and ST value of each feature. For the SDFR module, Layer-k represents the k-th layer model, ŷ1Regvec represents the output vector of the first-layer model, v1Augfea represents the augmented regression vector input to the second layer, $\bar{\hat{y}}_k^{\mathrm{Regvec}}$ represents the average value of ŷkRegvec, and α is the residual learning rate between layers; X and XSel respectively represent the process data before and after feature selection; y, ŷ and e are the true value, predicted value and prediction error respectively.


In addition, {δMI, δSL, θ, T, α, K} represents the learning parameter set of the proposed SDFR-ref, where: δMI represents the threshold of MI, δSL represents the threshold of the significance level, θ represents the minimum number of samples in a leaf node, T represents the number of decision trees in each layer of the model, α is the learning rate in the gradient boosting process, and K represents the number of layers. The globally optimized selection of these learning parameters can improve the synergy between different modules, thereby improving the overall performance of the model. Therefore, the proposed modeling strategy can be formulated as solving the following optimization problem:










$$
\min\,\mathrm{RMSE}\big(F^{\mathrm{SDFR\text{-}ref}}(\cdot)\big)=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\Bigg(\bigg(\frac{1}{N}\sum_{n=1}^{N}y_{n}+\alpha\frac{1}{T}\sum_{k=1}^{K}\sum_{t=1}^{T}\big[c_{1,l}^{\mathrm{CART}},\ldots,c_{T,l}^{\mathrm{CART}}\big]\,I_{\mathbb{R}^{M\times N}}\big(X^{\mathrm{Sel}}\big)\bigg)-y_{n}\Bigg)^{2}}
\tag{1}
$$

$$
\text{s.t.}\quad
\begin{cases}
X^{\mathrm{Sel}}=f_{\mathrm{FeaSel}}\big(D,\delta_{\mathrm{MI}},\delta_{\mathrm{SL}}\big)\\
0<\alpha\le 2\\
1\le T\le 500\\
1\le \theta\le N\\
1\le K\le 20\\
0\le \delta_{\mathrm{MI}}\le 1\\
0\le \delta_{\mathrm{SL}}\le 1
\end{cases}
$$









Among them, FSDFR-ref(⋅) represents the SDFR-ref model; fFeaSel(⋅) represents the nonlinear feature selection algorithm proposed herein; N represents the number of modeling samples; yn represents the n-th true value; c1,lCART represents the predicted value of the l-th leaf node of the first CART, and cT,lCART represents the predicted value of the l-th leaf node of the T-th CART; D={X, y|X∈RN×M, y∈RN×1} represents the original modeling data, which is also the input of the feature selection algorithm, and M is the number of original features; IRM×N(XSel) is the indicator function: IRM×N(XSel)=1 when XSel∈RM×N, and IRM×N(XSel)=0 when XSel∉RM×N.


4.1 Feature Selection Based on MI and ST

MI and ST are used to calculate the information correlation between the original features (process variables) and the DXN values, and to achieve the best selection of features through preset thresholds.


For the input data set, the nonlinear feature selection algorithm fFeaSel(⋅) proposed in the invention is defined as follows:










$$
D^{\mathrm{Sel}}=f_{\mathrm{FeaSel}}\big(D,\ \delta_{\mathrm{MI}},\ \delta_{\mathrm{SL}}\big)
\tag{2}
$$








Among them, DSel={XSel, y|XSel∈RN×MSel, y∈RN×1} represents the output of the proposed feature selection algorithm, and MSel is the number of selected features.


In fact, MI does not need to assume the potential joint distribution of the data. MI provides an information quantification measure of the degree of statistical dependence between random variables, and estimates the degree of interdependence between two random variables to express shared information. The calculation process is as follows:











$$
I_{i}^{\mathrm{MI}}(x_{i},y)=\sum_{x_{n,i}}\sum_{y_{n}}p\big(x_{n,i},y_{n}\big)\,\log_{2}\frac{p\big(x_{n,i},y_{n}\big)}{p\big(x_{n,i}\big)\,p\big(y_{n}\big)}
\tag{3}
$$








Among them, xi is the i-th feature vector of X, and xn,i is the n-th value of the i-th feature vector; p(xn,i, yn) represents the joint probability density; p(xn,i) and p(yn) represent the marginal probability densities of xn,i and yn.


If the MI value of a feature is greater than the threshold δMI, it is regarded as an important feature constituting the preliminary feature set XMI. Furthermore, ST is used to analyze the correlation between the selected features based on MI and remove collinear features.
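For illustration only, this screening step might be sketched as follows in Python; the helper name select_by_mi, the use of scikit-learn's mutual_info_regression as the MI estimator, and the normalization of MI values to [0, 1] (so that a threshold δMI∈[0, 1] applies) are assumptions, not the invention's exact implementation.

```python
# Minimal sketch of MI-based feature screening (assumed implementation).
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_by_mi(X, y, delta_mi=0.75):
    """Keep features whose MI with y exceeds the threshold delta_mi."""
    mi = mutual_info_regression(X, y)        # MI estimate per feature
    if mi.max() > 0:                         # scale to [0, 1] so a threshold
        mi = mi / mi.max()                   # in (0, 1) is meaningful (assumption)
    keep = np.flatnonzero(mi > delta_mi)     # indices of retained features
    return X[:, keep], keep
```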


The Pearson coefficient value PCoe between the selected features xiMI and xjMI is calculated as follows:









$$
P^{\mathrm{Coe}}=\frac{\displaystyle\sum_{n=1}^{N}\big(x_{n,i}^{\mathrm{MI}}-\bar{x}_{i}^{\mathrm{MI}}\big)\big(x_{n,j}^{\mathrm{MI}}-\bar{x}_{j}^{\mathrm{MI}}\big)}{\Bigg\{\displaystyle\sum_{n=1}^{N}\big(x_{n,i}^{\mathrm{MI}}-\bar{x}_{i}^{\mathrm{MI}}\big)^{2}\sum_{n=1}^{N}\big(x_{n,j}^{\mathrm{MI}}-\bar{x}_{j}^{\mathrm{MI}}\big)^{2}\Bigg\}^{1/2}}
\tag{4}
$$








Among them, x̄iMI and x̄jMI represent the average values of xiMI and xjMI respectively, and xn,iMI and xn,jMI represent the n-th values of xiMI and xjMI. A z-test is used to calculate the ztest value between features xiMI and xjMI:










$$
z^{\mathrm{test}}=\frac{\bar{x}_{i}^{\mathrm{MI}}-\bar{x}_{j}^{\mathrm{MI}}}{\sqrt{S_{i}^{2}/N_{i}+S_{j}^{2}/N_{j}}}
\tag{5}
$$








Among them, Si and Sj represent the standard deviation of xiMI and xjMI; Ni and Nj represent the number of samples of xiMI and xjMI.


Furthermore, the p-value is obtained by looking up the ztest value in the standard normal table. At this point, the null hypothesis H0 is that there is no linear relationship between the i-th and j-th features, and the Pearson coefficient PCoe is regarded as the alternative hypothesis H1. Based on the comparison of the p-value and the significance level δSL, the final selected XSel comprising the preferred features is determined. The criteria are expressed as follows:









$$
\begin{cases}
\text{Accept } H_{1}\ \text{(linearly dependent)},\ \text{reject } H_{0}\ \text{(linearly independent)}, & p\text{-value}<\delta_{\mathrm{SL}}\\
\text{Accept } H_{0}\ \text{(linearly independent)},\ \text{reject } H_{1}\ \text{(linearly dependent)}, & p\text{-value}>\delta_{\mathrm{SL}}
\end{cases}
\tag{6}
$$







Based on the above assumptions, the collinear features selected by MI are removed, thereby reducing the impact of data noise on the training model.
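A minimal sketch of this pruning step follows, assuming a two-sided p-value from the standard normal distribution and a greedy keep-first policy for each dependent pair (both are assumptions; the invention does not fix these details here):

```python
# Sketch of the significance-test pruning of Eqs. (5)-(6): feature pairs
# judged linearly dependent (p-value < delta_sl) are reduced to one feature.
import numpy as np
from scipy import stats

def remove_collinear(X_mi, delta_sl=0.1):
    n_feat = X_mi.shape[1]
    drop = set()
    for i in range(n_feat):
        for j in range(i + 1, n_feat):
            if i in drop or j in drop:
                continue
            xi, xj = X_mi[:, i], X_mi[:, j]
            z = (xi.mean() - xj.mean()) / np.sqrt(
                xi.var(ddof=1) / xi.size + xj.var(ddof=1) / xj.size)  # Eq. (5)
            p = 2 * stats.norm.sf(abs(z))    # two-sided p-value ("table lookup")
            if p < delta_sl:                 # accept H1: linearly dependent
                drop.add(j)                  # keep xi, drop xj (greedy choice)
    keep = [k for k in range(n_feat) if k not in drop]
    return X_mi[:, keep], keep
```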


4.2 SDFR (SDFR-Ref) Based on Residual Fitting Mechanism
4.2.1 First Layer Implementation

The training set after feature selection is recorded as DSel. The SDFR algorithm replaces the forest algorithm in the original DFR with a decision tree, that is, CART. Each layer contains multiple decision trees, and the tree nodes are divided using the squared error minimization criterion. The minimum loss function of this process is expressed as follows:










$$
\mathrm{Split}^{\mathrm{CART}}=\min\Bigg[\sum_{x_{i}^{\mathrm{Sel}}\in R_{\mathrm{Left}}}\big(y_{\mathrm{Left}}-c_{\mathrm{Left}}^{\mathrm{CART}}\big)^{2}+\sum_{x_{i}^{\mathrm{Sel}}\in R_{\mathrm{Right}}}\big(y_{\mathrm{Right}}-c_{\mathrm{Right}}^{\mathrm{CART}}\big)^{2}\Bigg]
\tag{7}
$$







Among them, cLeftCART and cRightCART are the outputs of RLeft and RRight nodes respectively; yLeft and yRight represent the true values in RLeft and RRight nodes respectively.


Specifically, the nodes are determined in the following way:









$$
\begin{cases}
R_{\mathrm{Left}}(j,s)=\{x^{\mathrm{Sel}}\mid x_{j}^{\mathrm{Sel}}\le s\}\\
R_{\mathrm{Right}}(j,s)=\{x^{\mathrm{Sel}}\mid x_{j}^{\mathrm{Sel}}>s\}
\end{cases}
\tag{8}
$$







Among them, j and s represent the segmentation feature and segmentation value respectively; xjSel is the j-th feature value of the selected feature vector xSel. Therefore, CART can be expressed as:











$$
h_{1}^{\mathrm{CART}}\big(x^{\mathrm{Sel}}\big)=\sum_{l=1}^{L}c_{l}^{\mathrm{CART}}\,I_{R_{l}^{\mathrm{CART}}}\big(x^{\mathrm{Sel}}\big)
\tag{9}
$$







Among them, L represents the number of CART leaf nodes, clCART represents the output of the l-th leaf node of CART, and IRlCART(xSel) is the indicator function: IRlCART(xSel)=1 when xSel∈RlCART, and IRlCART(xSel)=0 when xSel∉RlCART.
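To make the split criterion concrete, a brute-force search for the best pair (j, s) of Eqs. (7)-(8) could look like the following sketch; a library CART performs this search internally, and the function name best_split is illustrative:

```python
# Exhaustive search for the squared-error-minimizing split of Eq. (7),
# with the left/right regions defined as in Eq. (8).
import numpy as np

def best_split(X, y):
    best_j, best_s, best_loss = None, None, np.inf
    for j in range(X.shape[1]):                      # candidate feature j
        for s in np.unique(X[:, j]):                 # candidate threshold s
            mask = X[:, j] <= s                      # R_Left(j, s)
            y_left, y_right = y[mask], y[~mask]      # R_Right(j, s)
            if y_left.size == 0 or y_right.size == 0:
                continue
            loss = ((y_left - y_left.mean()) ** 2).sum() \
                 + ((y_right - y_right.mean()) ** 2).sum()   # Eq. (7)
            if loss < best_loss:
                best_j, best_s, best_loss = j, s, loss
    return best_j, best_s
```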


The first-level model containing multiple CARTs is represented as follows:











$$
f_{1}^{\mathrm{SDFR}}\big(x^{\mathrm{Sel}}\big)=\frac{1}{T}\sum_{t=1}^{T}h_{1,t}^{\mathrm{CART}}(\cdot)
\tag{10}
$$







Among them, f1SDFR(⋅) represents the first-layer model in SDFR, T represents the number of CARTs in each layer, and h1,tCART(⋅) represents the t-th CART model in layer 1.
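A layer of the kind described by Eqs. (10)-(11) can be sketched with scikit-learn's DecisionTreeRegressor standing in for CART; mapping min_samples_leaf to θ and max_features to the random feature selection is an assumption for illustration:

```python
# Sketch of one SDFR layer: T CARTs trained on the same data; their
# per-sample predictions form the regression vector of Eq. (11), and
# their average is the layer output of Eq. (10).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_layer(X, y, T=500, theta=3, n_rand_feat=11, seed=0):
    rng = np.random.RandomState(seed)
    return [DecisionTreeRegressor(min_samples_leaf=theta,      # theta
                                  max_features=n_rand_feat,    # random feature choice
                                  random_state=rng.randint(2**31 - 1)).fit(X, y)
            for _ in range(T)]

def layer_regvec(trees, X):
    # Column t holds h_{1,t}(x); the row-wise mean gives f_1^SDFR(x) of Eq. (10).
    return np.column_stack([tree.predict(X) for tree in trees])
```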


Furthermore, the first-layer regression vector ŷ1Regvec from the first-layer model f1SDFR(⋅) is expressed as follows:











$$
\hat{y}_{1}^{\mathrm{Regvec}}=\big[h_{1,1}^{\mathrm{CART}}(\cdot),\ldots,h_{1,T}^{\mathrm{CART}}(\cdot)\big]=\big[c_{1,l}^{\mathrm{CART}},\ldots,c_{T,l}^{\mathrm{CART}}\big]
\tag{11}
$$







Among them, c1,lCART represents the predicted value of the l-th leaf node of the first CART, and cT,lCART represents the predicted value of the l-th leaf node of the T-th CART.


The augmented regression vector v1Augfea is obtained by merging the first-layer regression vector ŷ1Regvec with the selected features, and is expressed as follows:










$$
v_{1}^{\mathrm{Augfea}}=f_{\mathrm{FeaCom}}^{1}\big(\hat{y}_{1}^{\mathrm{Regvec}},\ x^{\mathrm{Sel}}\big)
\tag{12}
$$







Among them, fFeaCom1(⋅) represents the feature vector combination function.
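As an illustration, the combination function of Eq. (12) might simply concatenate the regression vector with the selected features, a common choice in deep forest variants (the concatenation itself is an assumption about fFeaCom):

```python
# Sketch of f_FeaCom: concatenate the layer regression vector with the
# selected original features to form the next layer's input (Eq. (12)).
import numpy as np

def fea_com(regvec, X_sel):
    return np.hstack([regvec, X_sel])   # v^Augfea = [y_hat^Regvec, x^Sel]
```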


v1Augfea is then used as the feature input for the next layer. In the invention, the DXN true value is no longer used in subsequent cascade modules, but the new true value is recalculated through the gradient boosting strategy. Therefore, the invention uses the following formula to calculate the loss function of the squared error:











$$
L_{1}^{\mathrm{SDFR}}\big(y_{n}^{(1)},\ f_{1}^{\mathrm{SDFR}}(\cdot)_{n}\big)=\frac{1}{2}\sum_{n=1}^{N}\big(y_{n}^{(1)}-f_{1}^{\mathrm{SDFR}}(\cdot)_{n}\big)^{2}
\tag{13}
$$







Among them, L1SDFR(⋅) represents the squared error loss function in SDFR-ref; yn(1) represents the n-th true value of the first layer training set.


The loss function L1SDFR is further used to calculate the gradient direction as shown below.










$$
\sigma_{1,n}^{\mathrm{SDFR}}=-\left[\frac{\partial L\big(y_{n}^{(1)},\ f_{1}^{\mathrm{SDFR}}(\cdot)\big)}{\partial f_{1}^{\mathrm{SDFR}}(\cdot)}\right]_{f_{1}^{\mathrm{SDFR}}(\cdot)=f_{0}^{\mathrm{SDFR}}(\cdot)}
\tag{14}
$$







Among them, σ1,nSDFR is the gradient of the nth true value of layer 1; f0SDFR(⋅) represents the arithmetic mean of the initial true value, that is









$$
f_{0}^{\mathrm{SDFR}}(\cdot)=\frac{1}{N}\sum_{n=1}^{N}y_{n},
$$




yn represents the n-th true value.


Then, the objective function is:











$$
f_{1}^{\mathrm{SDFR}}\big(x^{\mathrm{Sel}}\big)=f_{0}^{\mathrm{SDFR}}(\cdot)+\alpha\sum_{t=1}^{T}\big[c_{1,l}^{\mathrm{CART}},\ldots,c_{T,l}^{\mathrm{CART}}\big]\,I_{R}\big(x^{\mathrm{Sel}}\big)
\tag{15}
$$







Among them, f1SDFR(⋅) is the first-layer model; α represents the learning rate; IR(xSel) is the indicator function: IR(xSel)=1 when xSel∈R, and IR(xSel)=0 when xSel∉R.


Therefore, the true value of the second level is:










$$
y_{2}=y-f_{0}^{\mathrm{SDFR}}(\cdot)-\alpha f_{1}^{\mathrm{SDFR}}(\cdot)=y_{1}-\alpha f_{1}^{\mathrm{SDFR}}(\cdot)=y_{1}-\alpha\,\bar{\hat{y}}_{1}^{\mathrm{Regvec}}
\tag{16}
$$









Among them, y1 is the true value of the first-layer model, that is, y1=y, where y is the true value vector of DXN; $\bar{\hat{y}}_1^{\mathrm{Regvec}}$ represents the mean value of the first-layer regression vector.
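Written recursively, Eq. (16) (and later Eq. (19)) amounts to subtracting the α-scaled mean of the current layer's regression vector from the current targets. A minimal sketch, assuming the regvec layout from the earlier sketches:

```python
# Sketch of the residual-fitting target update: y_{k+1} = y_k - alpha * mean
# of the k-th layer regression vector (Eqs. (16) and (19) written recursively).
import numpy as np

def next_targets(y_current, regvec, alpha=0.1):
    y_bar = regvec.mean(axis=1)        # per-sample mean over the T trees
    return y_current - alpha * y_bar   # residuals become the next layer's targets
```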


4.2.2 k-th Layer Implementation


The training set of the k-th layer based on the augmented regression vector of the (k−1)-th layer is expressed as DkSel={{v(k−1),nAugfea}n=1N, yk}, v(k−1)Augfea is the augmented regression vector of the (k−1)-th layer, and yk is the k-th true value.


First, establish the k-th level decision tree hkCART(⋅) according to formulas (7) and (8). The k-th level model is expressed as follows:











$$
f_{k}^{\mathrm{SDFR}}\big(v_{(k-1),i}^{\mathrm{Augfea}}\big)=\frac{1}{T}\sum_{t=1}^{T}h_{k,t}^{\mathrm{CART}}(\cdot)
\tag{17}
$$







Among them, fkSDFR(⋅) represents the k-th layer model, and hk,tCART(⋅) represents the t-th CART model of the k-th layer.


Then, the augmented regression vector vkAugfea of the k-th layer is expressed as follows:










$$
v_{k}^{\mathrm{Augfea}}=f_{\mathrm{FeaCom}}^{k}\big(\hat{y}_{k}^{\mathrm{Regvec}},\ x^{\mathrm{Sel}}\big)
\tag{18}
$$







Among them, ŷkRegvec represents the regression vector of the k-th layer, that is, ŷkRegvec=[hk,1CART(⋅), . . . , hk,TCART(⋅)].


Then, calculate the gradient σkSDFR according to formulas (13) and (14). The true value of the (k+1)-th layer is expressed as follows:










$$
y_{k+1}=y_{1}-\alpha\big(\bar{\hat{y}}_{1}^{\mathrm{Regvec}}+\cdots+\bar{\hat{y}}_{k}^{\mathrm{Regvec}}\big)
\tag{19}
$$







4.2.3 K-th Layer Implementation

The K-th layer is the last layer of the SDFR-ref training process, that is, the preset maximum number of layers, and its training set is DKSel={{v(K−1),nAugfea}n=1N,yK}.


First, build a decision tree model hKCART(⋅) through the training set DKSel and further obtain the K-th layer model fKSDFR(⋅). Then, calculate the K-th layer regression vector ŷKRegvec according to the input augmented regression vector v(K-1)Augfea, which is expressed as follows:











$$
\hat{y}_{K}^{\mathrm{Regvec}}=\big[h_{K,1}^{\mathrm{CART}}(\cdot),\ldots,h_{K,T}^{\mathrm{CART}}(\cdot)\big]
\tag{20}
$$







Among them, hK,1CART(⋅) represents the first CART model of the K-th layer, and hK,TCART(⋅) represents the T-th CART model of the K-th layer.


Finally, the output value after gradient boosting with learning rate α is:










$$
y_{K}=y_{1}-\alpha\sum_{k=1}^{K-1}\bar{\hat{y}}_{k}^{\mathrm{Regvec}}
\tag{21}
$$







Among them, $\bar{\hat{y}}_k^{\mathrm{Regvec}}$ represents the mean value of the k-th layer regression vector.


4.2.4 Prediction Output Implementation

After multiple layers are superimposed, each layer is used to reduce the residual of the previous layer. Finally, the SDFR-ref model can be expressed as:











$$
F^{\mathrm{SDFR\text{-}ref}}\big(x^{\mathrm{Sel}}\big)=\sum_{k=1}^{K}f_{k}^{\mathrm{SDFR}}(\cdot)=\alpha\sum_{k=1}^{K}\sum_{t=1}^{T}\big[c_{1,l}^{\mathrm{CART}},\ldots,c_{T,l}^{\mathrm{CART}}\big]\,I_{R}\big(x^{\mathrm{Sel}}\big)
\tag{22}
$$







Among them, IR(xSel) means IR(xSel)=1 when xSel∈R, and IR(xSel)=0 when xSel∉R.


Since FSDFR-ref(⋅) is calculated based on addition, the final predicted value cannot be simply averaged. Therefore, it is necessary to first calculate the mean value of the regression vector of each layer. Taking layer 1 as an example, it is as follows:














$$
\bar{\hat{y}}_{1}^{\mathrm{add}}=\frac{1}{T}\sum_{t=1}^{T}\hat{y}_{1}^{\mathrm{Regvec}}=\frac{1}{T}\sum_{t=1}^{T}\big[h_{1}^{\mathrm{CART}}(\cdot),\ldots,h_{T}^{\mathrm{CART}}(\cdot)\big]=\frac{1}{T}\sum_{t=1}^{T}\big[c_{1,l}^{\mathrm{CART}},\ldots,c_{T,l}^{\mathrm{CART}}\big]
\tag{23}
$$







Add K predicted values to get the final predicted value, as shown below:










$$
\hat{y}=\frac{1}{N}\sum_{n=1}^{N}y_{n}+\alpha\frac{1}{T}\sum_{k=1}^{K}\sum_{t=1}^{T}\big[c_{1,l}^{\mathrm{CART}},\ldots,c_{T,l}^{\mathrm{CART}}\big]\,I_{\mathbb{R}^{M\times N}}\big(X^{\mathrm{Sel}}\big)
\tag{24}
$$







Among them, ŷ is the predicted value of the SDFR-ref model; IRM×N(XSel) is the indicator function: IRM×N(XSel)=1 when XSel∈RM×N, and IRM×N(XSel)=0 when XSel∉RM×N.
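Putting the pieces together, prediction per Eq. (24) starts from the mean of the training targets and accumulates the α-scaled per-layer averages, feeding each layer the augmented input of Eq. (18). A minimal sketch, reusing the assumed helpers above (the function and variable names are illustrative):

```python
# Sketch of SDFR-ref prediction (Eq. (24)): global target mean plus the
# learning-rate-weighted sum of per-layer averaged tree outputs.
import numpy as np

def predict_sdfr_ref(layers, X_sel, y_train_mean, alpha=0.1):
    y_hat = np.full(X_sel.shape[0], y_train_mean)     # f_0: mean of true values
    x_in = X_sel
    for trees in layers:                              # cascade layers 1..K
        regvec = np.column_stack([t.predict(x_in) for t in trees])
        y_hat += alpha * regvec.mean(axis=1)          # add alpha * mean regvec
        x_in = np.hstack([regvec, X_sel])             # augmented input (Eq. (18))
    return y_hat
```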





DESCRIPTION OF DRAWINGS


FIG. 1 is the typical process flow of MSWI based on grate furnace;



FIG. 2 is the modeling strategy proposed in the invention.





EMBODIMENTS

This embodiment uses a real DXN data set to verify the effectiveness of the proposed method. The DXN data comes from the actual MSWI process of an incineration plant in Beijing in the past 12 years, including 141 samples and 116 process variables. The process variables cover the four stages of MSWI, namely solid waste incineration, waste heat boiler, flue gas purification and flue gas emission, and Table 1 shows the detailed information.









TABLE 1
Types of the procedure variables at each MSWI stage

procedure        solid waste    waste heat    flue gas     flue gas
variable         incineration   boiler        treatment    emission
Temperature      42             5             6            /
Velocity         18             /             /            /
Flux             15             5             6            /
Pressure         2              7             /            /
Liquid level     /              1             /            /
Concentration    /              /             1            8
Total            77             18            13           8        (overall: 116)









The sample sizes of the training, validation and test sets are respectively ½, ¼ and ¼ of the original sample data.
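For illustration, such a split might be implemented as below; the random permutation and the function name split_dataset are assumptions, since the invention does not specify the partitioning scheme:

```python
# Sketch of the 1/2-1/4-1/4 train/validation/test split of the embodiment.
import numpy as np

def split_dataset(X, y, seed=0):
    idx = np.random.RandomState(seed).permutation(len(y))
    n_tr, n_va = len(y) // 2, len(y) // 4
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```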









TABLE 2
Abbreviations of procedure variables

Stage                      procedure variable                                                              unit      abbreviation
solid waste incineration   combustion temperature 1                                                        ° C.      T1
                           combustion temperature 2                                                        ° C.      T2
                           combustion temperature 3                                                        ° C.      T3
                           maximum temperature at which a grate burns                                      ° C.      T4
                           temperature of the dry grate left inlet                                         ° C.      T5
                           temperature of the dry grate right inlet                                        ° C.      T6
                           temperature in the left side of the drying and burning sections of inner        ° C.      T7
                           grate wall
                           temperature in the left side of the drying and burning sections of outer        ° C.      T8
                           grate wall
                           temperature in the right side of the drying and burning sections of inner       ° C.      T9
                           grate wall
                           temperature in the right side of the drying and burning sections of outer       ° C.      T10
                           grate wall
                           left inner temperature of combustion grate 1-1                                  ° C.      T11
                           left outer temperature of combustion grate 1-1                                  ° C.      T12
                           right inner temperature of combustion grate 1-1                                 ° C.      T13
                           right outer temperature of combustion grate 1-1                                 ° C.      T14
                           left inner temperature of combustion grate 1-2                                  ° C.      T15
                           left outer temperature of combustion grate 1-2                                  ° C.      T16
                           left inner temperature of combustion grate 2-1                                  ° C.      T17
                           left outer temperature of combustion grate 2-1                                  ° C.      T18
                           outlet air temperature of primary air preheater                                 ° C.      T19
                           air temperature of the combustion grate inlet                                   ° C.      T20
                           temperature of cooling air outlet                                               ° C.      T21
flue gas purification      temperature of fluidization fan outlet                                          ° C.      T22
solid waste incineration   air flux of the left combustion grate                                           km3N/h    LAF1
waste heat boiler          cooling water flux of the secondary superheater                                 t/h       CWF1
flue gas treatment         supply flux of urea solvent                                                     L/h       FUS1
                           bag pressure difference                                                         kPa       BP1
flue gas purification      O2 concentration of CEMS system                                                 %         OC1
                           dust concentration of CEMS system                                               mg/m3N    DC1
                           HCL concentration of CEMS system                                                mg/m3N    HC1
                           CO2 concentration of CEMS system                                                %         CC1









First, the MI value between the 116 process variables and the DXN emission concentration is calculated. The invention sets the MI threshold δMI=0.75 to ensure that the amount of information between the selected process variables and the DXN emission is as large as possible; the initial number of selected features is 30. Further, the significance level is set to δSL=0.1, and the finally selected process variables are T2, T4, T5, T6, T7, T9, T10, T16, T20, T21, LAF1, FUS1, DC1 and CC1, 14 in total. The linear correlation between the selected process variables is weak, which demonstrates the effectiveness of the method used.


In this embodiment, the hyperparameters of SDFR-ref are empirically set as follows: the minimum number of samples is 3, the number of random feature selections is 11, the number of CARTs is 500, the number of layers is 500, and the learning rate is 0.1. RF, BP neural network (BPNN), XGBoost, DFR, DFR-clfc and ImDFR modeling methods are used for experimental comparison. The parameter settings are as follows: 1) RF: the minimum number of samples is 3, the number of CARTs is 500, and the number of random feature selections is 11; 2) BPNN: the number of hidden layer neurons is 30, the convergence error is 0.01, the number of training iterations is 1500, and the learning rate is 0.1; 3) XGBoost: the minimum number of samples is 3, the number of boosting rounds is 10, the regularization coefficient is 1.2, and the learning rate is 0.8; 4) DFR and DFR-clfc: the minimum number of samples is 3, the number of CARTs is 500, the number of random feature selections is 11, and the numbers of RFs and CRFs are each 2.


The performance of the modeling method is evaluated using RMSE and R2, which are defined as follows:









$$
\mathrm{RMSE}=\sqrt{\sum_{n=1}^{N}\big(y_{n}-\hat{y}_{n}\big)^{2}\Big/(N-1)}
\tag{25}
$$

$$
R^{2}=1-\sum_{n=1}^{N}\big(y_{n}-\hat{y}_{n}\big)^{2}\Bigg/\sum_{n=1}^{N}\big(y_{n}-\bar{y}\big)^{2}
\tag{26}
$$







Among them, yn represents the n-th true value, ŷn represents the n-th predicted value, ȳ represents the average output value, and N represents the number of samples.
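These two metrics translate directly into code; note that this RMSE variant divides by (N−1) rather than N, as in Eq. (25):

```python
# RMSE and R2 exactly as defined in Eqs. (25)-(26).
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(((y - y_hat) ** 2).sum() / (y.size - 1))

def r2(y, y_hat):
    return 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```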


On this basis, 30 repeated experiments were conducted on seven methods, and Table 3 shows the statistical results. Table 4 gives the statistical results of training time.









TABLE 3
Statistical results (RMSE and R2)

                          RMSE                                        R2
Method     set          mean         variance     optimum      mean          variance     optimum
RF         training     1.0993E−02   2.5498E−08   1.0704E−02   8.5783E−01    1.7106E−05   8.6522E−01
           validation   1.9794E−02   3.9919E−08   1.9471E−02   5.1479E−01    9.6301E−05   5.3056E−01
           test         1.6775E−02   6.1264E−08   1.6349E−02   5.9723E−01    1.4143E−04   6.1750E−01
BPNN       training     3.0495E−03   6.5539E−07   2.8748E−03   9.8832E−01    9.5015E−05   9.9028E−01
           validation   3.2603E−02   2.4818E−04   2.1896E−02   −6.1325E−01   4.0544E+00   4.0635E−01
           test         3.1648E−02   2.1475E−04   1.8531E−02   −7.3037E−01   3.3001E+00   5.0856E−01
XGBoost    training     1.0125E−02   0.0000E+00   1.0125E−02   8.7942E−01    3.1877E−31   8.7942E−01
           validation   2.5207E−02   1.2452E−35   2.5207E−02   2.1325E−01    1.9923E−32   2.1325E−01
           test         1.9748E−02   1.2452E−35   1.9748E−02   4.4189E−01    5.1004E−32   4.4189E−01
DFR        training     1.1508E−02   7.8541E−09   1.1347E−02   8.4422E−01    5.7639E−06   8.4855E−01
           validation   2.0654E−02   1.0405E−08   2.0463E−02   4.7175E−01    2.7248E−05   4.8151E−01
           test         1.7762E−02   1.6786E−08   1.7558E−02   5.4852E−01    4.3515E−05   5.5883E−01
DFR-clfc   training     7.9183E−03   1.7761E−06   5.5822E−03   9.2423E−01    6.7227E−04   9.6335E−01
           validation   2.0084E−02   1.4533E−07   1.9410E−02   5.0034E−01    3.6156E−04   5.3348E−01
           test         1.6968E−02   9.9144E−08   1.6430E−02   5.8785E−01    2.3681E−04   6.1370E−01
ImDFR      training     7.7000E−03   /            /            9.2420E−01    /            /
           validation   2.3700E−02   /            /            1.3120E−01    /            /
           test         1.7900E−02   /            /            6.6360E−01    /            /
SDFR-ref   training     6.6200E−04   4.7281E−09   5.2456E−04   9.9950E−01    1.2323E−08   9.9970E−01
           validation   2.1700E−02   6.9600E−07   2.0200E−02   4.1450E−01    2.1000E−03   4.9700E−01
           test         1.4500E−02   6.5875E−07   1.3100E−02   6.9780E−01    1.2000E−03   7.5300E−01
















TABLE 4
Statistical results of training time

Method      mean value    variance      optimum value
RF          5.4138E+01    6.2333E−01    5.3153E+01
XGBoost     9.7248E+01    3.5522E−01    9.6595E+01
DFR         4.8513E+02    2.2753E+04    2.3745E+02
DFR-clfc    8.2871E+02    1.0154E+05    3.4013E+02
SDFR-ref    3.7039E+01    1.5538E+00    3.4474E+01










It can be seen from Table 3 that: 1) on the training set, the proposed SDFR-ref achieves the optimal mean values (6.6200E−04 and 9.9950E−01) and optimum values (5.2456E−04 and 9.9970E−01) of RMSE and R2; since no randomness is introduced, the variance statistics of XGBoost are almost 0; 2) on the validation set, SDFR-ref has no obvious advantage, and its performance is only better than those of BPNN, XGBoost and ImDFR, while the generalization performance of RF, DFR and DFR-clfc is almost the same; 3) on the test set, SDFR-ref has the best measurement accuracy (RMSE of 1.4500E−02) and fitting performance (R2 of 6.9780E−01).


To sum up, SDFR-ref has more powerful learning capability than the classic learning methods (RF, BPNN and XGBoost). In addition, compared with the deep ensemble methods (DFR, DFR-clfc and ImDFR), SDFR-ref further enhances the model through the simplified forest algorithm. The performance of SDFR-ref on the test set also shows that its generalization ability is stronger than that of the other methods. Therefore, the proposed method is effective for DXN prediction in MSWI processes.


Table 4 shows that, compared with the other decision-tree-based methods, the method proposed in the invention has a clear advantage in average training time.


The invention proposes a method based on SDFR-ref to predict the DXN emission concentration in the grate-furnace-based MSWI process. The main contributions are as follows: 1) the feature selection module based on mutual information and the significance test effectively reduces the computational complexity and improves the prediction performance; 2) the decision tree is used instead of the forest algorithm in the deep ensemble structure, which yields better training speed and learning ability than DFR and DFR-clfc; 3) owing to the introduction of residual fitting, the prediction accuracy of SDFR-ref is further improved. Experimental results show that, compared with traditional ensemble learning and deep ensemble learning, SDFR-ref has better modeling accuracy and generalization performance, and its training cost is lower than that of state-of-the-art ensemble models. Therefore, SDFR-ref is easier to apply in practice.

Claims
  • 1. A soft measurement method for dioxin emission of grate furnace MSWI process based on simplified deep forest regression of residual fitting mechanism, comprising: a feature selection module based on mutual information (MI) and significance test (ST) and a simplified deep forest regression (SDFR) module based on the residual fitting mechanism; wherein the feature selection module selects corresponding features by calculating the MI value and ST value of each feature; for the SDFR module, Layer-k represents the k-th layer model, ŷ1Regvec represents the output vector of the first-layer model, v1Augfea represents the augmented regression vector input to the second layer, $\bar{\hat{y}}_k^{\mathrm{Regvec}}$ represents the average value of ŷkRegvec, and α is the residual learning rate between layers; X and XSel respectively represent the process data before and after feature selection; y, ŷ and e are the true value, predicted value and prediction error respectively; in addition, {δMI, δSL, θ, T, α, K} represents the learning parameter set of the proposed SDFR-ref, where: δMI represents the threshold of MI, δSL represents the threshold of the significance level, θ represents the minimum number of samples in a leaf node, T represents the number of decision trees in each layer of the model, α is the learning rate in the gradient boosting process, and K represents the number of layers; the globally optimized selection of these learning parameters being capable of improving the synergy between different modules, thereby improving the overall performance of the model; wherein the proposed modeling strategy is formulated as solving the optimization problem of formula (1) above.
Priority Claims (1)
Number Date Country Kind
202210218420.1 Mar 2022 CN national
PCT Information
Filing Document Filing Date Country Kind
PCT/CN2023/090771 4/26/2023 WO