This application claims priority to Chinese Patent Application No. 201510999765.5, filed on Dec. 27, 2015, entitled “An Intelligent detection method for Biochemical Oxygen Demand based on a Self-organizing Recurrent RBF Neural Network,” which is hereby incorporated by reference in its entirety.
The present disclosure relates to the field of intelligent control, and more particularly to methods and systems for intelligent detection of biochemical oxygen demand (BOD) values in an urban wastewater treatment process (WWTP).
Urban WWTP must not only guarantee the reliability and stability of wastewater treatment systems but also ensure that effluent quality meets national discharge standards. However, BOD in WWTP is affected by many factors, and the relationships among these influencing factors are complex. Therefore, it is difficult to detect BOD in real time, which seriously affects the stable operation of urban WWTP. An intelligent detection method for BOD, based on a self-organizing recurrent RBF neural network, helps improve efficiency, strengthen fine-grained management, and ensure that the effluent quality of urban WWTP meets the standards. The method has great economic benefits as well as significant environmental and social benefits.
Urban WWTP brings water quality up to national discharge standards, which are mainly defined in terms of parameters such as BOD, chemical oxygen demand (COD), effluent suspended solids (SS), ammonia nitrogen (NH3-N), total nitrogen (TN) and total phosphorus (TP). BOD refers to the amount of oxygen needed to decompose the organic matter in a sample within a given time. At present, BOD in wastewater is mainly detected using dilution inoculation methods and microbial sensor determination methods. However, the detection cycle is generally 5 days, which cannot reflect the actual situation of WWTP in real time; thus it is difficult to perform closed-loop control of WWTP. Moreover, detecting BOD values is a big challenge due to the large amount and varied composition of pollutants in wastewater. New hardware measuring instruments could be developed to directly determine various variables of WWTP and solve the detection problems posed by complex organic matter in wastewater. However, research and development of new sensors is costly and time-consuming. Hence, new methods for real-time measurement of BOD values in WWTP have become an important topic in both academic and practical fields.
In this disclosure, an intelligent detection method for BOD is presented by building a computing model based on a self-organizing recurrent RBF neural network. The neural network uses the activity degree and independent contribution of the hidden neurons to determine whether to add or delete hidden neurons, and uses a fast gradient descent algorithm to ensure the accuracy of the self-organizing recurrent RBF neural network. The intelligent detection method can achieve real-time detection of BOD, reduce measurement costs for wastewater treatment plants, provide a fast and efficient approach to measurement, and improve the benefits of wastewater treatment plants.
Implementations of the present disclosure relate to an intelligent detection method for measuring the BOD concentration based on a self-organizing recurrent RBF neural network. For this intelligent detection method, the inputs are variables that are easily measured and the outputs are estimates of the BOD concentration. By constructing the self-organizing recurrent RBF neural network, the implementations obtain the mapping between the auxiliary variables and the BOD concentration. In addition, the implementations can obtain real-time measurements of the BOD concentration and solve the problem of the long measurement cycle for the BOD concentration.
This disclosure adopts the following technical schemes and implementations.
An intelligent detection method for the BOD concentration based on a self-organizing recurrent RBF neural network includes the following steps:
(1) Determining the input and output variables of BOD: For the activated sludge system of a WWTP, the wastewater treatment variables are analyzed and the input variables of the BOD computing model are selected: dissolved oxygen (DO) concentration, effluent suspended solids (SS) concentration, pH, and chemical oxygen demand (COD). The output value of the computing model is the detected BOD concentration.
(2) Computing model of the intelligent detection of BOD: establishing the computing model of BOD based on a self-organizing recurrent RBF neural network. The structure of the recurrent RBF neural network includes three layers: an input layer, a hidden layer, and an output layer. The network structure is 4-m-1, meaning the input layer has 4 neurons and the hidden layer has m neurons. The connection weights between the input layer and the hidden layer are fixed to 1, and the connection weights between the hidden layer and the output layer are randomly assigned values in the interval [−1, 1]. The number of training samples is N, and the input of the self-organizing recurrent RBF neural network is x(t)=[x1(t), x2(t), x3(t), x4(t)] at time t. The expected output of the neural network is expressed as yd(t) and the actual output as y(t). The computing method of BOD can be described as follows:
{circle around (1)} The Input Layer: There are 4 neurons representing the input variables in this layer. The output value of each neuron is:
ui(t)=xi(t), (Equation 1)
wherein ui(t) is the ith output value at time t, i=1, 2, . . . , 4, and the input vector is x(t)=[x1(t), x2(t), x3(t), x4(t)].
{circle around (2)} The Hidden Layer: There are m neurons in the hidden layer. The output of the jth hidden neuron is:
θj(t)=exp(−∥hj(t)−cj(t)∥²/(2σj²(t))), (Equation 2)
wherein cj(t) denotes the center vector of the jth hidden neuron, cj(t)=[c1j(t), c2j(t), . . . , c4j(t)]T at time t, ∥hj(t)−cj(t)∥ is the Euclidean distance between hj(t) and cj(t), σj(t) is the radius or width of the jth hidden neuron at time t, and hj(t) is the input vector of the jth hidden neuron at time t, described as:
hj(t)=[u1(t), u2(t), . . . , u4(t), vj(t)×y(t−1)]T, (Equation 3)
y(t−1) is the output value of the output layer at time t−1, vj(t) denotes the connection weight from output layer to the jth hidden neuron at time t, and v(t)=[v1(t), v2(t), . . . , vm(t)]T, T represents transpose;
{circle around (3)} The Output Layer: There is only one node in this layer, and its output is:
y(t)=wT(t)θ(t), (Equation 4)
wherein w(t)=[w1(t), w2(t), . . . , wm(t)]T is the vector of connection weights between the hidden neurons and the output neuron at time t, θ(t)=[θ1(t), θ2(t), . . . , θm(t)]T is the output vector of the hidden layer, and y(t) represents the output of the recurrent RBF neural network at time t.
The error of the self-organizing recurrent RBF neural network is:
E(t)=(yd(t)−y(t))²/2, (Equation 5)
wherein yd(t) is the expected output of the neural network and y(t) is the actual output;
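For illustration, the forward computation of Equations (1)-(4) can be expressed as a minimal Python sketch. This is a sketch under stated assumptions, not the patented implementation: the Gaussian hidden activation and the five-dimensional center vectors (the text writes cj(t) with four components, while hj(t) has five) are assumptions, and the sample input values are made up.

```python
import numpy as np

def rbf_forward(x, c, sigma, v, w, y_prev):
    """Forward pass of the 4-m-1 recurrent RBF network (Equations 1-4).
    Centers are given the same dimension as h_j (an assumption)."""
    # Equation 3: each hidden input stacks the 4 inputs with the fed-back output
    H = np.column_stack([np.tile(x, (len(v), 1)), v * y_prev])
    # Equation 2 (assumed Gaussian): hidden outputs from distance to centers
    theta = np.exp(-np.sum((H - c) ** 2, axis=1) / (2 * sigma ** 2))
    # Equation 4: single output neuron, weighted sum of hidden outputs
    return float(w @ theta)

# hypothetical usage with m = 6 hidden neurons and illustrative inputs
rng = np.random.default_rng(0)
m = 6
x = np.array([2.0, 120.0, 7.1, 200.0])   # DO, SS, pH, COD (made-up values)
y = rbf_forward(x, rng.uniform(-2, 2, (m, 5)), rng.uniform(0.1, 1.0, m),
                rng.uniform(0, 1, m), rng.uniform(0, 1, m), y_prev=0.0)
```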
(3) Training the self-organizing recurrent RBF neural network;
{circle around (1)} Providing the self-organizing recurrent RBF neural network, wherein the initial number of hidden layer neurons is m, and m>2 is a positive integer. The inputs of the self-organizing recurrent RBF neural network are x(1), x(2), . . . , x(t), . . . , x(N), and the corresponding expected outputs are yd(1), yd(2), . . . , yd(t), . . . , yd(N). The expected error value is set to Ed, Ed∈(0, 0.01). Every element of the initial center value cj(1) is in (−2, 2), the width value σj(1) is in (0, 1), the initial feedback weight vj(1) is in (0, 1), j=1, 2, . . . , m; and the initial weight w(1) is in (0, 1);
{circle around (2)} Setting the learning step s=1;
{circle around (3)} t=s; according to Equations (1)-(4), calculating the output of the self-organizing recurrent RBF neural network, and then adjusting the parameters by exploiting a fast gradient descent algorithm:
cj(t+1)=cj(t)−ηc∂E(t)/∂cj(t), (Equation 6)
σj(t+1)=σj(t)−ησ∂E(t)/∂σj(t), (Equation 7)
vj(t+1)=vj(t)−ηv∂E(t)/∂vj(t), (Equation 8)
wj(t+1)=wj(t)−ηw∂E(t)/∂wj(t), (Equation 9)
wherein ηc, ησ, ηv and ηw are the learning rates of the center, the width, the feedback connection weight from the output layer to the hidden layer, and the connection weight between the hidden layer and the output layer, respectively. In addition, ηc∈(0, 0.01], ησ∈(0, 0.01], ηv∈(0, 0.02], ηw∈(0, 0.01]; cj(t+1)=[c1j(t+1), c2j(t+1), . . . , c4j(t+1)] denotes the center vector of the jth hidden neuron at time t+1; σj(t+1) is the radius or width of the jth hidden neuron at time t+1; vj(t+1) denotes the connection weight from the output layer to the jth hidden neuron at time t+1; and wj(t+1) is the connection weight between the jth hidden neuron and the output neuron at time t+1 (a code sketch of steps {circle around (3)} through {circle around (6)} is provided after step {circle around (8)});
{circle around (4)} When t>3, calculating the independent contribution:
wherein ψj(t) is the independent contribution of the jth hidden neuron at time t; qj(t−1) is the independent contribution output of the jth hidden neuron at time t−1; qj(t) is the independent contribution output of the jth hidden neuron at time t; moreover, qj=[qj(t−1), qj(t)] is the independent contribution output vector of the jth hidden neuron; and Q(t)=[q1(t), . . . , qm−1(t), qm(t)]T is the independent contribution matrix at time t,
Q(t)=Φ(t)Ω(t), (Equation 11)
wherein Ω(t) is a coefficients matrix which is provided as:
Ω(t)=D−1(t)Φ(t)B(t)z(t), (Equation 12)
wherein Φ(t)=[θ(t−1), θ(t)] is the output matrix of hidden layer at time t, θ(t−1)=[θ1(t−1), θ2(t−1), . . . , θm(t−1)]T is the output vector of hidden layer at time t−1, θ(t)=[θ1(t), θ2(t), . . . , θm(t)]T is the output vector of hidden layer at time t; D(t), B(t) and z(t) are the covariance matrix of Φ(t), the whitening matrix of y(t) and the whitening transformation matrix of y(t), respectively. D(t), B(t) and z(t) are provided as:
wherein
y(t)=Φ(t)δ(t), (Equation 16)
wherein δ(t) is the weight matrix of hidden layer to output layer
δ(t)=[w(t−1), w(t)], (Equation 17)
wherein w(t−1)=[w1(t−1), w2(t−1), . . . , wm(t−1)]T and w(t)=[w1(t), w2(t), . . . , wm(t)]T are the weight vectors between the hidden layer and the output layer at time t−1 and time t, respectively.
{circle around (5)} When t>3, calculating the activity degree of each hidden neuron:
Sj(t)=exp(−∥hj(t)−cj(t)∥), (Equation 18)
wherein Sj(t) is the activity degree of the jth hidden neuron at time t, j=1, 2, . . . m.
{circle around (6)} When t>3, adjusting the structure of the self-organizing recurrent RBF neural network:
In the process of adjusting the structure of the neural network, calculating the activity degree Sl(t) and the independent contribution ψl(t) of the lth hidden neuron.
When the activity degree and independent contribution of the lth hidden neuron satisfy:
Sl(t)=max S(t), (Equation 19)
ψl(t)=max ψ(t), (Equation 20)
wherein S(t)=[S1(t), . . . , Sm−1(t), Sm(t)] is the vector of activity degrees of the hidden neurons at time t, and ψ(t)=[ψ1(t), . . . , ψm−1(t), ψm(t)] is the vector of independent contributions of the hidden neurons at time t; adding one hidden neuron, so that the number of hidden neurons becomes M1=m+1; otherwise, the structure of the self-organizing recurrent RBF neural network is not adjusted and M1=m;
When the activity degree and independent contribution of the ith hidden neuron satisfy:
Si(t)=min S(t), (Equation 21)
ψi(t)=min ψ(t), (Equation 22)
deleting the ith hidden neuron and updating the number of hidden neurons M2=M1−1; otherwise the structure of self-organizing recurrent RBF neural network is not adjusted, M2=M1;
{circle around (7)} increasing the learning step s by one; if s<N, then turning to step {circle around (3)}; if s=N, turning to step {circle around (8)}.
{circle around (8)} according to Eq. (5), calculating the performance of self-organizing recurrent RBF neural network. If E(t)≧Ed, then turning to step {circle around (3)}; if E(t)<Ed, stopping the training process.
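For illustration only, one training iteration covering steps {circle around (3)} through {circle around (6)} may be sketched as below. The gradient forms assume the Gaussian hidden activation of Equation (2) and the squared error of Equation (5); the whitening matrices D(t), B(t) and z(t) are not fully specified in the text, so the independent_contribution helper below uses an ordinary covariance whitening as a labeled stand-in, and initializing a newly added neuron by cloning the most active one is likewise an assumption.

```python
import numpy as np

def train_step(x, y_d, c, sigma, v, w, y_prev,
               lr_c=0.01, lr_s=0.01, lr_v=0.02, lr_w=0.01):
    """One fast-gradient-descent update (Equations 6-9) plus the activity
    degree of Equation 18; gradients assume Gaussian hidden units and
    E = (yd - y)^2 / 2."""
    H = np.column_stack([np.tile(x, (len(v), 1)), v * y_prev])  # Eq. 3
    diff = H - c                      # centers share h_j's dimension (assumed)
    d2 = np.sum(diff ** 2, axis=1)
    theta = np.exp(-d2 / (2 * sigma ** 2))                      # Eq. 2 (assumed)
    y = w @ theta                                               # Eq. 4
    e = y_d - y
    g = e * w * theta                 # common chain-rule factor
    w += lr_w * e * theta                                       # Eq. 9
    c += lr_c * (g / sigma ** 2)[:, None] * diff                # Eq. 6
    sigma += lr_s * g * d2 / sigma ** 3                         # Eq. 7
    v += lr_v * g * (-diff[:, -1] / sigma ** 2) * y_prev        # Eq. 8
    S = np.exp(-np.sqrt(d2))                                    # Eq. 18
    return y, e, theta, S

def independent_contribution(theta_prev, theta_now, w_prev, w_now):
    """Loose stand-in for Equations 10-17: whiten each neuron's weighted
    output over two time steps and take row norms as contributions."""
    M = np.column_stack([theta_prev * w_prev, theta_now * w_now])
    cov = np.cov(M, rowvar=False) + 1e-8 * np.eye(2)
    val, vec = np.linalg.eigh(cov)
    Q = M @ ((vec / np.sqrt(val)) @ vec.T)   # rough analogue of Eq. 11
    psi = np.linalg.norm(Q, axis=1)
    return psi / psi.sum()

def adjust_structure(S, psi, c, sigma, v, w):
    """Steps 5-6 (Equations 19-22): grow when one neuron is both the most
    active and the largest contributor; prune the neuron that is both the
    least active and the smallest contributor."""
    if np.argmax(S) == np.argmax(psi):
        k = int(np.argmax(S))
        c = np.vstack([c, c[k]])              # clone the split neuron (assumed)
        sigma, v = np.append(sigma, sigma[k]), np.append(v, v[k])
        w = np.append(w, w[k] / 2.0)
        w[k] /= 2.0                           # halve the shared output weight
    if np.argmin(S) == np.argmin(psi):
        keep = np.arange(len(w)) != int(np.argmin(S))
        c, sigma, v, w = c[keep], sigma[keep], v[keep], w[keep]
    return c, sigma, v, w
```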
(4) BOD concentration prediction;
The testing samples are used as the input of self-organizing recurrent RBF neural network, and the output of neural network is the computed values of BOD concentration.
The novelties of this disclosure include:
(1) In order to detect BOD concentrations online with acceptable accuracy, an intelligent detection method is developed in this disclosure. The results demonstrate that BOD trends in WWTP can be predicted with acceptable accuracy using DO, SS, pH and COD data as input variables. This intelligent detection method not only solves the problem of online measurement of BOD concentrations with acceptable accuracy but also avoids the complicated process of developing new sensors and reduces the operating cost of WWTP.
(2) This intelligent detection method is based on the self-organizing recurrent RBF neural network and exploits the activity degree and independent contribution of the hidden neurons. The implementations of this disclosure may optimize both the parameters and the network size simultaneously during the learning process. Accordingly, online measurement of BOD concentrations may be performed with high measurement precision and strong adaptability to the environment.
This disclosure utilizes four input variables in this intelligent detection method to predict the BOD concentration. In fact, it is within the scope of this disclosure that any of the variables oxidation-reduction potential (ORP), DO, temperature, SS, pH, COD and total nitrogen (TN) may be used to predict BOD concentrations. Moreover, this intelligent detection method is also able to predict other variables in urban WWTP.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.
This disclosure takes the suspended solids (SS) concentration, dissolved oxygen (DO), pH, and chemical oxygen demand (COD) as the characteristic variables for BOD; except for pH (dimensionless), the unit of each of the above parameters is mg/L.
The experimental data comes from the water quality analysis records of a wastewater treatment plant in 2012. Data of SS concentration, DO, pH and COD are chosen as experimental samples; after eliminating abnormal samples, 100 groups of data are available, of which 60 groups are used as training samples and the remaining 40 groups are used as test samples, as sketched below.
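A minimal sketch of this split follows; the column layout and the random placeholder matrix are assumptions, since the plant's 2012 measurements themselves are not reproduced here.

```python
import numpy as np

# placeholder for the 100 cleaned sample groups: DO, SS, pH, COD -> BOD
data = np.random.default_rng(0).random((100, 5))   # stand-in for real data
X, y = data[:, :4], data[:, 4]
X_train, y_train = X[:60], y[:60]                  # 60 training groups
X_test, y_test = X[60:], y[60:]                    # 40 test groups
```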
This disclosure adopts the following technical scheme and implementation steps:
An intelligent detection method for the BOD concentration based on a self-organizing recurrent RBF neural network is described using the following operations.
(1) Determining the input and output variables of BOD: For the activated sludge sewage treatment process, the sewage treatment process variables are analyzed and the input variables of the BOD computing model are selected: DO concentration, effluent SS concentration, pH, and COD. The output value of the computing model is the detected BOD concentration.
(2) Computing model of intelligent detection of BOD: establishing a computing model of BOD based on a self-organizing recurrent RBF neural network. The structure of the recurrent RBF neural network may include three layers: an input layer, a hidden layer, and an output layer. The network structure is 4-m-1, meaning the input layer has 4 neurons and the hidden layer has m neurons. The connection weights between the input layer and the hidden layer are fixed to one, and the connection weights between the hidden layer and the output layer are randomly assigned values in the interval [−1, 1]. The number of training samples is N, and the input of the self-organizing recurrent RBF neural network is x(t)=[x1(t), x2(t), x3(t), x4(t)] at time t. The expected output of the neural network is expressed as yd(t) and the actual output as y(t). The computing method of BOD can be described as follows:
{circle around (1)} The Input Layer: There are 4 neurons representing the input variables in this layer. The output value of each neuron is:
ui(t)=xi(t), (Equation 23)
wherein ui(t) is the ith output value at time t, i=1, 2, . . . , 4, and the input vector is x(t)=[x1(t), x2(t), . . . , x4(t)].
{circle around (2)} The Hidden Layer: There are m neurons in the hidden layer. The output of the jth hidden neuron is:
θj(t)=exp(−∥hj(t)−cj(t)∥²/(2σj²(t))), (Equation 24)
wherein cj(t) denotes the center vector of the jth hidden neuron, cj(t)=[c1j(t), c2j(t), . . . , c4j(t)]T at time t, ∥hj(t)−cj(t)∥ is the Euclidean distance between hj(t) and cj(t), σj(t) is the radius or width of the jth hidden neuron at time t, and hj(t) is the input vector of the jth hidden neuron at time t, described as:
hj(t)=[u1(t), u2(t), . . . , u4(t), vj(t)×y(t−1)]T, (Equation 25)
y(t−1) is the output value of the output layer at time t−1, vj(t) denotes the connection weight from output layer to the jth hidden neuron at time t, and v(t)=[v1(t), v2(t), . . . , vm(t)]T, T represents transpose;
{circle around (3)} The Output Layer: There is only one node in this layer, and its output is:
y(t)=wT(t)θ(t), (Equation 26)
wherein w(t)=[w1(t), w2(t), . . . , wm(t)]T is the vector of connection weights between the hidden neurons and the output neuron at time t, θ(t)=[θ1(t), θ2(t), . . . , θm(t)]T is the output vector of the hidden layer, and y(t) represents the output of the recurrent RBF neural network at time t.
The error of the self-organizing recurrent RBF neural network is:
E(t)=(yd(t)−y(t))²/2, (Equation 27)
wherein yd(t) is the expected output of the neural network and y(t) is the actual output;
(3) Training the self-organizing recurrent RBF neural network;
{circle around (1)} Providing the self-organizing recurrent RBF neural network, wherein the initial number of hidden layer neurons is m, and m>2 is a positive integer. The inputs of the self-organizing recurrent RBF neural network are x(1), x(2), . . . , x(t), . . . , x(N), and the corresponding expected outputs are yd(1), yd(2), . . . , yd(t), . . . , yd(N). The expected error value is set to Ed, Ed∈(0, 0.01). Every element of the initial center value cj(1) is in (−2, 2), the width value σj(1) is in (0, 1), the initial feedback weight vj(1) is in (0, 1), j=1, 2, . . . , m; and the initial weight w(1) is in (0, 1);
{circle around (2)} Setting the learning step s=1;
{circle around (3)} t=s; according to Equations (23)-(26), calculating the output of the self-organizing recurrent RBF neural network, and then adjusting the parameters by exploiting a fast gradient descent algorithm:
cj(t+1)=cj(t)−ηc∂E(t)/∂cj(t), (Equation 28)
σj(t+1)=σj(t)−ησ∂E(t)/∂σj(t), (Equation 29)
vj(t+1)=vj(t)−ηv∂E(t)/∂vj(t), (Equation 30)
wj(t+1)=wj(t)−ηw∂E(t)/∂wj(t), (Equation 31)
wherein ηc, ησ, ηv and ηw are the learning rates of the center, the width, the feedback connection weight from the output layer to the hidden layer, and the connection weight between the hidden layer and the output layer, respectively. In addition, ηc∈(0, 0.01], ησ∈(0, 0.01], ηv∈(0, 0.02], ηw∈(0, 0.01]; cj(t+1)=[c1j(t+1), c2j(t+1), . . . , c4j(t+1)] denotes the center vector of the jth hidden neuron at time t+1; σj(t+1) is the radius or width of the jth hidden neuron at time t+1; vj(t+1) denotes the connection weight from the output layer to the jth hidden neuron at time t+1; and wj(t+1) is the connection weight between the jth hidden neuron and the output neuron at time t+1;
{circle around (4)} When t>3, calculating the independent contribution:
wherein ψj(t) is the independent contribution of the jth hidden neuron at time t; qj(t−1) is the independent contribution output of the jth hidden neuron at time t−1; qj(t) is the independent contribution output of the jth hidden neuron at time t; moreover, qj=[qj(t−1), qj(t)] is the independent contribution output vector of the jth hidden neuron; and Q(t)=[q1(t), . . . , qm−1(t), qm(t)]T is the independent contribution matrix at time t,
Q(t)=Φ(t)Ω(t), (Equation 33)
wherein Ω(t) is a coefficients matrix which is provided as:
Ω(t)=D−1(t)Φ(t)B(t)z(t), (Equation 34)
wherein Φ(t)=[θ(t−1), θ(t)] is output matrix of hidden layer at time t, θ(t−1)=[θ1(t−1), θ2(t−1), . . . , θm(t−1)]T is output vector of hidden layer at time t−1, θ(t)=[θ1(t), θ2(t), . . . , θm(t)]T is output vector of hidden layer at time t; D(t), B(t) and z(t) are the covariance matrix of Φ(t), the whitening matrix of y(t) and the whitening transformation matrix of y(t), respectively. D(t), B(t) and z(t) are provided as:
wherein
y(t)=Φ(t)δ(t), (Equation 38)
wherein δ(t) is the weight matrix of hidden layer to output layer,
δ(t)=[w(t−1),w(t)] (Equation 39)
wherein w(t−1)=[w1(t−1), w2(t−1), . . . , wm(t−1)]T and w(t)=[w1(t), w2(t), . . . , wm(t)]T are the weight vectors between the hidden layer and the output layer at time t−1 and time t, respectively.
{circle around (5)} When t>3, calculating the activity degree of each hidden neuron:
Sj(t)=exp(−∥hj(t)−cj(t)∥), (Equation 40)
wherein Sj(t) is the activity degree of the jth hidden neuron at time t, j=1, 2, . . . m.
{circle around (6)} When t>3, adjusting the structure of the self-organizing recurrent RBF neural network:
In the process of adjusting the structure of the neural network, calculating the activity degree Sl(t) and the independent contribution ψl(t) of the lth hidden neuron.
When the activity degree and independent contribution of the lth hidden neuron satisfy:
Sl(t)=max S(t), (Equation 41)
ψl(t)=max ψ(t), (Equation 42)
wherein S(t)=[S1(t), . . . , Sm−1(t), Sm(t)] is the vector of activity degrees of the hidden neurons at time t, and ψ(t)=[ψ1(t), . . . , ψm−1(t), ψm(t)] is the vector of independent contributions of the hidden neurons at time t; adding one hidden neuron, so that the number of hidden neurons becomes M1=m+1; otherwise, the structure of the self-organizing recurrent RBF neural network is not adjusted and M1=m;
When the activity degree and independent contribution of the ith hidden neuron satisfy:
Si(t)=min S(t), (Equation 43)
ψi(t)=min ψ(t), (Equation 44)
deleting the ith hidden neuron and updating the number of hidden neurons M2=M1−1; otherwise the structure of self-organizing recurrent RBF neural network is not adjusted, M2=M1;
{circle around (7)} increasing the learning step s by one; if s<N, then turning to step {circle around (3)}; if s=N, turning to step {circle around (8)}.
{circle around (8)} according to Eq. (27), calculating the performance of the self-organizing recurrent RBF neural network. If E(t)≧Ed, then turning to step {circle around (3)}; if E(t)<Ed, stopping the training process.
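For illustration, the overall training schedule of steps {circle around (1)}-{circle around (8)} can be sketched as the following loop, reusing the train_step, independent_contribution and adjust_structure helpers sketched earlier; the epoch cap and the average-error form of the stopping test are assumptions where the text leaves details open.

```python
import numpy as np

def train(X, yd, m=6, Ed=0.005, epochs=200):
    """Sketch of steps 1-8: initialize within the ranges of step (3)-1
    (widths kept away from zero for numerical stability), then iterate
    gradient updates and structure adjustment until the average squared
    error falls below Ed (stopping form assumed)."""
    rng = np.random.default_rng(1)
    c = rng.uniform(-2, 2, (m, 5))       # centers (dimension of h_j assumed)
    sigma = rng.uniform(0.1, 1.0, m)     # widths
    v = rng.uniform(0.0, 1.0, m)         # feedback weights
    w = rng.uniform(0.0, 1.0, m)         # output weights
    for _ in range(epochs):
        y_prev, E = 0.0, 0.0
        theta_prev = w_prev = None
        for t in range(len(X)):
            y, e, theta, S = train_step(X[t], yd[t], c, sigma, v, w, y_prev)
            if t > 3 and theta_prev is not None and len(theta_prev) == len(theta):
                psi = independent_contribution(theta_prev, theta, w_prev, w)
                c, sigma, v, w = adjust_structure(S, psi, c, sigma, v, w)  # step 6
            theta_prev, w_prev = theta, w.copy()
            y_prev, E = y, E + 0.5 * e ** 2
        if E / len(X) < Ed:              # step 8: stop once the error is small
            break
    return c, sigma, v, w
```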
The training result of the intelligent detection method for BOD concentration is shown in the accompanying figure.
(4) BOD concentration prediction;
The testing samples are used as the input of the self-organizing recurrent RBF neural network, and the output of the neural network is the computed value of the BOD concentration. The prediction result is shown in the accompanying figure.
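A hypothetical end-to-end usage of the sketches above: train on the 60 training groups, then run the trained network over the 40 test groups to obtain the computed BOD values.

```python
# reuses train() from the training-loop sketch and rbf_forward() from the
# forward-pass sketch, plus X_train/y_train/X_test from the data split
c, sigma, v, w = train(X_train, y_train)

y_prev, bod_pred = 0.0, []
for x_t in X_test:                       # step (4): feed each test sample
    y_prev = rbf_forward(x_t, c, sigma, v, w, y_prev)
    bod_pred.append(y_prev)
```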
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts are disclosed as example forms of implementing the claims.