This application claims the benefit of Taiwan application Serial No. 107142078, filed Nov. 26, 2018, the subject matter of which is incorporated herein by reference.
The invention relates in general to a model building device and a loading disaggregation system, and more particularly to a model building device and a loading disaggregation system for analyzing users' electricity usage behavior.
With the development of technology, electricity demand is growing rapidly. To reduce the consumption of resources, power saving has become an important issue. For the public, only the total electricity consumption can be inferred from the electricity bill. To know the electricity consumption of individual electrical appliances, each electrical appliance would require a respective smart meter. However, smart meters are quite expensive, and it is impractical for the public to install a smart meter on each electrical appliance.
The invention is directed to a model building device and a loading disaggregation system. The loading disaggregation system includes a data processing device, a model building device and a model evaluation device. The loading disaggregation system operates in a model building mode M1 and a model application mode M2 in sequence. In the model building mode M1, the model building device repetitively trains and tests the disaggregation model. After the model evaluation device verifies and confirms that the parameters of the disaggregation model have been well set up, the loading disaggregation system enters the model application mode M2. The model building device can disaggregate the aggregated data outputted from the total electricity meter to generate disaggregation results revealing the electricity consumption of individual electrical appliances.
According to a first aspect of the present invention, a model building device for disaggregating the aggregated data measured during a unit processing period and outputted from the total electricity meter is provided.
According to a second aspect of the present invention, a loading disaggregation system is provided.
The above and other aspects of the invention will become better understood with regard to the following detailed description of the preferred but non-limiting embodiment(s). The following description is made with reference to the accompanying drawings.
To ascertain the electricity usage of household electrical appliances, the present invention provides a disaggregation model which can estimate the electricity usage (electricity consumption) behavior of a designated electrical appliance according to raw data outputted from a total electricity meter. The loading disaggregation system proposed in the embodiments according to the present invention takes advantage of nonintrusive load monitoring (NILM) technology. The disaggregation model can be applied to a large number of residences or electricity users to analyze the electricity usage behavior of the electrical appliances at a low cost.
In the specification, the loading disaggregation system operates in one of two modes, that is, model building mode M1 and model application mode M2. The electricity users involved in the model building mode M1 are defined as original electricity users, and the electricity users involved in the model application mode M2 are defined as ordinary electricity users. In the model building mode M1, the loading disaggregation system receives the raw data in connection with the original electricity users to determine and set the parameters of the disaggregation model. In the model application mode M2, the loading disaggregation system uses the disaggregation model with well fitted parameters to disaggregate the raw data in connection with the ordinary electricity users to acquire the electricity usage behavior. For illustration purposes only, four electrical appliances (electrical appliances A, B, C, and D) are exemplified in the specification to describe how the disaggregation model disaggregates the raw data to acquire the electricity usage behavior of the electrical appliances. The parameters related to the electrical appliances A, B, C, and D are marked with subscript characters a, b, c and d, respectively.
In practice, the types and the quantity of the electrical appliances are not limited. For the ordinary electricity users, only total electricity meters are required to measure the total electricity consumption (represented by the relation between time and power (watt)), so that the cost is relatively low and the method is relatively convenient and easy. In the specification, the parameters related to the total electricity meters are marked with subscript character m. After building the disaggregation model, the electricity providers can provide the service of electricity usage analysis for many consumers by utilizing this disaggregation model.
The following description together with
Please refer to
Please refer to
In the model building mode M1, the loading disaggregation system 10 enables the data processing device 101, the model building device 103 and the model evaluation device 105. In the model application mode M2, the loading disaggregation system 10 enables the data processing device 101 and the model building device 103, and disables the model evaluation device 105. It is to be noted that when the loading disaggregation system 10 sets up the parameters of the disaggregation model, the operation includes the repetitive setting steps performed by the model building device 103, the repetitive validation steps performed by the model evaluation device 105, and the repetitive updating steps performed by the model building device 103, in order to find the proper parameters. Therefore, the model building mode M1 of the loading disaggregation system 10 further includes a training stage STG1 and a testing stage STG2. The loading disaggregation system 10 may operate at the training stage STG1 and the testing stage STG2 alternately.
The parameters of the disaggregation model may include structure parameters and auxiliary parameters. The structure parameters are fundamental parameters for processing data in the disaggregation model, and also called hyperparameters. The auxiliary parameters are coefficients (weights) for analyzing data in the disaggregation model.
When the loading disaggregation system 10 is at the training stage STG1, the model building device 103 updates the auxiliary parameters through backpropagation according to an evaluation result outputted from the model evaluation device 105. If the evaluation result outputted from the model evaluation device 105 indicates that the disaggregation model cannot achieve convergence, the model building device 103 will update the learning rate of the structure parameters. On the other hand, when the loading disaggregation system 10 is at the testing stage STG2, the model building device 103 adjusts the structure parameters according to the evaluation result outputted from the model evaluation device 105. The details of the structure parameters and the auxiliary parameters are described below.
Depending on its stage, the loading disaggregation system 10 further divides the raw data rDAT in connection with the original electricity users into two portions: raw training data and raw testing data. The raw training data is used at the training stage STG1 to update the auxiliary parameters of the disaggregation model. The raw testing data is used at the testing stage STG2 to inspect whether the structure parameters and the auxiliary parameters of the disaggregation model are viable for the testing dataset tstDSET.
In practice, the raw training data and the raw testing data about the electricity usage may correspond to the same original electricity users but different measurement periods Tdet. Alternatively, the raw training data and the raw testing data may correspond to different original electricity users with the same measurement period Tdet. The definition of the original electricity users and the related raw training data and raw testing data is not limited to the embodiments and can be adjusted as needed.
Please refer to
At first, the loading disaggregation system 10 is at the training stage STG1. At this time, the model building device 103 initializes the structure parameters and the auxiliary parameters of the loading disaggregation system 10 (step S111). Then, the loading disaggregation system 10 uses the raw training data to train and evaluate the disaggregation model until the evaluation result indicates that the disaggregation model can generate a convergent disaggregation result (step S113). In this step, the model building device 103 updates the auxiliary parameters and/or the learning rate of the structure parameters. Subsequently, the loading disaggregation system 10 terminates the training stage STG1 and enters the testing stage STG2.
At the testing stage STG2, the loading disaggregation system 10 uses the raw testing data to test the disaggregation model. The model evaluation device 105 determines the accuracy of the synthesized simulation data doutPa, doutPb, doutPc and doutPd (step S121). After obtaining the accuracy of the synthesized simulation data doutPa, doutPb, doutPc, and doutPd, the loading disaggregation system 10 determines whether the accuracy reaches a predetermined accuracy threshold (step S123).
If it is determined that the accuracy does not reach the predetermined accuracy threshold in step S123, it reveals that the setting of the auxiliary parameters used in the model building device 103 cannot enable the disaggregation model to viably disaggregate the raw testing data. In this situation, the loading disaggregation system 10 resets the structure parameters of the disaggregation model to rebuild the disaggregation model (step S125). Then, the loading disaggregation system 10 enters the training stage STG1 again and uses the reset structure parameters. Subsequently, the loading disaggregation system 10 repeats step S113.
Otherwise, if it is determined that the accuracy reaches the predetermined accuracy threshold in step S123, it reveals that the model evaluation device 105 judges that the comparison result meets the accuracy requirement at the testing stage STG2. Subsequently, the loading disaggregation system 10 terminates the model building mode M1 and enters the model application mode M2.
Then, the loading disaggregation system 10 receives the total raw data from the total electricity meters in connection with the ordinary electricity users 14, and generates synthesized simulation data doutPa, doutPb, doutPc and doutPd corresponding to the electrical appliances A, B, C and D of the ordinary electricity users 14 according to the total raw data (step S131). The synthesized simulation data in this specification represents the simulated characteristic waveform which reveals the power consumption of a specific electrical appliance recorded by the respective electricity meter. Afterward, the loading disaggregation system 10 transmits the synthesized simulation data doutPa, doutPb, doutPc and doutPd corresponding to the electrical appliances A, B, C and D of the ordinary electricity users 14 to the data-gathering and analyzing device 15. Then, the data-gathering and analyzing device 15 gathers and analyzes the synthesized simulation data doutPa, doutPb, doutPc and doutPd corresponding to the electrical appliances A, B, C and D of the ordinary electricity users 14 (step S141).
The principle of acquiring and processing the raw data by the data processing device 101 is substantially the same in any condition, irrespective of the stage and mode of the loading disaggregation system 10 and the composition of the raw data (that is, including the respective raw data rDATa, rDATb, rDATc, and rDATd or not).
Please refer to
Please refer to
As described above, the loading disaggregation system 10 may operate in one of three conditions, the training stage STG1 in the model building mode M1, the testing stage STG2 in the model building mode M1 and the model application mode M2. Three sets of arrows are shown in the drawing to indicate the procedure that the data processing device 101 processes the raw data rDAT.
Firstly, the first set of arrows indicate the training stage STG1 in the model building mode M1. As shown by the arrow direction, after the data processing device 101 receives the recorded data from the training electricity users, it processes the recorded meter waveforms. The total meter waveforms Wm and the respective appliance meter waveforms Wa, Wb, Wc and Wd are processed by the data processing module 1011, the data balance module 1013 and the data augmentation module 1015 to generate augmented data augDAT for the training dataset trnDSET.
Secondly, the second set of arrows indicate the testing stage STG2 in the model building mode M1. After the data processing device 101 receives the total meter waveforms Wm and the respective appliance meter waveforms Wa, Wb, Wc and Wd in connection with the testing electricity users, the waveforms are processed only with the data processing module 1011 to generate preprocessed data ppDAT for the testing dataset tstDSET.
Thirdly, the third set of arrows indicate the model application mode M2. The data processing device 101 only receives the total meter waveforms Wm. The data processing module 1011 processes the total meter waveforms Wm to generate the preprocessed data ppDAT for the ordinary dataset nmDSET.
Compared with the training stage STG1, the data at the testing stage STG2 or in the model application mode M2 does not pass through the data balance module 1013 and the data augmentation module 1015. In other words, the data processing module 1011 is required in all of the three conditions. Therefore, the present disclosure only describes how the data processing device 101 processes the meter waveforms when the loading disaggregation system 10 operates at the training stage STG1, but the principle can be applied to the other two conditions.
At first, the data sampling module 1011a defines a sampling cycle Tsmp (for example, one minute). The data sampling module 1011a samples each meter waveform at intervals of the sampling cycle Tsmp. Each sampling time point is defined as a timestamp. Considering the total meter waveform Wm in connection with one electricity user during one-year measurement period Tdet, 525,600 (60*24*365=525,600) entries of total sampling data smpDATm and 525,600 entries of each respective sampling data smpDATa, smpDATb, smpDATc, and smpDATd can be obtained.
Because the one-year measurement period Tdet is quite long, the one-year measurement period Tdet is further divided into time units (that is, unit processing period Tunit) with fixed time length. In this embodiment, it is assumed that the unit processing period Tunit is one hour. Therefore, there are 8,760 (24*365=8,760) unit processing periods Tunit in one-year measurement period Tdet. Each unit processing period Tunit includes 60 timestamps, and the 60 timestamps correspond to 60 entries of sampling data smpDAT, respectively.
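The counting above can be verified with a few lines of arithmetic, assuming the one-minute sampling cycle Tsmp, one-hour unit processing period Tunit and one-year measurement period Tdet of this embodiment:

```python
# Sampling bookkeeping for one electricity user under the assumed
# configuration: Tsmp = 1 minute, Tunit = 1 hour, Tdet = 1 year.

MINUTES_PER_HOUR = 60
HOURS_PER_YEAR = 24 * 365

# entries of sampling data smpDAT per meter waveform over Tdet
timestamps_per_year = MINUTES_PER_HOUR * HOURS_PER_YEAR

# number of unit processing periods Tunit in Tdet
unit_periods_per_year = HOURS_PER_YEAR

# timestamps (entries of smpDAT) within one Tunit
timestamps_per_unit_period = MINUTES_PER_HOUR
```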
Furthermore, the data processing module 1011 can preprocess the sampling data smpDAT obtained in each unit processing period Tunit, such as normalization and noise filtering, to generate total preprocessed data ppDATm and respective preprocessed data ppDATa, ppDATb, ppDATc, and ppDATd.
Furthermore, in the embodiment, both the total raw data and the respective raw data in connection with one electricity user are obtained in one unit processing period Tunit and collected as one set of data. For one electricity user, if the measurement period Tdet is set as one year and the unit processing period Tunit is set as one hour, the total meter waveform Wm and the respective meter waveforms Wa, Wb, Wc, and Wd correspond to 8,760 (24*365=8,760) sets of raw data rDAT. The data sampling module 1011a samples the 8,760 sets of raw data rDAT to generate 8,760 sets of sampling data smpDAT. Then, the preprocessing module 1011b processes the 8,760 sets of sampling data smpDAT to generate 8,760 sets of preprocessed data ppDAT.
The data sampling module 1011a and the preprocessing module 1011b sample and preprocess the raw data rDAT in units of the unit processing period Tunit. Accordingly, the number of data entries inputted into the data sampling module 1011a and the preprocessing module 1011b equals the number of data entries outputted from the data sampling module 1011a and the preprocessing module 1011b.
For illustration purposes, 1,000 training original users are assigned for the training stage STG1 in the model building mode M1, 1,000 testing original users are assigned for the testing stage STG2 in the model building mode M1, and 50,000 ordinary users are assigned for model application mode M2 in the embodiment. In addition, as defined above, the measurement period Tdet is set as one year, the unit processing period Tunit is set as one hour and the sampling cycle Tsmp is set as one minute. According to the setting, the contents of the training dataset trnDSET, the testing dataset tstDSET and the ordinary dataset nmDSET are described below.
At the training stage STG1, the preprocessing module 1011b outputs 8,760,000 sets (8,760*1000=8,760,000) of preprocessed data ppDAT (including data related to the total electricity meters and the respective electricity meters corresponding to the electrical appliances A, B, C, and D). These preprocessed data ppDAT are processed through the data balance module 1013 and the data augmentation module 1015 before being provided for the training dataset trnDSET.
At the testing stage STG2, the preprocessing module 1011b outputs 8,760,000 sets (8,760*1000=8,760,000) of preprocessed data ppDAT (including data related to the total electricity meters and the respective electricity meters corresponding to the electrical appliances A, B, C, and D). These preprocessed data ppDAT are provided for the testing dataset tstDSET. In the model application mode M2, the preprocessing module 1011b outputs 438,000,000 sets (8,760*50,000=438,000,000) of preprocessed data ppDAT (only including data related to the total electricity meters). These preprocessed data ppDATm are provided for the ordinary dataset nmDSET.
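Under the assumed configuration (1,000 training original users, 1,000 testing original users, 50,000 ordinary users, one-year measurement period, one-hour unit processing period), the dataset sizes quoted above follow directly:

```python
# Dataset sizes per the assumed configuration: each user contributes
# one set of preprocessed data ppDAT per unit processing period Tunit.

UNIT_PERIODS_PER_YEAR = 24 * 365  # 8,760 one-hour periods per user

training_sets = UNIT_PERIODS_PER_YEAR * 1_000    # toward trnDSET (before balance/augmentation)
testing_sets = UNIT_PERIODS_PER_YEAR * 1_000     # toward tstDSET
ordinary_sets = UNIT_PERIODS_PER_YEAR * 50_000   # toward nmDSET (total meters only)
```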
At the training stage STG1, the preprocessed data ppDAT should be further processed by the data balance module 1013 and the data augmentation module 1015. After referring to the usage behavior of the electrical appliances A, B, C and D during the unit processing period Tunit, the data balance module 1013 performs bootstrapping on the total preprocessed data ppDATm, the respective preprocessed data ppDATa, ppDATb, ppDATc and ppDATd to generate total balanced data blDATm and respective balanced data blDATa, blDATb, blDATc and blDATd. The data augmentation module 1015 augments the total balanced data blDATm and the respective balanced data blDATa, blDATb, blDATc and blDATd according to at least one data augmentation rule to generate total augmented data augDATm and respective augmented data augDATa, augDATb, augDATc, and augDATd. Afterwards, the total augmented data augDATm, and the respective augmented data augDATa, augDATb, augDATc and augDATd generated by the data augmentation module 1015 are provided for the training dataset trnDSET.
In brief, the data balance module 1013 and the data augmentation module 1015 can provide greater diversity of data for the model building device 103 at the training stage STG1 so that the disaggregation model generalizes well. Therefore, the number of entries of the total balanced data blDATm is greater than the number of entries of the total preprocessed data ppDATm, and the number of entries of the total augmented data augDATm is greater than the number of entries of the total balanced data blDATm.
The data balance module 1013 balances the data and the data augmentation module 1015 augments the data in units of the unit processing period Tunit. Therefore, after the balance procedure, the number of entries of the total balanced data blDATm is equal to the number of entries of the respective balanced data blDATa, blDATb, blDATc or blDATd. After the augmentation procedure, the number of entries of the total augmented data augDATm is equal to the number of entries of the respective augmented data augDATa, augDATb, augDATc or augDATd.
The description of how the data balance module 1013 and the data augmentation module 1015 increase the data entries in the training dataset trnDSET is given below. In practice, the data balance module 1013 and the data augmentation module 1015 can increase the data entries in the training dataset trnDSET through other known schemes, and the present invention does not limit the schemes to the embodiments given in the specification.
In the embodiment, the data balance module 1013 may determine the work time of respective electrical appliances A, B, C and D in one day according to the predefined conditions. Then, the data balance module 1013 raises the ratio of the data related to the electrical appliance(s) with shorter work time in the training dataset trnDSET.
Because of various factors, the work times and usage behavior of the same kind of electrical appliance may be different for different electricity users, and the work times and usage behavior of one electrical appliance for the same electricity user may be different in different seasons. Hence, the present invention further determines the work times of respective electrical appliances and raises the data amount of the less-used electrical appliance. For example, a power threshold Pth and a time threshold are defined for each electrical appliance A, B, C or D to be compared with the measuring result of the respective electrical appliances A, B, C and D. If the measuring result reaches the power threshold Pth and the time threshold for a specific electrical appliance A, B, C or D, it indicates that the specific electrical appliance A, B, C or D is actually used in the unit processing period Tunit, and the work time of the specific electrical appliance A, B, C or D is obtained. The probability (ratio) of the data related to the less-used electrical appliance in the training dataset trnDSET is purposely raised. Because of the newly added data, the number of entries of the balanced data blDAT generated by the data balance module 1013 is greater than the number of entries of the preprocessed data ppDAT.
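A minimal sketch of this balancing idea follows. The power threshold, time threshold and duplication factor are illustrative assumptions (the disclosure does not fix their values), and a real implementation would bootstrap by resampling rather than plain duplication.

```python
# Illustrative balancing: an appliance counts as "used" in a unit
# processing period Tunit when its power reaches a power threshold Pth
# for at least a time threshold's worth of timestamps; entries in which
# it is used are duplicated so their ratio in trnDSET rises.
# Threshold values and the boost factor are assumed, not disclosed.

def is_used(samples, power_threshold, time_threshold):
    """True if at least `time_threshold` samples reach `power_threshold`."""
    return sum(1 for p in samples if p >= power_threshold) >= time_threshold


def balance(entries, power_threshold=100, time_threshold=5, boost=2):
    """Raise the ratio of entries in which the appliance is used."""
    balanced = []
    for samples in entries:
        balanced.append(samples)
        if is_used(samples, power_threshold, time_threshold):
            balanced.extend([samples] * (boost - 1))  # newly added copies
    return balanced
```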
In the embodiment, the data augmentation module 1015 augments the total balanced data blDATm and the respective balanced data blDATa, blDATb, blDATc and blDATd according to at least one data augmentation method. For example, the data augmentation module 1015 may adopt the data augmentation methods such as truncation, adding noise, signal synthesis and shifting to augment the balanced data blDAT. It is to be noted that the data augmentation is not limited to these methods.
The data augmentation method is applied to all of the balanced data blDAT so that the number of entries of the augmented data increases proportionally. For example, there are X sets of balanced data blDAT and five data augmentation methods are used. Thus, additional 5X sets of data are obtained from the five data augmentation methods. Therefore, the data augmentation module 1015 outputs 6X sets of augmented data augDAT.
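As a sketch, applying the four augmentation methods named above (truncation, adding noise, signal synthesis, shifting) to X balanced sets yields 4X additional sets, i.e. 5X in total; with five methods, as in the example, the output is 6X. The concrete transforms below are placeholder assumptions, not the disclosed implementations.

```python
import random

# Illustrative augmentation: every method produces one extra copy of
# each balanced entry, so the output grows proportionally.
# The transforms are simplistic stand-ins for the named methods.

def truncate(x):
    """Zero out the second half of the waveform (assumed truncation)."""
    half = len(x) // 2
    return x[:half] + [0.0] * (len(x) - half)

def add_noise(x):
    """Add Gaussian noise of assumed unit amplitude."""
    return [v + random.gauss(0.0, 1.0) for v in x]

def synthesize(x):
    """Assumed signal synthesis: mix the waveform with its reverse."""
    return [v + w for v, w in zip(x, reversed(x))]

def shift(x):
    """Circularly shift the waveform by one timestamp."""
    return x[1:] + x[:1]

METHODS = [truncate, add_noise, synthesize, shift]

def augment(balanced_entries):
    augmented = list(balanced_entries)  # keep the X originals
    for method in METHODS:
        augmented.extend(method(e) for e in balanced_entries)
    return augmented  # X * (1 + len(METHODS)) sets
```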
In practice, the data augmentation module 1015 can significantly increase the quantity of the augmented data augDAT in the training dataset trnDSET. Furthermore, one data augmentation method can generate several entries of augmented data by varying its parameters. For example, the parameters may include truncation time, noise amplitude, time for adding noise, method of signal synthesis and shifting ratio.
As described above, the data outputted by the data processing device 101 in one unit processing period Tunit are collected as one set of data. Among each set of data outputted by the data processing device 101, the data related to the total electricity meters will be sent to the model building device 103 and defined as total aggregated data ainPm, whereas the data related to the electrical appliances A, B, C and D will be sent to the model evaluation device 105 and defined as respective verification data dvrPa, dvrPb, dvrPc and dvrPd.
As described above, in the embodiment, the loading disaggregation system 10 performs different operations in different modes. In the model building mode M1, the model building device 103 of the loading disaggregation system 10 updates the parameters repetitively. After the model evaluation device 105 confirms that the accuracy of the disaggregation model reaches the predetermined accuracy threshold at the testing stage STG2, the loading disaggregation system 10 enters the model application mode M2 and only uses the disaggregation model afterwards.
Please refer to
The internal elements of the data processing device 101 and their connection are described first. The data processing device 101 includes the data processing module 1011, the data balance module 1013, the data augmentation module 1015 and the data receiving module 1017. The data receiving module 1017 is in communication with the network 11 and receives the raw data rDAT corresponding to the electricity users through the network 11. In the data processing device 101, the data receiving module 1017 is electrically connected to the data sampling module 1011a. The preprocessing module 1011b is electrically connected to the data sampling module 1011a, the data balance module 1013, the model building device 103 and the model evaluation device 105. The data augmentation module 1015 is electrically connected to the data balance module 1013, the model building device 103 and the model evaluation device 105.
Then, the internal elements of the model building device 103 and their connection are described. The model building device 103 includes a training batch-determination module 1033, a usage pattern-analyzing module 1035, an information mapping module 1037, an encoded data-reshaping module 1038, a time series-analyzing module 1039, a structure parameter-updating module 1031a and an auxiliary parameter-updating module 1031b.
The training batch-determination module 1033 is electrically connected to the data augmentation module 1015, the usage pattern-analyzing module 1035 and the model evaluation device 105. The training batch-determination module 1033 divides the sets of augmented data augDAT into batches wherein each set of augmented data augDAT includes the total augmented data augDATm and the respective augmented data augDATa, augDATb, augDATc, and augDATd. The way of dividing the augmented data augDAT into batches does not change the number of sets of the augmented data augDAT.
The usage pattern-analyzing module 1035 is electrically connected to the training batch-determination module 1033 and the preprocessing module 1011b. The information mapping module 1037 is electrically connected to the usage pattern-analyzing module 1035 and the encoded data-reshaping module 1038. The time series-analyzing module 1039 is electrically connected to the encoded data-reshaping module 1038 and the model evaluation device 105. The details about processing the data with the usage pattern-analyzing module 1035, the information mapping module 1037 and the time series-analyzing module 1039 will be further described with reference to
In brief, the usage pattern-analyzing module 1035 receives the total aggregated data ainPm, analyzes the total aggregated data ainPm based on the detection conditions, and generates usage pattern information (for example, time-frequency usage pattern information or edge usage pattern information). The information mapping module 1037 maps the usage pattern information to form encoded data having multiple mapping dimensions correspondingly. The time series-analyzing module 1039 analyzes the correlation of the encoded data corresponding to each timestamp to generate synthesized simulation data doutPa, doutPb, doutPc and doutPd with a time length of the unit processing period Tunit. Each of the usage pattern-analyzing module 1035, the information mapping module 1037 and the time series-analyzing module 1039 operates in the same way through every mode and every stage of the loading disaggregation system 10.
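A deliberately simplified sketch of this data flow is given below: edge detection stands in for the usage pattern analysis, sign bucketing for the information mapping, and a weighted reconstruction for the time-series analysis. None of these placeholder functions reproduces the actual modules; they only mirror the flow from total aggregated data ainPm through usage pattern information and encoded data to synthesized simulation data.

```python
# Toy stand-ins for modules 1035, 1037 and 1039 (assumptions only).

def analyze_usage_pattern(ain_pm):
    """Edge usage pattern information: power change between timestamps."""
    return [b - a for a, b in zip(ain_pm, ain_pm[1:])]


def map_information(edges, dims=3):
    """Encode each edge into `dims` mapping dimensions (sign buckets)."""
    def encode(e):
        vec = [0.0] * dims
        vec[0 if e > 0 else 1 if e < 0 else 2] = abs(e) or 1.0
        return vec
    return [encode(e) for e in edges]


def analyze_time_series(encoded, weights):
    """Per-appliance weighted reconstruction over the mapping dimensions."""
    return [sum(w * v for w, v in zip(weights, vec)) for vec in encoded]


ain_pm = [0, 0, 100, 100, 0, 0]  # assumed total aggregated data for one Tunit
edges = analyze_usage_pattern(ain_pm)
encoded = map_information(edges)
dout_pa = analyze_time_series(encoded, weights=[1.0, 1.0, 0.0])
```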
As described above, the internal control parameters in the loading disaggregation system 10 include the structure parameters and the auxiliary parameters. The structure parameters include, for example, sampling cycle Tsmp (for example, one minute), unit processing period Tunit (for example, one hour), time-frequency range of the time-frequency detector (receptive field size of convolutional neural network (CNN) which is related to the work time of the electrical appliance), power threshold Pth, time threshold, measurement period Tdet, number of electricity users, and batch size for training the disaggregation model. The auxiliary parameters include, for example, weight used in the usage pattern-analyzing module 1035, the information mapping module 1037, the encoded data-reshaping module 1038 and the time series-analyzing module 1039. The structure parameter-updating module 1031a is configured to set the structure parameters, and the auxiliary parameter-updating module 1031b is configured to update the auxiliary parameters. In the loading disaggregation system 10, the auxiliary parameters are repetitively updated at the training stage STG1, while the structure parameters are adjusted only when it is determined that the accuracy of the disaggregation model does not reach the predetermined accuracy threshold at the testing stage STG2. Therefore, the frequency for updating the structure parameters is lower than that of the auxiliary parameters.
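The two parameter families can be pictured as two containers updated at different rates: the structure parameters change only on a testing-stage failure, while the auxiliary parameters change at every training iteration. The field names and default values below are illustrative assumptions, not values fixed by the disclosure.

```python
from dataclasses import dataclass, field

# Assumed grouping of the internal control parameters.

@dataclass
class StructureParameters:
    """Hyperparameters, set by the structure parameter-updating module 1031a."""
    sampling_cycle_minutes: int = 1        # Tsmp
    unit_processing_hours: int = 1         # Tunit
    receptive_field_size: int = 16         # time-frequency range of the detector
    power_threshold_watts: float = 100.0   # Pth
    batch_size: int = 32                   # batch size for training


@dataclass
class AuxiliaryParameters:
    """Weights, updated by the auxiliary parameter-updating module 1031b."""
    weights: list = field(default_factory=list)
```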
At the training stage STG1, the auxiliary parameter-updating module 1031b receives a verification result (that is, comparison result indicating the similarity between the synthesized simulation data doutPa, doutPb, doutPc and doutPd and the respective verification data dvrPa, dvrPb, dvrPc, and dvrPd) from the model evaluation device 105. Furthermore, the auxiliary parameter-updating module 1031b sends control signals to the usage pattern-analyzing module 1035, the information mapping module 1037, the encoded data-reshaping module 1038 and the time series-analyzing module 1039 to update the auxiliary parameters according to the verification result. In addition, the structure parameter-updating module 1031a also adjusts the learning rate of the structure parameters at the training stage STG1.
At the testing stage STG2, the structure parameter-updating module 1031a receives the verification result from the model evaluation device 105. If the verification result does not reach the predetermined accuracy threshold, it reveals that the similarity between the synthesized simulation data doutPa, doutPb, doutPc and doutPd and the respective verification data dvrPa, dvrPb, dvrPc, and dvrPd is not satisfied. Therefore, the model building device 103 should adjust the structure parameters and rebuild/retrain the disaggregation model. In this condition, the structure parameter-updating module 1031a adjusts the structure parameters of the data sampling module 1011a, the preprocessing module 1011b, the data balance module 1013, the data augmentation module 1015, the training batch-determination module 1033, the usage pattern-analyzing module 1035, the information mapping module 1037, the encoded data-reshaping module 1038 and the time series-analyzing module 1039. On the contrary, if the verification result reaches the predetermined accuracy threshold, it reveals that the similarity between the synthesized simulation data doutPa, doutPb, doutPc and doutPd and the respective verification data dvrPa, dvrPb, dvrPc, and dvrPd is satisfied. Thus, the model building device 103 completes parameter setting of the disaggregation model.
The model evaluation device 105 includes disaggregation evaluation modules 1051 electrically connected between the time series-analyzing module 1039 and the data processing device 101. The disaggregation evaluation module 1051 corresponding to the electrical appliance A receives the synthesized simulation data doutPa and the respective verification data dvrPa from the time series-analyzing module 1039 and the data processing device 101, respectively, and compares both to obtain the similarity between them. Afterwards, the comparison result about the similarity is sent to the auxiliary parameter-updating module 1031b. The operation of other disaggregation evaluation modules 1051 corresponding to the electrical appliances B, C and D can be derived from the above description, and the repetitious details are not given herein.
When the loading disaggregation system 10 is at the training stage STG1, the disaggregation evaluation modules 1051 receive the respective augmented data augDATa, augDATb, augDATc and augDATd from the training batch-determination module 1033, and receive the synthesized simulation data doutPa, doutPb, doutPc and doutPd from the time series-analyzing module 1039. Afterwards, the disaggregation evaluation modules 1051 compare the respective augmented data augDATa, augDATb, augDATc and augDATd received from the training batch-determination module 1033 with the synthesized simulation data doutPa, doutPb, doutPc and doutPd received from the time series-analyzing module 1039 to calculate the loss functions.
When the loading disaggregation system 10 is at the testing stage STG2, the disaggregation evaluation modules 1051 receive the respective preprocessed data ppDATa, ppDATb, ppDATc and ppDATd from the preprocessing module 1011b, and receive the synthesized simulation data doutPa, doutPb, doutPc and doutPd from the time series-analyzing module 1039. Afterwards, the disaggregation evaluation modules 1051 compare the respective preprocessed data ppDATa, ppDATb, ppDATc and ppDATd received from the preprocessing module 1011b with the synthesized simulation data doutPa, doutPb, doutPc and doutPd received from the time series-analyzing module 1039 to calculate the accuracy. The disaggregation evaluation modules 1051 may use any known criterion to judge the similarity, so as to meet different requirements and application conditions.
Please refer to
Please refer to
The model building device 103 receives the total aggregated data ainPm and generates the synthesized simulation data doutPa, doutPb, doutPc, and doutPd according to the total aggregated data ainPm. Then, the synthesized simulation data doutPa, doutPb, doutPc, and doutPd are transmitted to the model evaluation device 105. The model evaluation device 105 compares the synthesized simulation data doutPa, doutPb, doutPc, and doutPd with the respective verification data dvrPa, dvrPb, dvrPc and dvrPd in a one-to-one manner (that is, doutPa vs. dvrPa, doutPb vs. dvrPb, doutPc vs. dvrPc and doutPd vs. dvrPd). Then the comparison results about the similarity cmp(doutPa, dvrPa), cmp(doutPb, dvrPb), cmp(doutPc, dvrPc) and cmp(doutPd, dvrPd) are transmitted to the model building device 103 for adjusting the parameters.
Please refer to
After the model building device 103 generates the synthesized simulation data doutPa, doutPb, doutPc and doutPd according to the total aggregated data ainPm, the model evaluation device 105 takes the j-th set of the respective augmented data augDATa, augDATb, augDATc and augDATd in the i-th batch in the training dataset trnDSET as the respective verification data dvrPa, dvrPb, dvrPc and dvrPd, which are then compared with the synthesized simulation data doutPa, doutPb, doutPc and doutPd generated according to the total aggregated data ainPm, to calculate the loss functions (step S307). Subsequently, it is determined whether the value of j is equal to the value of J (step S308). In other words, the loading disaggregation system 10 determines whether all of the respective verification data dvrPa, dvrPb, dvrPc and dvrPd in the current batch have been completely compared with the synthesized simulation data doutPa, doutPb, doutPc, and doutPd. If the decision in step S308 is “no,” the respective counter (j counter) is incremented by one (step S315) and goes back to step S305.
If the decision in step S308 is “yes,” it represents that all of the J sets of data in the current batch have been used for the verification. At this time, the auxiliary parameter-updating module 1031b updates the auxiliary parameters according to the J sets (twenty sets in this example) of calculated loss functions in the i-th batch (step S309). Afterwards, it is determined whether the value of i is equal to the value of I (step S311). In other words, the loading disaggregation system 10 determines whether all of the respective verification data dvrPa, dvrPb, dvrPc and dvrPd in all batches have been completely compared with the synthesized simulation data doutPa, doutPb, doutPc, and doutPd.
If the decision in step S311 is “no,” the batch counter (i counter) is incremented by one and the respective counter (j counter) is initialized (step S317). Then, the operation goes back to step S305. If the decision in step S311 is “yes,” the loading disaggregation system 10 determines whether the comparison results achieve convergence (step S312). If the operation goes from step S303 to step S312 successfully, it represents that all the augmented data in the training dataset trnDSET have been inputted to the disaggregation model.
If the decision in step S312 is “yes,” the operation ends. If the decision in step S312 is “no,” the structure parameter-updating module 1031a will execute step S303 again after adjusting the structure parameters (for example, learning rate) (step S314). As the training of the disaggregation model progresses, the disaggregation effect becomes better and better. The present invention gradually decreases the learning rate according to the iteration number to decrease the change in the parameters. Besides, the data processing device 101 need not repetitively generate the training dataset trnDSET. Only the model building device 103 needs to repetitively execute the training stage STG1, and the model evaluation device 105 needs to repetitively evaluate the disaggregation model.
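The control flow of steps S303 to S317 described above can be sketched as follows. The `model` object, its method names and the convergence tolerance are hypothetical; only the nested batch/set loops, the per-batch auxiliary update and the learning-rate decay follow the text.

```python
def train_disaggregation_model(model, training_dataset, max_rounds=10):
    """Control-flow sketch of steps S303-S317: iterate over I batches of
    J sets each, update the auxiliary parameters once per batch, and
    decay the learning rate each round until the loss converges."""
    learning_rate = 1.0
    history = []                                        # mean loss per round
    for round_idx in range(max_rounds):                 # re-entry point of step S303
        round_losses = []
        for batch in training_dataset:                  # i = 1..I
            batch_losses = []
            for augmented_set in batch:                 # j = 1..J
                simulated = model.disaggregate(augmented_set["aggregated"])  # steps S305/S306
                loss = model.loss(simulated, augmented_set["reference"])     # step S307
                batch_losses.append(loss)
            model.update_auxiliary(batch_losses, learning_rate)              # step S309
            round_losses.extend(batch_losses)
        history.append(sum(round_losses) / len(round_losses))
        if len(history) >= 2 and abs(history[-1] - history[-2]) < 1e-4:      # step S312
            break                                       # converged: training ends
        learning_rate *= 0.5                            # step S314: decrease the learning rate
    return history
```

Note that only the model is retrained between rounds; the training dataset trnDSET is generated once and reused, matching the text.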
Please refer to
The usage pattern-analyzing module 1035 includes a time-frequency detection module 1035a and an edge detection module 1035b, both adopting CNN architecture. Although the time-frequency detection module 1035a and the edge detection module 1035b receive the same total aggregated data ainPm, they focus on different patterns of the total aggregated data ainPm. The time-frequency detection module 1035a uses the time-frequency information to analyze the total aggregated data ainPm to evaluate the temporal profiles of usage of the electrical appliances A, B, C and D. Furthermore, when one or more of the electrical appliances A, B, C, and D is powered on or off, positive edge triggering or negative edge triggering occurs in the total aggregated data ainPm. Accordingly, the edge detection module 1035b detects the positive edge triggering and the negative edge triggering from the total aggregated data ainPm.
The time-frequency detection module 1035a includes time-frequency detectors 1034a, time-frequency detectors 1034b, time-frequency detectors 1034c and time-frequency detectors 1034d corresponding to the electrical appliances A, B, C, and D, respectively.
The quantities of the time-frequency detectors 1034a, 1034b, 1034c and 1034d corresponding to individual electrical appliances A, B, C, and D may or may not be equal to each other. For example, there are x1 time-frequency detectors 1034a (FLTa[1]˜FLTa[x1]) corresponding to the electrical appliance A; there are x2 time-frequency detectors 1034b (FLTb[1]˜FLTb[x2]) corresponding to the electrical appliance B; there are x3 time-frequency detectors 1034c (FLTc[1]˜FLTc[x3]) corresponding to the electrical appliance C; and there are x4 time-frequency detectors 1034d (FLTd[1]˜FLTd[x4]) corresponding to the electrical appliance D.
The detection results of the time-frequency detectors 1034a, 1034b, 1034c and 1034d may be affected by the receptive field size of CNN, the time threshold and the power threshold Pth defined for the work time of respective electrical appliances, the total meter waveforms Wm and the respective meter waveforms Wa, Wb, Wc, and Wd. Even the time-frequency detectors 1034a corresponding to the same electrical appliance A may use different time-frequency filter parameters.
The time-frequency detectors corresponding to the same electrical appliance do not generate exactly the same detection results after analyzing the same total aggregated data ainPm because of different time-frequency filter parameters. For example, the time-frequency detector FLTa[1] analyzes the total aggregated data ainPm and generates Na1 entries of time-frequency usage pattern information Dta1 corresponding to the electrical appliance A, and the time-frequency detector FLTa[x1] analyzes the total aggregated data ainPm and generates Nax1 entries of time-frequency usage pattern information Dtax1 corresponding to the electrical appliance A, wherein Na1 may be equal to or different from Nax1. Accordingly, the time-frequency detectors FLTa[1]˜FLTa[x1] corresponding to the electrical appliance A will generate (Na1+Na2+Na3 . . . Nax1) entries of time-frequency usage pattern information DTa1˜DTax1 corresponding to the electrical appliance A. Similarly, the time-frequency detectors corresponding to the electrical appliances B, C and D analyze the total aggregated data ainPm and generate different entries (Nb1˜Nbx2, Nc1˜Ncx3, Nd1˜Ndx4) of time-frequency usage pattern information (DTb1˜DTbx2, DTc1˜DTcx3, DTd1˜DTdx4).
The time-frequency detectors 1034a, 1034b, 1034c and 1034d can set the corresponding time-frequency parameters, but the quantity and values of the time-frequency parameters are not limited. For example, the parameters for the time-frequency filter may include the size of the receptive field Tscan, the length of the time domain parameter Tflt for the usage duration of at least one electrical appliance, and the power threshold Pth of at least one electrical appliance. By setting the stride parameter, the time-frequency detectors 1034a, 1034b, 1034c and 1034d can scan the total aggregated data ainPm over all timestamps or over fewer timestamps.
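A single time-frequency detector's scan over the total aggregated data can be illustrated with a simplified moving-window detector. The real detectors are CNN filters with trained parameters; the mean-power test and the default parameter values here are assumptions for illustration only.

```python
import numpy as np

def time_frequency_detect(aggregated, t_scan=5, p_th=0.5, stride=1):
    """Sketch of one time-frequency detector: slide a receptive field of
    length t_scan over the aggregated power samples (optionally skipping
    timestamps via `stride`) and mark windows whose mean power exceeds
    the power threshold p_th as active usage."""
    x = np.asarray(aggregated, dtype=float)
    half = t_scan // 2
    pattern = []
    for center in range(half, len(x) - half, stride):   # centers covered by the scan
        window = x[center - half : center + half + 1]   # receptive field around the center
        pattern.append(1 if window.mean() > p_th else 0)
    return pattern
```

A larger `stride` yields fewer entries of usage pattern information, which is why detectors with different parameters generate different numbers of entries (Na1, Na2, and so on).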
Please refer to
Please refer to
The edge detector 1036 centers each timestamp t1˜t60 with the predetermined receptive field Tscan to analyze the total aggregated data ainPm. The edge detector 1036 performs detection for the timestamps t1˜t60 in turn, and generates the edge usage pattern information. In this diagram, the length of the receptive field Tscan of the edge detector 1036 is three. For example, when the edge detector 1036 centers the timestamp t2, the adjacent timestamps t1 and t3 are covered by the edge detector 1036. The edge detectors 1036 may have different detection conditions to detect the edge usage pattern information of the total aggregated data ainPm, including positive edge triggering DEp, negative edge triggering DEn and steady state DEs. After the edge detector 1036 performs the detection for the timestamps t1˜t60 to analyze the total aggregated data ainPm, it is realized that Np timestamps, Nn timestamps, and Ns timestamps are positive edge triggered, negative edge triggered and steady, respectively.
Since each edge detector 1036 performs detection for sixty timestamps in one unit processing period Tunit, each edge detector 1036 generates sixty entries of edge usage pattern information (Np+Nn+Ns=60). The edge detectors 1036 have different detection conditions wherein the receptive field Tscan is adjustable. Therefore, when different edge detectors 1036 perform detection for the same timestamp of the total aggregated data ainPm, the detection results are not exactly the same.
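The per-timestamp classification performed by an edge detector 1036 can be sketched as follows, assuming a receptive field of three timestamps as in the example; the step threshold `delta` is an illustrative stand-in for a detection condition.

```python
def edge_detect(aggregated, t_scan=3, delta=0.5):
    """Sketch of one edge detector 1036: center a receptive field of
    length t_scan on each timestamp and classify it as positive edge
    triggering ('DEp'), negative edge triggering ('DEn') or steady
    state ('DEs')."""
    half = t_scan // 2
    labels = []
    for t in range(len(aggregated)):
        lo = max(0, t - half)                        # clip the window at the borders
        hi = min(len(aggregated) - 1, t + half)
        step = aggregated[hi] - aggregated[lo]       # power change across the window
        if step > delta:
            labels.append("DEp")                     # power rises: an appliance turned on
        elif step < -delta:
            labels.append("DEn")                     # power drops: an appliance turned off
        else:
            labels.append("DEs")                     # no significant change
    return labels
```

One label is produced per timestamp, so over a unit processing period of sixty timestamps the counts satisfy Np+Nn+Ns=60, as stated above.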
In the usage pattern-analyzing module 1035, the quantity of the time-frequency detectors in one time-frequency detection module 1035a and the quantity of the edge detectors 1036 in one edge detection module 1035b are structure parameters, and are fixed during the training stage STG1. On the other hand, the detection conditions of the time-frequency detectors and the edge detectors are auxiliary parameters, and can be updated by the auxiliary parameter-updating module 1031b during the training stage STG1.
As described above, the time-frequency detection module 1035a generates (Na1+Na2+Na3 . . . Nax1) entries of time-frequency usage pattern information corresponding to the electrical appliance A, (Nb1+Nb2+Nb3 . . . Nbx2) entries of time-frequency usage pattern information corresponding to the electrical appliance B, (Nc1+Nc2+Nc3 . . . Ncx3) entries of time-frequency usage pattern information corresponding to the electrical appliance C and (Nd1+Nd2+Nd3 . . . Ndx4) entries of time-frequency usage pattern information corresponding to the electrical appliance D. On the other hand, the number of entries of the edge usage pattern information generated by the edge detection module 1035b is sixty times of the number of the edge detectors 1036. As a result, the number of the data entries inputted into the information mapping module 1037 is equal to (Na1+ . . . +Nax1)+(Nb1+ . . . +Nbx2)+(Nc1+ . . . +Ncx3)+(Nd1+ . . . Ndx4)+60*(the number of edge detectors).
According to the embodiment of the present invention, the information mapping module 1037 adopts the fully connected layer in deep neural network (DNN) architecture. The time-frequency usage pattern information and the edge usage pattern information serve as input neurons of the information mapping module 1037. The information mapping module 1037 maps (embeds) the input neurons to multiple mapping dimensions. Each mapping dimension undergoes dimensionality reduction to form an encoded data group. The number of the mapping dimensions should be a multiple of the number of timestamps included in one unit processing period Tunit. For example, if the unit processing period Tunit is one hour and the interval between the timestamps is one minute, the number of the mapping dimensions is a multiple of sixty. In addition, the number of the mapping dimensions is less than the total number of the entries of the time-frequency usage pattern information and the edge usage pattern information.
Please refer to
For the information mapping module 1037, the number of input neurons, the number of output neurons, the presence or absence of internal neurons, the number of layers of internal neurons and the number of the internal neurons are structure parameters. On the other hand, the weights of connections between the input/output/internal neurons are auxiliary parameters which can be updated at the training stage STG1. The output neurons 41b of the information mapping module 1037 are considered as encoded data, which are then reshaped by the encoded data-reshaping module 1038. For example, the encoded data are reshaped to form sixty timestamp encoded data groups corresponding to the sixty timestamps.
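The mapping and reshaping described above can be sketched as a single fully connected layer followed by a reshape. The layer sizes and the random stand-in weights are assumptions; a trained module would supply the connection weights as auxiliary parameters.

```python
import numpy as np

def map_and_reshape(pattern_info, n_timestamps=60, dims_per_timestamp=2, seed=0):
    """Sketch of the information mapping module 1037 and the encoded
    data-reshaping module 1038: a fully connected layer embeds the usage
    pattern information into n_timestamps * dims_per_timestamp mapping
    dimensions (a multiple of sixty, fewer than the input entries), and
    the encoded data are reshaped into one group per timestamp."""
    x = np.asarray(pattern_info, dtype=float)
    n_out = n_timestamps * dims_per_timestamp
    assert n_out < len(x), "mapping dimensions must be fewer than the input entries"
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((len(x), n_out))  # connection weights (auxiliary parameters)
    encoded = np.tanh(x @ weights)                  # encoded data (output neurons 41b)
    return encoded.reshape(n_timestamps, dims_per_timestamp)  # timestamp encoded data groups
```

The reshaped output has one row per timestamp, ready to be consumed by the time series-analyzing module.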
In the embodiment, the time series-analyzing module 1039 has bidirectional long short-term memory (BLSTM) architecture and includes a correlation analyzing module 1039b, a basis waveform-generating module 1039c and a data synthesizing module 1039d. The correlation analyzing module 1039b and the basis waveform-generating module 1039c do not process data related to respective electrical appliances and can be considered as shared layers. On the other hand, the data synthesizing module 1039d respectively processes the data corresponding to the electrical appliances A, B, C, and D, and can be considered as the branch layer.
Please refer to
The correlation analyzing module 1039b uses long short-term memory (LSTM) neurons 90p and 90f to process the encoded data 71b included in each timestamp encoded data group 81 to generate two first-layer time series (first-layer past-time series 901p and first-layer future-time series 901f).
Taking the timestamp t1 as an example, the correlation analyzing module 1039b takes the timestamp encoded data group 81 corresponding to the timestamp t1 as input data for the first-layer past LSTM neuron LSTM1p(t1) and the first-layer future LSTM neuron LSTM1f(t1) corresponding to the timestamp t1. The input data for the first-layer past/future LSTM neurons corresponding to other timestamps are obtained in a similar way, and are not repeated herein.
In this diagram, the first-layer past LSTM neurons LSTM1p(t1)˜LSTM1p(t60) collectively form the first-layer past-time series 901p, and the first-layer future LSTM neurons LSTM1f(t1)˜LSTM1f(t60) collectively form the first-layer future-time series 901f. Then, the correlation analyzing module 1039b uses the first-layer past LSTM neurons LSTM1p(t1)˜LSTM1p(t60) to generate Nr1 past-time correlation sequences 94p, and uses the first-layer future LSTM neurons LSTM1f(t1)˜LSTM1f(t60) to generate Nr1 future-time correlation sequences 94f.
Afterwards, the correlation analyzing module 1039b arranges sixty past-time correlation groups 921, 922, 923 . . . 92x side by side in the sequence of the timestamps t1˜t60. The columns enclosed with dotted frame represent the past-time correlation groups. Each of the past-time correlation groups 921, 922, 923 . . . 92x consists of Nr1 past-time correlation data 91p. One entry of past-time correlation data 91p is selected from each past-time correlation group 921, 922, 923 . . . 92x to form one past-time correlation sequence 94p, so as to generate Nr1 past-time correlation sequences 94p in total. The lower left portion of
The correlation analyzing module 1039b uses the first-layer future LSTM neurons LSTM1f(t1)˜LSTM1f(t60) to generate sixty future-time correlation groups 931, 932, 933 . . . 93x corresponding to the timestamps t1˜t60. Each of the future-time correlation groups 931, 932, 933 . . . 93x consists of Nr1 future-time correlation data 91f. The columns enclosed with dotted frame represent the future-time correlation groups.
The sixty future-time correlation groups 931, 932, 933 . . . 93x are arranged side by side in the sequence of the timestamps t1˜t60. Then, one entry of future-time correlation data 91f is selected from each future-time correlation group 931, 932, 933 . . . 93x to form one future-time correlation sequence 94f, so as to generate Nr1 future-time correlation sequences 94f in total. The lower right portion of
The correlation analyzing module 1039b transmits the Nr1 past-time correlation sequences 94p and the Nr1 future-time correlation sequences 94f to the basis waveform-generating module 1039c as the input of the basis waveform-generating module 1039c.
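The bidirectional processing described above can be sketched as follows. A plain leaky recurrence replaces the LSTM cells and the weights are random stand-ins; only the past/future directionality and the per-timestamp shapes of the Nr1 correlation sequences follow the text.

```python
import numpy as np

def bidirectional_correlation(timestamp_groups, n_seq=4, seed=0):
    """Directional sketch of the correlation analyzing module 1039b: a
    forward recurrence over the timestamp encoded data groups yields the
    past-time correlation sequences 94p, and the same recurrence run
    over the reversed groups yields the future-time correlation
    sequences 94f (here n_seq plays the role of Nr1)."""
    x = np.asarray(timestamp_groups, dtype=float)   # shape: (60, dims)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[1], n_seq))    # input projection (stand-in weights)

    def run(sequence):
        state = np.zeros(n_seq)
        states = []
        for step in sequence:                       # one recurrent step per timestamp
            state = 0.9 * state + np.tanh(step @ w) # leaky accumulation of history
            states.append(state.copy())
        return np.array(states)                     # shape: (60, n_seq)

    past = run(x)                                   # past-time correlation sequences 94p
    future = run(x[::-1])[::-1]                     # future-time correlation sequences 94f
    return past, future
```

Each column of the returned arrays corresponds to one correlation sequence; each row corresponds to one of the sixty timestamps.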
Please refer to
The basis waveform-generating module 1039c arranges the past-time correlation data 91p and the future-time correlation data 91f in the sequence of the timestamps t1˜t60, and then defines time correlation groups 941, 942, 943 . . . 94x in the sequence of the timestamps t1˜t60. Afterwards, the basis waveform-generating module 1039c analyzes and processes the time correlation groups 941, 942, 943 . . . 94x by BLSTM to generate Nr2 past-time waveforms and Nr2 future-time waveforms. In this embodiment, the past-time waveforms and the future-time waveforms are collectively called common basis waveforms. Therefore, the data synthesizing module 1039d retrieves Nr2*2 common basis waveforms from the basis waveform-generating module 1039c.
For illustration purposes only, six common basis waveforms 991, 992, 993, 994, 995 and 996 are shown in
Similarly, the waveform selecting module 1038b and the waveform synthesizing module 1036b can generate the synthesized simulation data doutPb; the waveform selecting module 1038c and the waveform synthesizing module 1036c can generate the synthesized simulation data doutPc, and the waveform selecting module 1038d and the waveform synthesizing module 1036d can generate the synthesized simulation data doutPd.
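The selection and synthesis of the common basis waveforms by one appliance branch can be sketched as a weighted sum; the per-waveform selection weights are hypothetical stand-ins for what a trained branch layer would supply.

```python
import numpy as np

def synthesize_appliance_waveform(common_basis, selection_weights):
    """Sketch of one waveform selecting / waveform synthesizing pair:
    the appliance-specific branch weights the Nr2*2 common basis
    waveforms and sums them into the synthesized simulation data for
    that appliance (e.g., doutPa)."""
    basis = np.asarray(common_basis, dtype=float)       # shape: (n_waveforms, n_timestamps)
    w = np.asarray(selection_weights, dtype=float)      # one weight per common basis waveform
    return w @ basis                                    # weighted sum over the basis waveforms
```

Each of the four branches (appliances A, B, C and D) would apply its own weights to the same shared basis, which is why the basis waveforms are called common.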
The method with reference to
According to the concept of the present invention, at first, the loading disaggregation system 10 builds the disaggregation model in the model building mode M1. After the disaggregation model is built, the disaggregation model needs only the total raw data outputted from the total electricity meter of the ordinary electricity user to generate the synthesized simulation data of respective electrical appliances of the ordinary user. In other words, it is not necessary to install respective electricity meters in the houses of the ordinary users, and the usage behavior of a specific electrical appliance can still be accurately estimated. The analysis method can be applied to a large number of residences or electricity users to analyze the electricity usage behavior of the electrical appliances at low cost, by collecting the total raw data outputted from the total electricity meters in connection with the ordinary electricity users.
While the invention has been described by way of example and in terms of the preferred embodiment(s), it is to be understood that the invention is not limited thereto. On the contrary, it is intended to cover various modifications and similar arrangements and procedures, and the scope of the appended claims therefore should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements and procedures.
Number | Date | Country | Kind |
---|---|---|---|
107142078 | Nov 2018 | TW | national |
Number | Name | Date | Kind |
---|---|---|---|
9104189 | Berges Gonzalez | Aug 2015 | B2 |
9190844 | Tran | Nov 2015 | B2 |
20120290230 | Berges Gonzalez | Nov 2012 | A1 |
Number | Date | Country |
---|---|---|
WO-2013145779 | Oct 2013 | WO |
Entry |
---|
Kelly, Daniel. Disaggregation of domestic smart meter energy data. Diss. Imperial College London (Year: 2016). |
Liang, Jian, et al. “Load signature study—Part II: Disaggregation framework, simulation, and applications.” IEEE Transactions on Power Delivery 25.2 (Year: 2009). |
Filippi, Alessio, et al. “Multi-appliance power disaggregation: An approach to energy monitoring.” 2010 IEEE International Energy Conference. IEEE (Year: 2010). |
Wang et al., Nonintrusive load monitoring based on deep learning. In International Workshop on Data Analytics for Renewable Energy Integration (Year: 2018). |
Number | Date | Country | |
---|---|---|---|
20200167644 A1 | May 2020 | US |