The invention relates to a method and apparatus for detecting signal features of a measurement signal and, in particular, to a method and apparatus for unsupervised signal characterization using autoencoding.
When recording large amounts of data, in particular measurement data, either as a single long acquisition or as a large number of short acquisitions (or anywhere in between), it can be a very time-consuming task to process the recorded data to extract useful information about the measurement signal. Conventional signal analysis can use search functions or apply a mask trigger to historic data in order to filter out particular properties of acquired waveforms of the measurement signal. In general, predefined signal features such as signal integrity defects can be detected. Further, it is possible to rely on training data and knowledge about expected important features. These conventional signal analyzing methods require some form of a priori knowledge or input specifying what to look for in an acquired measurement signal. However, in many use cases such a priori knowledge is not available. Accordingly, there is a need to provide a method and system which make it possible to detect signal features within a measurement signal without any prior knowledge about expected signal features.
The invention provides according to a first aspect a method for detecting signal features of a measurement signal comprising the steps of: providing at least two separate data sections of the measurement signal, determining a measurement parameter vector for each provided data section of the measurement signal, and processing the measurement parameter vectors as input data by a trained autoencoder neural network to detect signal features of the measurement signal.
In a possible embodiment of the method according to the first aspect of the present invention, an encoded vector including a number of characteristic signal features of the measurement signal is derived from a middle layer of said trained autoencoder neural network.
In a further possible embodiment of the method according to the first aspect of the present invention, a dimension of an output encoded vector of said autoencoder neural network comprising a number of characteristic signal features derived from the middle layer of said autoencoder neural network is reduced to provide a vector in a feature space with lower dimension.
In a further possible embodiment of the method according to the first aspect of the present invention, the measurement signal is derived by a measurement apparatus from a device under test.
In a further possible embodiment of the method according to the first aspect of the present invention, characteristics of data sections of the measurement signal represented in the low dimensional feature space are displayed on a screen of a user interface of said measurement apparatus.
In a further possible embodiment of the method according to the first aspect of the present invention, data sections of the measurement signal comprising similar signal features in the low dimensional feature space are clustered to determine feature areas of similar data sections displayed on the screen of the user interface of said measurement apparatus.
In a further possible embodiment of the method according to the first aspect of the present invention, a first kind of label is assigned to each determined feature area.
In a still further possible embodiment of the method according to the first aspect of the present invention, a second kind of label is assigned to the data section of the measurement signal based on the at least one determined feature area.
In a further possible embodiment of the method according to the first aspect of the present invention, the measurement signal comprises an analog measurement signal acquired by a probe of a measurement apparatus, digitized and stored in an acquisition memory of the measurement apparatus.
In a further possible embodiment of the method according to the first aspect of the present invention, the digital measurement signal stored in the acquisition memory of said measurement apparatus is segmented into data sections which are processed to determine a measurement parameter vector for each data section.
In a still further possible embodiment of the method according to the first aspect of the present invention, the autoencoder neural network is trained in an unsupervised machine learning process on the basis of applied measurement parameter vectors.
The invention further provides according to a second aspect a measurement apparatus comprising
In a possible embodiment of the measurement apparatus according to the second aspect of the present invention, the measurement apparatus comprises at least one probe to derive at least one analog measurement signal from a device under test, the analog measurement signal being applied to an analog-to-digital converter adapted to convert the analog measurement signal into a digital signal comprising data sections stored in the acquisition memory of said measurement apparatus.
In a possible embodiment of the measurement apparatus according to the second aspect of the present invention, the trained autoencoder neural network comprises a variational autoencoder neural network.
In a further possible embodiment of the measurement apparatus according to the second aspect of the present invention, the trained autoencoder neural network comprises an adversarial autoencoder neural network.
In a further possible embodiment of the measurement apparatus according to the second aspect of the present invention, the measurement apparatus comprises a signal analyzer, in particular an oscilloscope.
The invention further provides according to a further aspect a computer-implemented method for signal feature detection within at least one measurement signal, wherein separate data sections of the measurement signal are processed to calculate associated measurement parameter vectors which are applied as input data to a trained autoencoder neural network to extract characteristic signal features of the measurement signal.
The invention further provides according to a further aspect a computer-implemented software tool for signal feature detection in one or more measurement signals by processing separate data sections of each measurement signal to calculate associated measurement parameter vectors applied as input data to a trained autoencoder neural network to extract characteristic signal features of the measurement signals.
The invention provides according to a further aspect a computer program product which stores the computer-implemented software tool.
In the following, possible embodiments of the different aspects of the present invention are described in more detail with reference to the enclosed figures.
The autoencoder neural network 4 has been trained based on n measurement parameter vectors. In a possible embodiment, at least n data sections of the acquired measurement signal are stored in the acquisition memory 2, wherein n ≥ 2. A data section can comprise a segment or portion of a long acquisition waveform representing the measurement signal. In a possible embodiment, a set of n measurement parameter vectors, one for each data section, is determined and applied to the autoencoder neural network 4 for training purposes. The autoencoder neural network 4 can be trained in a training phase in a machine learning process.
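The following is a minimal Python sketch of one possible way to obtain the n measurement parameter vectors from a stored acquisition. The choice of measurement parameters (mean, RMS value, peak-to-peak amplitude, standard deviation) and the helper names are illustrative assumptions; the description does not prescribe which parameters form the vector v.

```python
import numpy as np

def segment_acquisition(samples, n_sections):
    """Split a long acquisition into n equal-length data sections."""
    usable = len(samples) - (len(samples) % n_sections)
    return np.split(samples[:usable], n_sections)

def measurement_parameter_vector(section):
    """Compute an (assumed) measurement parameter vector v for one data section."""
    return np.array([
        section.mean(),                   # mean value
        np.sqrt(np.mean(section ** 2)),   # RMS value
        section.max() - section.min(),    # peak-to-peak amplitude
        section.std(),                    # standard deviation
    ])

# Example: n = 1000 data sections from a simulated acquisition memory content
acquisition = np.random.randn(1_000_000)
sections = segment_acquisition(acquisition, n_sections=1000)
V = np.stack([measurement_parameter_vector(s) for s in sections])  # shape (n, 4)
```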
After training and testing have been completed, the trained autoencoder neural network 4 of the measurement apparatus 1 is adapted to calculate an encoded vector h with characteristic signal features for each newly acquired measurement signal measured by the measurement apparatus 1. Once the autoencoder neural network 4 has been trained, the function values of the middle layer can be determined for each member of an input set. This can be used for visualization of core parameters of the measurement signal or as input to a clustering algorithm for automatic grouping. In a possible embodiment, a reconstruction error can be used to train the internal parameters of the different layers of the autoencoder neural network 4 such that the system learns the best possible compressed representation (encoded vector at the middle layer) of an applied measurement parameter vector v. Autoencoding involves training the different layers of the autoencoder neural network 4 to compress an input and then reconstruct the original input from the compressed version as accurately as possible. The reconstruction essentially forces the autoencoder to learn the best possible compression, in other words the most important features. Autoencoding is completely data-dependent and requires no labeling of the input data. In a training phase, the autoencoder neural network 4 learns the main features of an applied input dataset by adapting its neural network parameters. For each member of the input set, the values of each of these features can be determined, e.g. by applying a compression stage to each member of the input set. The parameters can either be visualized (e.g. with a scatter plot) or clustering algorithms can be applied to automatically determine groups having similar parameters. Consequently, groups with similar parameters become automatically apparent, i.e. visible to a user. Further, rare events within the measurement signal also become automatically evident to the operator.
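A minimal sketch of this training scheme is given below, assuming a small fully connected autoencoder trained with a mean-squared reconstruction error in PyTorch. The layer sizes, the feature dimension k, the learning rate and the number of epochs are illustrative assumptions rather than values taken from the description.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Fully connected autoencoder; the middle layer yields the encoded vector h."""
    def __init__(self, in_dim=4, k=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, k))
        self.decoder = nn.Sequential(nn.Linear(k, 16), nn.ReLU(), nn.Linear(16, in_dim))

    def forward(self, v):
        h = self.encoder(v)           # compressed representation (middle layer)
        return self.decoder(h), h     # reconstruction and encoded vector h

# V: tensor of n measurement parameter vectors, shape (n, in_dim)
V = torch.randn(1000, 4)              # placeholder for the real parameter vectors
model = Autoencoder(in_dim=4, k=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                # reconstruction error

for epoch in range(200):              # unsupervised training: no labels required
    reconstruction, _ = model(V)
    loss = loss_fn(reconstruction, V)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    _, H = model(V)                   # encoded vectors h for every data section
```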
In a preferred embodiment, a dimension of the encoded vector h output by the trained autoencoder neural network 4 and comprising a number of characteristic signal features is reduced automatically to provide a vector h′ in a feature space with lower dimension. This can be achieved in a possible embodiment by a t-distributed Stochastic Neighbor Embedding (t-SNE) process. In a possible embodiment, n measurement parameter vectors v are applied to the trained autoencoder neural network 4 to determine k signal features for each data section. In a possible embodiment, the dimension of the k signal features is reduced to an l-dimensional feature space (wherein l<k). In a possible embodiment, the l-dimensional feature space is displayed on a screen of a user interface 11 of the measurement apparatus 1 as also shown in
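One possible realization of this dimension reduction is sketched below with the t-SNE implementation of scikit-learn; the target dimension l = 2 and the t-SNE settings are assumptions chosen so that the result can be shown as a scatter plot on the screen.

```python
import numpy as np
from sklearn.manifold import TSNE

# H: array of n encoded vectors h with k signal features each, shape (n, k)
H = np.random.randn(1000, 8)                 # placeholder for the real encoded vectors

tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
H_low = tsne.fit_transform(H)                # vectors h' in an l = 2 dimensional feature space

# H_low[:, 0] and H_low[:, 1] can be rendered as a scatter plot on the user interface
```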
Characteristics of data sections of the measurement signal represented in the low dimensional feature space are displayed on the screen of the user interface 11 of the measurement apparatus 1, e.g. an oscilloscope or signal analyzer. In a possible embodiment, the data sections of the measurement signal comprising similar signal features in the low dimensional feature space are clustered to determine feature areas of similar data sections displayed on the screen of the user interface 11 of the measurement apparatus 1. In a possible embodiment, a first kind of label can be assigned to each determined feature area. Further, a second kind of label can be assigned to the data section of the measurement signal based on the at least one determined feature area. The trained autoencoder neural network 4 of the measurement apparatus 1 can be used to classify an input measurement signal derived from a device under test DUT. Accordingly, the autoencoder neural network 4 can be used to classify any new measurement signal whose features are not known, which could lead to results in the l-dimensional feature space lying either inside or outside the known feature areas. An outside case could mean that a new training of the autoencoder neural network 4 becomes necessary. In a possible embodiment, the measurement apparatus 1 as illustrated in
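A minimal sketch of such a clustering and labeling step is given below, using DBSCAN as one possible clustering algorithm; the description does not name a specific algorithm, and the parameter values and label names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# H_low: vectors h' of the data sections in the low dimensional feature space, shape (n, 2)
H_low = np.random.randn(1000, 2)                      # placeholder for real data

cluster_ids = DBSCAN(eps=0.5, min_samples=10).fit_predict(H_low)

# First kind of label: one label per determined feature area (cluster);
# the id -1 marks points outside all areas, e.g. rare events
feature_areas = {cid: f"area_{cid}" for cid in set(cluster_ids) if cid != -1}

# Second kind of label: each data section receives the label of the feature area it falls into
section_labels = [feature_areas.get(cid, "outside_known_areas") for cid in cluster_ids]
```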
In a first step S1, at least two separate data sections of a measurement signal are provided.
In a further step S2, a measurement parameter vector v for each provided data section of the measurement signal is determined or calculated.
In a third step S3, the measurement parameter vectors v are processed as input data by a trained autoencoder neural network 4 to detect signal features of the measurement signal.
In a further step (not illustrated in
The method illustrated in
In a possible embodiment, the autoencoder neural network 4 can comprise a variational autoencoder neural network 4 as illustrated in
In a further possible embodiment, the autoencoder neural network 4 can comprise an adversarial autoencoder architecture as illustrated in
Different kinds of autoencoder neural networks 4 can be implemented in the measurement apparatus 1 for different applications.
In a further possible embodiment, the measurement apparatus 1 can switch between different types of autoencoder neural networks 4 trained for different purposes. These trained autoencoder neural networks 4 can be connected in parallel between the processing stage 9 and the processing stage 10 of the measurement apparatus 1 illustrated in
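A minimal software sketch of such switching is given below, assuming a simple registry that maps a measurement purpose to the corresponding trained network; the purpose names and the select_autoencoder helper are hypothetical and not taken from the description.

```python
import torch.nn as nn

def make_autoencoder(in_dim=4, k=2):
    """Stand-in for a trained autoencoder network (see the training sketch above)."""
    return nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, k),
                         nn.Linear(k, 16), nn.ReLU(), nn.Linear(16, in_dim))

# Hypothetical registry of autoencoder networks trained for different purposes;
# the purpose names are illustrative assumptions only.
trained_autoencoders = {
    "signal_integrity": make_autoencoder(),
    "protocol_bursts": make_autoencoder(),
}

def select_autoencoder(purpose: str) -> nn.Module:
    """Switch to the autoencoder network trained for the requested purpose."""
    return trained_autoencoders[purpose]
```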