The present invention relates to the field of neurotechnology, and particularly to a method and a system for matching music files with an electroencephalogram.
The natural world contains numerous information-rich symbol sequences, such as human languages, tones and noises defined by humans (e.g., music), and gene sequences and neural signals formed in natural processes. Different kinds of sequences may be matched to each other according to common elements.
Music is one of the most direct artistic forms for expressing human emotions, and it has an important influence on human emotions and their transformation. Research on the brain mechanisms underlying emotional responses to music has become a hotspot in fields such as cognitive neuroscience, pedagogy and psychology. As verified by prior research, music can affect human emotions, and this effect can be observed by electroencephalogram analysis. Furthermore, different types of music and different stimulation methods can induce different excitation modes in the human brain. Therefore, understanding how these modes relate to emotions, and using that understanding to choose music that relieves stress and promotes relaxation, is of great clinical importance.
Current research concentrates on the different influences that different types of music have on the human brain. However, the major disadvantage of the prior art is that music is selected roughly and subjectively. More specifically, the prior art cannot automatically select music files, in accordance with a person's real-time brain waves, to help the person achieve a desired brain state such as relaxation.
The technical problem to be solved by the present invention is that the prior art cannot automatically find music files matching human brain states in real time and thereby guide people to relieve stress and relax effectively.
In view of this, in a first aspect, the present invention provides a method for matching music files with an electroencephalogram. In step S1, a scaling index α is obtained in accordance with a measured electroencephalogram. In step S2, each music file in a preset music library is analyzed to obtain a long-range correlation index β. In step S3, a music file matching the electroencephalogram is searched out in accordance with the comparison of the scaling index α and the long-range correlation index β.
Preferably, the step S1 may comprise the following steps.
In step S11, the measured electroencephalogram is digitized to obtain a discrete-time signal sequence {xi, i=1, 2, . . . , N}, wherein xi is the ith sampling point of the electroencephalogram and N is the sampling size.
In step S12, the average amplitude x̄ of the discrete-time signal sequence {xi, i=1, 2, . . . , N} is filtered out to obtain a sequence {yi, i=1, 2, . . . , N}, wherein yi is defined by the following formula:
yi = xi − x̄,
wherein x̄ = (x1 + x2 + . . . + xN)/N.
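The mean removal of step S12 can be sketched in a few lines of Python; the function name `remove_mean` and the plain-list representation of the sampled signal are illustrative assumptions, not part of the claimed method:

```python
def remove_mean(x):
    """Step S12 (sketch): subtract the average amplitude from each sample."""
    x_bar = sum(x) / len(x)          # average amplitude of {x_i}
    return [xi - x_bar for xi in x]  # y_i = x_i - x_bar
```

By construction, the resulting sequence {yi} has zero mean.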
In step S13, the EMD (Empirical Mode Decomposition) is applied to the sequence {yi, i=1, 2, . . . , N} to obtain n intrinsic mode functions IMF and a remainder R, wherein n is a positive integer determined by the EMD.
In step S14, peak-peak intervals (the number of data points between each pair of neighboring local maxima) in each intrinsic mode function IMF are calculated.
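Step S14 can be illustrated with a short sketch. Treating a peak as a strict local maximum is an assumption here, since the method does not specify how plateaus are handled:

```python
def peak_intervals(imf):
    """Step S14 (sketch): distances (in data points) between neighboring local maxima."""
    peaks = [i for i in range(1, len(imf) - 1)
             if imf[i - 1] < imf[i] > imf[i + 1]]     # strict local maxima
    return [b - a for a, b in zip(peaks, peaks[1:])]  # peak-peak intervals
```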
In step S15, waveforms between peaks with peak-peak intervals within a first given range S are merged into a new waveform Pvalues(k), wherein 10^(m−1) ≤ s ≤ 10^m, m=1, 2, . . . , mmax, and mmax is determined by the length N of the sequence {yi, i=1, 2, . . . , N}, and k represents each data point of the merged waveform, wherein k=1, 2, . . . , kmax, and kmax is determined by the sum of all the peak-peak intervals within the first given range S.
In step S16, a root mean square of each merged waveform is calculated to obtain a wave function F.
wherein
F = √⟨(Pvalues(k))²⟩S,
and ⟨·⟩S represents calculating an average over the range S. With respect to different scale ranges S, F ∝ s^α, wherein ∝ represents a directly proportional or scaling relation between two quantities, and α is the scaling index.
In step S17, the scaling index α is obtained in accordance with F ∝ s^α.
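Since F ∝ s^α, the scaling index of step S17 is the slope of log F versus log s. A minimal least-squares sketch follows; the function name `scaling_index` is an assumption:

```python
import math

def scaling_index(scales, F):
    """Step S17 (sketch): slope of log F vs. log s by least-squares regression."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(f) for f in F]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # the exponent alpha
```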
Preferably, the step S2 further comprises the following steps.
In step S21, each music file in the music library is digitized to obtain a digital music signal sequence {Ui, i=1, 2, . . . , M}, wherein i is the ith time point of the digital music signal sequence, and M is the length of the digital music signal sequence.
In step S22, a sequence {vj, j=1, 2, . . . , M/(presetlength)} is obtained by dividing the digital music signal sequence {Ui, i=1, 2, . . . , M} into multiple sub-sequences with a preset length and calculating the standard deviation of each sub-sequence, wherein vj is the jth data of the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
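Step S22 can be sketched as follows, assuming M is an exact multiple of the preset length (any remainder is dropped in this sketch):

```python
from statistics import pstdev

def block_stds(u, preset_length):
    """Step S22 (sketch): standard deviation of each fixed-length sub-sequence."""
    return [pstdev(u[i:i + preset_length])
            for i in range(0, len(u) - preset_length + 1, preset_length)]
```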
In step S23, an average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)} is obtained in accordance with the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S24, a fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)} which is a one-dimensional random walk sequence is obtained in accordance with the average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)}, wherein zb is the bth data of the sequence {zb, b=1, 2, . . . , M/(presetlength)} which is defined by the following formula.
wherein
zb = Σj=1..b [(vj)² − ⟨v²⟩],
and ⟨v²⟩ represents the average of the average intensity sequence {(vj)², j=1, 2, . . . , M/(presetlength)}.
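A common construction for such a one-dimensional random-walk profile is the cumulative sum of deviations from the mean; the sketch below assumes that reading of step S24:

```python
def fluctuation_sequence(intensity):
    """Step S24 (sketch): cumulative sum of deviations from the mean intensity."""
    mean_i = sum(intensity) / len(intensity)
    z, walk = 0.0, []
    for v2 in intensity:
        z += v2 - mean_i  # z_b accumulates (v_j)^2 minus the mean intensity
        walk.append(z)
    return walk
```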
In step S25, multiple sub-sequences are obtained by shifting a time window with a preset width along the fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)}, wherein every two neighboring windows overlap by a fixed length τ.
In step S26, a linear trend ẑb of each sub-sequence is obtained by means of linear regression.
In step S27, a detrended fluctuation function FD(presetwidth) = √⟨(δz)²⟩ is obtained in accordance with the sequence {zb, b=1, 2, . . . , M/(presetlength)} and the linear trend of each sub-sequence, wherein δz = zb − ẑb, and ⟨(δz)²⟩ represents calculating the average of (δz)² in each time window.
In step S28, the long-range correlation index β is obtained in accordance with the detrended fluctuation function FD(presetwidth) by means of the following formula:
FD(presetwidth) ∝ (presetwidth)^β,
which represents the relation between the detrended fluctuation function and the time scale defined by the preset width of the time window; β is obtained as the slope of the corresponding log-log plot.
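Steps S25 through S27 can be sketched together: slide overlapping windows along {zb}, fit a linear trend in each, and take the root mean square of the residuals. How windows are handled at the tail of the sequence is an assumption of this sketch:

```python
import math

def detrended_fluctuation(z, width, overlap):
    """Steps S25-S27 (sketch): RMS of residuals after per-window linear detrending."""
    step = width - overlap            # windows advance by (width - tau)
    sq_residuals = []
    for start in range(0, len(z) - width + 1, step):
        win = z[start:start + width]
        xs = list(range(width))
        mx = sum(xs) / width
        my = sum(win) / width
        c = (sum((x - mx) * (y - my) for x, y in zip(xs, win))
             / sum((x - mx) ** 2 for x in xs))   # slope from linear regression
        a = my - c * mx                          # intercept
        sq_residuals.extend((y - (a + c * x)) ** 2 for x, y in zip(xs, win))
    return math.sqrt(sum(sq_residuals) / len(sq_residuals))  # F_D(width)
```

Repeating this for several window widths and fitting the slope of log FD versus log width yields β, analogous to how α is obtained in step S17.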
Preferably, the step S3 further comprises the following steps.
In step S31, γ is calculated in accordance with the scaling index α and the long-range correlation index β, wherein γ = |α−β|.
In step S32, if γ is within a second given range, the music file with a long-range correlation index β is matched with the electroencephalogram with a scaling index α.
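The comparison of step S3 can be sketched as a simple threshold test; the library structure (a list of name/β pairs) and the function name are hypothetical:

```python
def match_music(alpha, library, tolerance):
    """Steps S31-S32 (sketch): match tracks whose gamma = |alpha - beta| is small."""
    return [name for name, beta in library
            if abs(alpha - beta) <= tolerance]  # gamma within the second given range
```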
In a second aspect, the present invention provides a system for matching music files with an electroencephalogram which comprises an electroencephalogram scaling device, a music analysis device and a matching device. The electroencephalogram scaling device is configured to obtain a scaling index α in accordance with a measured electroencephalogram and to transmit the scaling index α to the matching device. The music analysis device is configured to analyze each music file in a preset music library to obtain a long-range correlation index β and to transmit β to the matching device. The matching device is configured to search out a music file matching with the electroencephalogram in accordance with the comparison of the scaling index α and the long-range correlation index β.
The system further comprises an electroencephalogram measuring device configured to measure an electroencephalogram and to transmit the electroencephalogram to the electroencephalogram scaling device.
Preferably, the electroencephalogram scaling device is configured to implement the following steps.
In step S11, the measured electroencephalogram is digitized to obtain a discrete-time signal sequence {xi, i=1, 2, . . . , N}, wherein xi is the ith sampling point of the electroencephalogram and N is the sampling size.
In step S12, the average amplitude x̄ of the discrete-time signal sequence {xi, i=1, 2, . . . , N} is filtered out to obtain a sequence {yi, i=1, 2, . . . , N}, wherein yi is defined by the following formula:
yi = xi − x̄,
wherein x̄ = (x1 + x2 + . . . + xN)/N.
In step S13, the EMD (Empirical Mode Decomposition) is applied to the sequence {yi, i=1, 2, . . . , N} to obtain n intrinsic mode functions IMF and a remainder R, wherein n is a positive integer determined by the EMD.
In step S14, peak-peak intervals (the number of data points between each pair of neighboring local maxima) in each intrinsic mode function IMF are calculated.
In step S15, waveforms between peaks with peak-peak intervals within a first given range S are merged into a new waveform Pvalues(k), wherein 10^(m−1) ≤ s ≤ 10^m, m=1, 2, . . . , mmax, and mmax is determined by the length N of the sequence {yi, i=1, 2, . . . , N}, and k represents each data point of the merged waveform, wherein k=1, 2, . . . , kmax, and kmax is determined by the sum of all the peak-peak intervals within the first given range S.
In step S16, a root mean square of each merged waveform is calculated to obtain a wave function F,
wherein
F = √⟨(Pvalues(k))²⟩S,
and ⟨·⟩S represents calculating an average over the range S. With respect to different scale ranges S, F ∝ s^α, wherein ∝ represents a directly proportional or scaling relation between two quantities, and α is the scaling index.
In step S17, the scaling index α is obtained in accordance with F ∝ s^α.
Preferably, the music analysis device is configured to implement the following steps.
In step S21, each music file in the music library is digitized to obtain a digital music signal sequence {Ui, i=1, 2, . . . , M}, wherein i is the ith time point of the digital music signal sequence, and M is the length of the digital music signal sequence.
In step S22, a sequence {vj, j=1, 2, . . . , M/(presetlength)} is obtained by dividing the digital music signal sequence {Ui, i=1, 2, . . . , M} into multiple sub-sequences with a preset length and calculating the standard deviation of each sub-sequence, wherein vj is the jth data of the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S23, an average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)} is obtained in accordance with the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S24, a fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)} which is a one-dimensional random walk sequence is obtained in accordance with the average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)}, wherein zb is the bth data of the sequence {zb, b=1, 2, . . . , M/(presetlength)} which is defined by the following formula.
wherein
zb = Σj=1..b [(vj)² − ⟨v²⟩],
and ⟨v²⟩ represents the average of the average intensity sequence {(vj)², j=1, 2, . . . , M/(presetlength)}.
In step S25, multiple sub-sequences are obtained by shifting a time window with a preset width along the fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)}, wherein every two neighboring windows overlap by a fixed length τ.
In step S26, a linear trend ẑb of each sub-sequence is obtained by means of linear regression.
In step S27, a detrended fluctuation function FD(presetwidth) = √⟨(δz)²⟩ is obtained in accordance with the sequence {zb, b=1, 2, . . . , M/(presetlength)} and the linear trend of each sub-sequence, wherein δz = zb − ẑb, and ⟨(δz)²⟩ represents calculating the average of (δz)² in each time window.
In step S28, the long-range correlation index β is obtained in accordance with the detrended fluctuation function FD(presetwidth) by means of the following formula:
FD(presetwidth) ∝ (presetwidth)^β,
which represents the relation between the detrended fluctuation function and the time scale defined by the preset width of the time window; β is obtained as the slope of the corresponding log-log plot.
Preferably, the matching device is configured to implement the following steps.
In step S31, γ is calculated in accordance with the scaling index α and the long-range correlation index β, wherein γ = |α−β|.
In step S32, if γ is within a second given range, the music file with a long-range correlation index β is matched with the electroencephalogram with a scaling index α.
The method and system for matching music files with an electroencephalogram in accordance with the present invention may select corresponding music files in accordance with different electroencephalograms. In other words, the method and system in accordance with the present invention may automatically find music files matching human brain states in real time, and thereby guide people to relieve stress and relax effectively.
For a better understanding of the objects, subject matter and advantages of the embodiments in accordance with the present invention, reference will now be made in detail to particular embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Obviously, the embodiments introduced below should be construed as illustrative only, not as restricting the scope. Those skilled in the art will understand that other embodiments obtained in accordance with the spirit of the present invention without exercising inventive skill also fall within the scope of the present invention.
A method for matching music files with an electroencephalogram is disclosed by the present embodiment, which comprises the following steps as illustrated in the accompanying drawing.
In step S1, a scaling index α is obtained in accordance with a measured electroencephalogram. In step S2, each music file in a preset music library is analyzed to obtain a long-range correlation index β. In step S3, a music file matching with the electroencephalogram is searched out in accordance with the comparison of the scaling index α and the long-range correlation index β.
Furthermore, the step S1 may comprise the following steps, which are not illustrated in the accompanying drawing.
In step S11, the measured electroencephalogram is digitized to obtain a discrete-time signal sequence {xi, i=1, 2, . . . , N}, wherein xi is the ith sampling point of the electroencephalogram and N is the sampling size.
In step S12, the average amplitude x̄ of the discrete-time signal sequence {xi, i=1, 2, . . . , N} is filtered out to obtain a sequence {yi, i=1, 2, . . . , N}, wherein yi is defined by the following formula:
yi = xi − x̄,
wherein x̄ = (x1 + x2 + . . . + xN)/N.
In step S13, the EMD (Empirical Mode Decomposition) is applied to the sequence {yi, i=1, 2, . . . , N} to obtain n intrinsic mode functions IMF and a remainder R, wherein n is a positive integer determined by the EMD.
In step S14, peak-peak intervals (the number of data points between each pair of neighboring local maxima) in each intrinsic mode function IMF are calculated.
In step S15, waveforms between peaks with peak-peak intervals within a first given range S are merged into a new waveform Pvalues(k), wherein 10^(m−1) ≤ s ≤ 10^m, m=1, 2, . . . , mmax, and mmax is determined by the length N of the sequence {yi, i=1, 2, . . . , N}, and k represents each data point of the merged waveform, wherein k=1, 2, . . . , kmax, and kmax is determined by the sum of all the peak-peak intervals within the first given range S.
In step S16, a root mean square of each merged waveform is calculated to obtain a wave function F,
wherein
F = √⟨(Pvalues(k))²⟩S,
and ⟨·⟩S represents calculating an average over the range S. With respect to different scale ranges S, F ∝ s^α, wherein ∝ represents a directly proportional or scaling relation between two quantities, and α is the scaling index.
In step S17, the scaling index α is obtained in accordance with F ∝ s^α.
Furthermore, the step S2 may comprise the following steps, which are not illustrated in the accompanying drawing.
In step S21, each music file in the music library is digitized to obtain a digital music signal sequence {Ui, i=1, 2, . . . , M}, wherein i is the ith time point of the digital music signal sequence, and M is the length of the digital music signal sequence.
In step S22, a sequence {vj, j=1, 2, . . . , M/(presetlength)} is obtained by dividing the digital music signal sequence {Ui, i=1, 2, . . . , M} into multiple sub-sequences with a preset length and calculating the standard deviation of each sub-sequence, wherein vj is the jth data of the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S23, an average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)} is obtained in accordance with the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S24, a fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)} which is a one-dimensional random walk sequence is obtained in accordance with the average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)}, wherein zb is the bth data of the sequence {zb, b=1, 2, . . . , M/(presetlength)} which is defined by the following formula.
wherein
zb = Σj=1..b [(vj)² − ⟨v²⟩],
and ⟨v²⟩ represents the average of the average intensity sequence {(vj)², j=1, 2, . . . , M/(presetlength)}.
In step S25, multiple sub-sequences are obtained by shifting a time window with a preset width along the fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)}, wherein every two neighboring windows overlap by a fixed length τ.
In step S26, a linear trend ẑb of each sub-sequence is obtained by means of linear regression, wherein ẑb = a + c·r, a and c are determined by the linear regression of each sub-sequence, and since the multiple sub-sequences correspond to multiple linear trends ẑb, the values of a and c may differ from one sub-sequence to another.
In step S27, a detrended fluctuation function FD(presetwidth) = √⟨(δz)²⟩ is obtained in accordance with the sequence {zb, b=1, 2, . . . , M/(presetlength)} and the linear trend of each sub-sequence, wherein δz = zb − ẑb, and ⟨(δz)²⟩ represents calculating the average of (δz)² in each time window.
In step S28, the long-range correlation index β is obtained in accordance with the detrended fluctuation function FD(presetwidth) by means of the following formula:
FD(presetwidth) ∝ (presetwidth)^β,
which represents the relation between the detrended fluctuation function and the time scale defined by the preset width of the time window; β is obtained as the slope of the corresponding log-log plot.
The step S3 may comprise the following steps, which are not illustrated in the accompanying drawing.
In step S31, γ is calculated in accordance with the scaling index α and the long-range correlation index β, wherein γ = |α−β|.
In step S32, if γ is within a second given range, the music file with a long-range correlation index β is matched with the electroencephalogram with a scaling index α.
The method in accordance with the present embodiment compares the scaling index α of an electroencephalogram with the long-range correlation index β of a music file, and matches the music file with the electroencephalogram if the scaling index and the long-range correlation index are close to equal, so as to find a music file matching a measured electroencephalogram automatically. The method in accordance with the present embodiment may find music files matching the human brain state automatically in real time by measuring an electroencephalogram, and thereby guide people to relieve stress and relax effectively.
A system for matching music files with an electroencephalogram is disclosed by the present embodiment, which comprises an electroencephalogram scaling device, a music analysis device and a matching device as illustrated in the accompanying drawing.
The electroencephalogram scaling device is configured to obtain a scaling index α in accordance with a measured electroencephalogram and to transmit the scaling index α to the matching device. The music analysis device is configured to analyze each music file in a preset music library to obtain a long-range correlation index β and to transmit β to the matching device. The matching device is configured to search out a music file matching with the electroencephalogram in accordance with the comparison of the scaling index α and the long-range correlation index β.
The system further comprises an electroencephalogram measuring device, not illustrated in the accompanying drawing, configured to measure an electroencephalogram and to transmit the electroencephalogram to the electroencephalogram scaling device.
In a preferable embodiment, the electroencephalogram scaling device is configured to implement the following steps.
In step S11, the measured electroencephalogram is digitized to obtain a discrete-time signal sequence {xi, i=1, 2, . . . , N}, wherein xi is the ith sampling point of the electroencephalogram and N is the sampling size.
In step S12, the average amplitude x̄ of the discrete-time signal sequence {xi, i=1, 2, . . . , N} is filtered out to obtain a sequence {yi, i=1, 2, . . . , N}, wherein yi is defined by the following formula:
yi = xi − x̄,
wherein x̄ = (x1 + x2 + . . . + xN)/N.
In step S13, the EMD (Empirical Mode Decomposition) is applied to the sequence {yi, i=1, 2, . . . , N} to obtain n intrinsic mode functions IMF and a remainder R, wherein n is a positive integer determined by the EMD.
In step S14, peak-peak intervals (the number of data points between each pair of neighboring local maxima) in each intrinsic mode function IMF are calculated.
In step S15, waveforms between peaks with peak-peak intervals within a first given range S are merged into a new waveform Pvalues(k), wherein 10^(m−1) ≤ s ≤ 10^m, m=1, 2, . . . , mmax, and mmax is determined by the length N of the sequence {yi, i=1, 2, . . . , N}, and k represents each data point of the merged waveform, wherein k=1, 2, . . . , kmax, and kmax is determined by the sum of all the peak-peak intervals within the first given range S.
In step S16, a root mean square of each merged waveform is calculated to obtain a wave function F,
wherein
F = √⟨(Pvalues(k))²⟩S,
and ⟨·⟩S represents calculating an average over the range S. With respect to different scale ranges S, F ∝ s^α, wherein ∝ represents a directly proportional or scaling relation between two quantities, and α is the scaling index.
In step S17, the scaling index α is obtained in accordance with F ∝ s^α.
More specifically, the music analysis device is configured to implement the following steps.
In step S21, each music file in the music library is digitized to obtain a digital music signal sequence {Ui, i=1, 2, . . . , M}, wherein i is the ith time point of the digital music signal sequence, and M is the length of the digital music signal sequence.
In step S22, a sequence {vj, j=1, 2, . . . , M/(presetlength)} is obtained by dividing the digital music signal sequence {Ui, i=1, 2, . . . , M} into multiple sub-sequences with a preset length and calculating the standard deviation of each sub-sequence, wherein vj is the jth data of the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S23, an average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)} is obtained in accordance with the sequence {vj, j=1, 2, . . . , M/(presetlength)}.
In step S24, a fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)} which is a one-dimensional random walk sequence is obtained in accordance with the average intensity sequence {(vj)2, j=1, 2, . . . , M/(presetlength)}, wherein zb is the bth data of the sequence {zb, b=1, 2, . . . , M/(presetlength)} which is defined by the following formula.
wherein
zb = Σj=1..b [(vj)² − ⟨v²⟩],
and ⟨v²⟩ represents the average of the average intensity sequence {(vj)², j=1, 2, . . . , M/(presetlength)}.
In step S25, multiple sub-sequences are obtained by shifting a time window with a preset width along the fluctuation sequence {zb, b=1, 2, . . . , M/(presetlength)}, wherein every two neighboring windows overlap by a fixed length τ.
In step S26, a linear trend ẑb of each sub-sequence is obtained by means of linear regression.
In step S27, a detrended fluctuation function FD(presetwidth) = √⟨(δz)²⟩ is obtained in accordance with the sequence {zb, b=1, 2, . . . , M/(presetlength)} and the linear trend of each sub-sequence, wherein δz = zb − ẑb, and ⟨(δz)²⟩ represents calculating the average of (δz)² in each time window.
In step S28, the long-range correlation index β is obtained in accordance with the detrended fluctuation function FD(presetwidth) by means of the following formula:
FD(presetwidth) ∝ (presetwidth)^β,
which represents the relation between the detrended fluctuation function and the time scale defined by the preset width of the time window; β is obtained as the slope of the corresponding log-log plot.
More specifically, the matching device is configured to implement the following steps.
In step S31, γ is calculated in accordance with the scaling index α and the long-range correlation index β, wherein γ = |α−β|.
In step S32, if γ is within a second given range, the music file with a long-range correlation index β is matched with the electroencephalogram with a scaling index α.
The system in accordance with the present embodiment may automatically find music files matching human brain states in real time by measuring an electroencephalogram, and thereby guide people to relieve stress and relax effectively.
While embodiments of this disclosure have been shown and described, it will be apparent to those skilled in the art that further modifications are possible without departing from the spirit herein. The disclosure, therefore, is not to be restricted except in the spirit of the following claims.
The method and system for matching music files with an electroencephalogram in accordance with the present invention may select corresponding music files in accordance with different electroencephalograms. In other words, the method and system in accordance with the present invention may automatically find music files matching human brain states in real time by measuring an electroencephalogram, and thereby guide people to relieve stress and relax effectively.
Number | Date | Country | Kind |
---|---|---|---|
2014 1 0360309 | Jul 2014 | CN | national |
This application is a continuation of International Application No. PCT/CN2014/086917, filed Sep. 19, 2014. The International Application claims priority to Chinese Patent Application No. 201410360309.1, filed on Jul. 25, 2014. The afore-mentioned patent applications are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
9320450 | Badower | Apr 2016 | B2 |
20080140716 | Saito et al. | Jun 2008 | A1 |
Number | Date | Country |
---|---|---|
101259015 | Jun 2008 | CN |
102446533 | May 2012 | CN |
103412646 | Nov 2013 | CN |
20080111972 | Dec 2008 | KR |
Entry |
---|
International Search Report dated Mar. 24, 2015 from PCT Application No. PCT/CN2014/086917. |
Number | Date | Country | |
---|---|---|---|
20160086612 A1 | Mar 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2014/086917 | Sep 2014 | US |
Child | 14957415 | US |