Method for generating at least one sequence of random numbers approximated to sequences of numbers of a 1/f noise

Information

  • Patent Application
  • Publication Number
    20030101031
  • Date Filed
    November 07, 2002
  • Date Published
    May 29, 2003
Abstract
A method for generating a sequence of random numbers approximated to sequences of numbers of a 1/f noise uses (0,1)-normally-distributed random numbers to generate an arbitrary-length sequence of random numbers that is a good approximation to a sequence of random numbers of a 1/f noise, while limiting the computation time needed to determine such sequences.
Description


BACKGROUND OF THE INVENTION


FIELD OF THE INVENTION

[0001] The invention relates to a method for generating at least one sequence of random numbers approximated to sequences of numbers of a 1/f noise.


[0002] Random numbers of a 1/f noise can be used, for example, in a transient circuit simulation that takes account of noise influences. A 1/f noise is understood to mean a stochastic process with a specific frequency spectrum that can be described by the equation:
S(f) ∝ 1/f^β,  β ∈ ]0, 1[.


[0003] 1/f noise sources are suitable for modeling noise influences in a multiplicity of technical and physical systems and also for systems for estimating and predicting events on the financial markets. In particular, many electronic components, such as pn diodes and MOS field-effect transistors, for example, have 1/f noise sources.


[0004] German Patent DE 100 01 124 C1 discloses a circuit configuration and also a method for reducing the 1/f noise of MOSFETs in an electronic circuit, in particular, in an integrated circuit that has at least one MOSFET. In such a case, one, a plurality or all of the MOSFETs are assigned at least one DC voltage source for setting one or a plurality of constant operating points of the MOSFET or MOSFETs.


[0005] U.S. Pat. No. 5,719,784 to Clark et al. describes a device and a method for the analysis of data that are obtained from tissue samples and are present in frequency spectra. In such a case, the non-random nature of cell and tissue microstructures purportedly corresponds to a 1/f noise. This reference explicitly states that there are no known mathematical models for generating 1/f noise.


[0006] U.S. Pat. No. 6,188,294 to Ryan et al. discloses a method in which white noise is coupled differentially into an amplification stage. In such a case, the white noise is amplified differentially by an amplifier that generates a first signal with a white noise. The 1/f components of the noise and also a displacement voltage present are substantially removed from this first signal, thereby producing a second signal with a white noise. A random sequence signal is, then, generated by a decision being taken as to whether the generated second signal in each case assumes the values of zero or one.


[0007] U.S. Pat. No. 6,081,228 to Leimer discloses a method for estimating the phase noise component in a GPS receiver, which can, then, be removed before the tracking of each satellite. This reference also discloses a method for reducing the phase noise in a multichannel radio receiver, in which the multiplicity of receiver channels derives a clock signal from a source circuit.



SUMMARY OF THE INVENTION

[0008] It is accordingly an object of the invention to provide a method for generating an arbitrary-length sequence of random numbers approximated to sequences of random numbers of a 1/f noise that overcomes the hereinafore-mentioned disadvantages of the heretofore-known devices and methods of this general type, that is carried out rapidly, and whose computation complexity does not grow beyond measure for long sequences of numbers. Also provided is an improved method for the simulation of a technical system that is subject to a 1/f noise. Finally provided is a computer system with a computer program for determining sequences of random numbers approximated to sequences of numbers of a 1/f noise that can be executed rapidly and takes up only a few resources of the computer system. The technical system can be an electronic component, such as a pn diode, a MOS field-effect transistor, or a phase-locked loop.


[0009] According to the invention, the problem of noise simulation in the modeling of the system to be simulated is converted into the problem of generating a sequence of random numbers. In such a case, the correlations of these random numbers are determined and used for simple and accurate generation of the sequences of random numbers.


[0010] The invention's method for generating at least one sequence of random numbers of a 1/f noise firstly provides the following steps:


[0011] determination of a desired spectral value β; and


[0012] determination of an intensity constant const.


[0013] The characteristics of the 1/f noise to be simulated are, thus, defined.


[0014] Afterward, the number of random numbers to be generated (which are approximated to random numbers of a 1/f noise), a start value for a running variable n used for the simulation, and a value for a window size d are defined.


[0015] The invention provides the loop-like repetition of steps until the desired number of elements y(n) of one or more vectors y of length n has been calculated from 1/f-distributed random numbers. The first two of these steps to be repeated in a loop-like manner are the step of incrementing the present value of the running variable n by the value 1 and the step of defining a simulation time step [tn−1; tn].


[0016] The remaining method steps to be repeated in a loop-like manner are dependent on the value of the running variable n. Before each loop-like iteration of the method steps, a check is made to determine whether the value of the running variable n is less than the defined value of the window size d. Different method steps are, then, provided for the cases (n<d) and (n≧d).


[0017] To simplify the notation, the following indexing of the submatrices and subvectors is performed hereinafter: the indexing relates to the associated points in time and, thus, does not begin with 1 for submatrices and subvectors in the case (n>d). By way of example, a d-dimensional vector is indexed by (n−d+1), (n−d+2), . . . , n. The n-th element is the element indexed by n, that is to say, the last component of the d-dimensional vector. The same applies analogously to submatrices.


[0018] For the case (n<d), the following steps are repeated in a loop-like manner:


[0019] determination of the elements Cij of a covariance matrix C(n) of dimension (n×n) according to the following specification:






Cij := const·(−|tj−ti|^(β+1) + |tj−1−ti|^(β+1) + |tj−ti−1|^(β+1) − |tj−1−ti−1|^(β+1)),



[0020] i,j=1, . . . , n


[0021] determination of the inverted covariance matrix C−1(n)=B(n) of dimension (n×n);


[0022] determination of a variable σ in accordance with the specification σ=sqrt(1/e(n,n)), where sqrt designates the “square root” function and where e(n,n) designates that element of the inverted covariance matrix B(n) that is indexed by (n,n);


[0023] determination of a (0,1)-normally-distributed random number that forms the n-th component of a vector x of length n;


[0024] formation of a variable μ from the first (n−1) components of the n-th row of the inverted covariance matrix B(n) and from the (n−1) elements of the vector y that were calculated for a preceding (n−1)-th simulation time step, to be precise in accordance with the following specification:
μ := −y(n−1)T·B•,n / Bn,n


[0025] where y(n−1) designates the first (n−1) components of the vector y, where B•,n designates the first (n−1) components of the n-th row of the inverted covariance matrix B(n), and where Bn,n designates that element of the inverted covariance matrix B(n) that is indexed by (n,n); and


[0026] calculation of an element y(n) of a vector y of length n from 1/f-distributed random numbers according to the following specification:




y(n) = x(n)·σ + μ.



[0027] For the case (n≧d), the following steps are repeated in a loop-like manner:


[0028] determination of the elements Cij of a covariance matrix C(n) of dimension (d×d) according to the following specification:






Cij := const·(−|tj−ti|^(β+1) + |tj−1−ti|^(β+1) + |tj−ti−1|^(β+1) − |tj−1−ti−1|^(β+1)),



[0029] i,j=(n−d+1), . . . , n


[0030] determination of the inverted covariance matrix C−1(n)=B(n) of dimension (d×d),


[0031] determination of a variable σ in accordance with the specification:


σ=sqrt(1/e(n,n)),


[0032] where sqrt designates the “square root” function and where e(n,n) designates that element of the inverted covariance matrix B(n) that is indexed by (n,n);


[0033] determination of a (0,1)-normally-distributed random number that forms the n-th component of a vector x of length n;


[0034] formation of a variable μ from the first (d−1) components of the n-th row of the inverted covariance matrix B(n) and from the last (d−1) elements of the vector y that were calculated for a preceding (n−1) simulation time step, to be precise in accordance with the following specification:
μ := −y(n−1)T·B•,n / Bn,n


[0035] where y(n−1) designates the last (d−1) components of the vector y, where B•,n designates the first (d−1) components of the n-th row of the inverted covariance matrix B(n), and where Bn,n designates that element of the inverted covariance matrix B(n) that is indexed by (n,n); and


[0036] calculation of an element y(n) of a vector y of length n from 1/f-distributed random numbers according to the following specification:




y(n) = x(n)·σ + μ.



[0037] A basic concept of the present invention resides in providing a method that makes it possible to generate sequences of random numbers—based on 1/f-distributed random numbers—progressively, that is to say, element by element. In such a case, the method ensures that each newly generated random number depends correctly in the stochastic sense on the previously generated random numbers based on 1/f-distributed random numbers. As a result, it is possible to generate the respectively required random numbers in the course of the numerical simulation of a circuit.
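The progressive scheme can be illustrated with a short Python sketch (NumPy assumed; all function and variable names are illustrative and not taken from the patent). For clarity the sketch inverts each covariance window explicitly, whereas the refinement described below replaces this with an incremental Schur-complement update:

```python
import numpy as np

def generate_1f_sequence(t, beta=0.5, const=1.0, d=5, rng=None):
    """Progressively draw one random number per simulation time step [t_{n-1}, t_n].

    t     : time points t_0 < t_1 < ... < t_N (arbitrary, possibly non-uniform grid)
    beta  : spectral value, const : intensity constant, d : window size
    """
    rng = np.random.default_rng() if rng is None else rng
    H = beta + 1.0

    def cov(i, j):
        # element Cij of the increment covariance matrix, formula (3.6)
        return const * (-abs(t[j] - t[i])**H + abs(t[j-1] - t[i])**H
                        + abs(t[j] - t[i-1])**H - abs(t[j-1] - t[i-1])**H)

    N = len(t) - 1
    y = np.zeros(N + 1)                  # y[0] is unused; y[n] belongs to step n
    for n in range(1, N + 1):
        lo = 1 if n < d else n - d + 1   # window of at most d time steps
        idx = list(range(lo, n + 1))
        C = np.array([[cov(i, j) for j in idx] for i in idx])
        B = np.linalg.inv(C)             # explicit inverse; see the incremental
                                         # Schur-complement update further below
        sigma = np.sqrt(1.0 / B[-1, -1])
        mu = -(y[lo:n] @ B[:-1, -1]) / B[-1, -1]
        y[n] = rng.standard_normal() * sigma + mu
    return y[1:]

# example: six 1/f-like increments on the non-uniform grid used later in the text
y = generate_1f_sequence(np.array([0.0, 1.0, 1.5, 2.0, 2.75, 3.0, 4.0]))
```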


[0038] A further basic concept of the invention lies in limiting the dimension of the covariance matrix C(n) required for generating the random numbers and the inverse B(n) of the matrix to the value of a predeterminable window size d and, thus, keeping the computation time of the method according to the invention within limits. Such a concept is based on the insight that the complexity for setting up and inverting the covariance matrix C(n) grows at least quadratically with the number of time steps to be taken into account. In the absence of limitation of the dimension of the covariance matrix C(n) and the inverse B(n) thereof, the computation complexity for long simulation intervals, as are necessary, for example, for the simulation of phase locked loops, grows to a disproportionate size so that a transient noise simulation over the desired period of time becomes impossible. Restricting the growth of the covariance matrix C(n) and the inverse B(n) thereof advantageously restricts the computation time required for the simulation of 1/f noise sources and also the amount of memory required.


[0039] The method according to the invention accordingly ensures an arbitrary-length simulation of technical systems. Proceeding from 1/f-distributed random numbers that have already been generated, additional 1/f-distributed random numbers can be generated in a simple manner. In such a case, it is possible to place a simulation on the results of previously simulated time intervals. This so-called restart capability represents a very important property for simulation practice. It is precisely for 1/f noise sources that this can only be achieved with difficulty because random numbers that simulate a 1/f noise source for a certain time interval depend on random numbers of earlier time intervals that have already been determined numerically.


[0040] The present invention also allows the use of an adaptive step size control without, thereby, significantly increasing the computation times for the simulation of a technical system. Such an adaptive step size control considerably increases the precision and the computation time efficiency in the numerical determination of the dynamic range of a simulated technical system. The provision of variable step sizes also enables an adaptation to present system dynamic ranges, which increases the accuracy of the simulations.


[0041] The invention uses the theory of conditional probability densities to generate a 1/f-distributed random number that ensures the stochastic relationship between this random number and the random numbers that have already been generated and are required for preceding simulation steps.


[0042] In another refinement of the method according to the invention, a vector C12T(n) and a matrix C22(n) are determined from the covariance matrix C(n) according to:
C(n) = ( c11(n)    c12T(n) )
       ( c12(n)    C22(n)  )


[0043] An inverted covariance matrix C−1(n)=B(n) of dimension (d×d) is determined utilizing an inverted matrix C22−1(n−1) by Schur complement techniques. A vector b12(n) and a matrix B22(n) are determined from the inverted covariance matrix B(n) according to:
B(n) = ( b11(n)    b12T(n) )
       ( b12(n)    B22(n)  )


[0044] An inverted matrix C22−1(n) is determined according to:




C22−1(n) = (Id−1 − b12(n)c12T(n))−1·B22(n),



[0045] where Id−1 is a unit matrix of dimension ((d−1)×(d−1)).


[0046] In a particularly advantageous refinement of the method according to the invention, the inverted covariance matrix B(n) is not completely recalculated for every loop-like repetition of the relevant method steps; rather, the results that have already been determined for the covariance matrix C(n−1) and the inverted covariance matrix B(n−1) are reused.


[0047] In such a case, instead of the step of determining the inverted covariance matrix C−1(n)=B(n) of dimension (d×d), the following steps are provided:


[0048] determination of a vector c12T(n−1) and a matrix C22(n−1), from the covariance matrix C(n−1):
C(n−1) = ( c11(n−1)    c12T(n−1) )
         ( c12(n−1)    C22(n−1)  )


[0049] In such a case, c11(n−1) is that element of C(n−1) that is indexed by (n−d+1, n−d+1), that is to say, the element at the top left, c12T(n−1) is the vector including the last (d−1) elements of that row of C(n−1) that is indexed by (n−d+1), that is to say, the topmost row, and C22(n−1) is the bottom right submatrix of dimension (d−1)×(d−1) of C(n−1).


[0050] Determination of the inverted covariance matrix C−1(n−1)=B(n−1) of dimension (d×d). In such a case, B(n−1) is already known from the preceding step.


[0051] Determination of the vector b12(n−1) and the matrix B22(n−1) from the inverted covariance matrix B(n−1):
B(n−1) = ( b11(n−1)    b12T(n−1) )
         ( b12(n−1)    B22(n−1)  )


[0052] In such a case, b11(n−1) is that element of B(n−1) that is indexed by (n−d+1, n−d+1), that is to say, the element at the top left, b12T(n−1) is the vector including the last (d−1) elements of that row of B(n−1) that is indexed by (n−d+1), that is to say, the topmost row, and B22(n−1) is the bottom right submatrix of dimension (d−1)×(d−1) of B(n−1).


[0053] Determination of the inverted matrix C22−1(n−1), according to the following specification:




C22−1(n−1) = (Id−1 − b12(n−1)c12T(n−1))−1·B22(n−1)



[0054] where Id−1 represents the unit matrix of dimension ((d−1)×(d−1));


[0055] determination of the inverted covariance matrix B(n) of dimension (d×d) using the inverted matrix C22−1(n−1) by Schur complement techniques.


[0056] Such an embodiment of the invention avoids an explicit recalculation—which is highly computation-time-intensive with a rising dimension of the covariance matrix C(n)—of the inverted covariance matrix B(n) for each value of the running variable n. Instead of this, the inverted covariance matrix B(n) is determined using Schur complement techniques known to the person skilled in the art, recourse being had to the variables C22(n−1), c12T(n−1), C22−1(n−1), B22(n−1), b12T(n−1) that have already been determined.


[0057] With the aid of the Sherman-Morrison-Woodbury formula, the inverted submatrix C22−1(n−1) can be calculated without explicit inversion. The Sherman-Morrison-Woodbury formula is disclosed in G. H. Golub and C. F. van Loan, Matrix Computations, Johns Hopkins University Press, page 51, 2nd edition, 1989.
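A minimal Python sketch of this windowed update (NumPy assumed; names are illustrative, not taken from the patent): the first function realizes the Sherman-Morrison-Woodbury step of equation (3.16) below, the second the Schur-complement extension to the next window:

```python
import numpy as np

def shifted_block_inverse(C, B):
    """Formula (3.16): recover C22^{-1}(n) from C(n) and B(n) = C^{-1}(n) without
    a fresh inversion, via the Sherman-Morrison-Woodbury identity."""
    c12 = C[0, 1:]            # last (d-1) elements of the topmost row of C(n)
    b12 = B[1:, 0]            # last (d-1) elements of the first column of B(n)
    B22 = B[1:, 1:]
    factor = np.eye(len(c12)) + np.outer(b12, c12) / (1.0 - c12 @ b12)
    return factor @ B22       # inverse of the bottom right (d-1)x(d-1) block of C(n)

def extend_inverse(C22_inv, u, c):
    """Schur complement step: build the inverse of the shifted window whose top left
    block is C22(n) (inverse already known), new column u and new diagonal entry c."""
    v = C22_inv @ u
    s = c - u @ v                           # scalar Schur complement
    d = len(u) + 1
    B = np.empty((d, d))
    B[:-1, :-1] = C22_inv + np.outer(v, v) / s
    B[:-1, -1] = B[-1, :-1] = -v / s
    B[-1, -1] = 1.0 / s
    return B
```

In a simulation loop, the result of shifted_block_inverse for the current window (C(n), B(n)) would be handed to extend_inverse together with the freshly computed last column of the shifted covariance window.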


[0058] Simulations that require a high number of time steps are actually made possible in the first place by this embodiment of the invention. The implementation of the invention's concept of limiting the computation time of the method according to the invention resides in the fact that if the covariance matrix C(n) reaches and exceeds a freely selectable window size d, a window having a size (d×d) is placed over the covariance matrix C(n), which window selects a bottom right submatrix of dimension (d×d) from the covariance matrix.


[0059] With each further time step, this window (d×d) is shifted by one position toward the bottom right within the covariance matrix C(n) so that, after reaching the freely selectable window size d, only the window (d×d) of the covariance matrix C(n) is considered and only this part of the covariance matrix C(n) is inverted. The computation complexity is, thus, kept virtually constant for rising sizes of the covariance matrix C(n). The complexity for shifting the covariance matrix window with the size (d×d) can also be kept low by virtue of the updating of the inverted covariance matrix B(n) by the above formula instead of the explicit inversion.


[0060] In a particularly advantageous refinement of the method according to the invention, q sequences of random numbers of a 1/f noise are calculated simultaneously, in which case, instead of the steps that are to be repeated in a loop-like manner:


[0061] determination of a (0,1)-normally-distributed random number that forms the n-th component of a vector x of length n,


[0062] formation of a variable μ from the first (d−1) components of the n-th row of the inverted covariance matrix B(n) and from the last (d−1) elements of the vector y that were calculated for a preceding (n−1) simulation time step, to be precise in accordance with the following specification:
μ := −y(n−1)T·B•,n / Bn,n


[0063] where y(n−1) designates the last (d−1) components of the vector y, B•,n designates the first (d−1) components of the n-th row of the inverted covariance matrix B, and where Bn,n designates that element of the inverted covariance matrix B that is indexed by (n,n);


[0064] calculation of an element y(n) of a vector y of length n from 1/f-distributed random numbers according to the following specification:




y(n) = x(n)·σ + μ



[0065] the following steps are provided:


[0066] determination of q (0,1)-normally-distributed random numbers xk,n that form the respective last component of the vectors xk of length n, where k=1, . . . , q. In this case, it should be taken into account that the respective first (n−1) components of the vectors xk had already been calculated in the step before;


[0067] Formation of q variables μk in accordance with the following specification:
μk := −y(n−1),kT·B•,n / Bn,n


[0068] where y(n−1),k designates the last (d−1) components of the vector yk that were calculated for a preceding simulation time step, B•,n designates the first (d−1) components of the n-th row of the inverted covariance matrix B, and Bn,n designates that element of the inverted covariance matrix B that is indexed by (n,n). This is carried out for k=1, . . . , q;


[0069] calculation of q elements yk,n that form the respective n-th component of the vector yk of length n from 1/f-distributed random numbers; to be precise, according to the following specification:






yk,n = xk,n·σ + μk



[0070] where k=1, . . . , q.


[0071] The q vectors yk (k=1, . . . , q) of length n of 1/f-distributed random numbers are particularly advantageously disposed in a matrix NOISE, which, in a simulation, specifies the 1/f noise influences of the system to be simulated.
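Because all q noise sources share the same covariance window and, thus, the same σ and B, one loop iteration can be written as a single matrix-vector product; a sketch in Python with NumPy (illustrative names, not from the patent):

```python
import numpy as np

def step_all_sources(B, Y_window, rng):
    """One loop iteration for q noise sources sharing the inverted window B.

    B        : current inverted covariance window (at most d x d)
    Y_window : previously generated rows of NOISE inside the window, shape (w-1, q)
    Returns the new row of the NOISE matrix, one value per noise source.
    """
    sigma = np.sqrt(1.0 / B[-1, -1])
    mu = -(Y_window.T @ B[:-1, -1]) / B[-1, -1]   # one mu_k per column k of NOISE
    x = rng.standard_normal(mu.shape)             # q independent N(0,1) draws
    return x * sigma + mu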


[0072] According to the invention, the concept for the simulation of 1/f noise is based on the following train of thought. The dynamic range of a system exposed to stochastic influences is adequately modeled by a stochastic process. For the simulation of such a system dynamic range, generally individual random realizations or paths of the underlying stochastic process are calculated numerically. For the simulation of systems with 1/f noise sources, it is a matter of numerically calculating paths of stochastic integrals of the form
∫_0^t Y(s)·η1/f(s) ds.


[0073] In such a case, s designates the integration variable, t, as an upper integration limit, designates the time,
η1/f(s) ds


[0074] designates a 1/f noise source, and Y(s) designates a stochastic process that describes the temporal dynamic range of a variable, e.g., the electrical voltage in the circuit simulation.


[0075] If BFBM(s) is used to designate that stochastic process whose derivative (mathematically: derivative in the distribution sense) produces the 1/f noise process
η1/f(s),


[0076] then the stochastic integral to be calculated can be written as
∫_0^t Y(s)·η1/f(s) ds = ∫_0^t Y(s) dBFBM(s).  (1.1)


[0077] The integral of the right-hand side is to be interpreted as a Riemann-Stieltjes integral of the stochastic process Y(s) with the process BFBM(s) as integrator. This integral can be approximated by a sum by breaking down the integration interval [0, t] in accordance with 0 ≡ t0 < t1 < . . . < tn ≡ t into n disjoint subintervals [ti−1, ti], i=1, . . . , n:
∫_0^t Y(s) dBFBM(s) ≈ Σi=1..n Y(ti−1)·[BFBM(ti) − BFBM(ti−1)]  (1.2)


[0078] This sum is a random variable. The dependence on the result ω of the random experiment has been consistently omitted.
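Once the increments have been generated, evaluating the sum (1.2) is a plain scalar product; a minimal sketch, assuming NumPy and illustrative names:

```python
import numpy as np

def noisy_integral(Y_grid, dB):
    """Riemann-Stieltjes sum (1.2): sum_i Y(t_{i-1}) * [B_FBM(t_i) - B_FBM(t_{i-1})].

    Y_grid : integrand samples Y(t_0), ..., Y(t_{n-1})
    dB     : the n generated increments of Fractional Brownian Motion
    """
    return float(np.dot(Y_grid, dB))
```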


[0079] A process BFBM(s) whose generalized derivative has a 1/f spectrum is disclosed in R. Barton and H. V. Poor, Signal Detection in Fractional Gaussian Noise, IEEE Transactions on Information Theory, pages 943-959, 1988, for example, under the name “Fractional Brownian Motion”. BFBM(s) is a Gaussian stochastic process and, as such, is completely characterized by its expected value




E(BFBM(s)) = 0  ∀ s ∈ R  (1.3)



[0080] and by its covariance function




Cov(BFBM(s), BFBM(t)) = const·(|s|^(β+1) + |t|^(β+1) − |t−s|^(β+1))  (1.4)



[0081] The method according to the invention for the demand-oriented generation of suitable random numbers reduces the simulation of 1/f noise influences substantially to the generation of realizations of the random variable [BFBM(ti)−BFBM(ti−1)], that is to say, of increments of Fractional Brownian Motion.


[0082] The present invention allows the required realizations of the random variable ΔBFBM(i) to be generated online, i.e., in the course of the progressive integration of the system equations. This results in two requirements imposed on the method:


[0083] (a) the length n of the sequence of random numbers {ΔBFBM(1), . . . , ΔBFBM(n)} must remain variable during a simulation run. In particular, it must be possible, at any time, to lengthen the simulation. Such a characteristic is designated as restart capability and implies the capability of the method to generate the additional random numbers required for this purpose such that they correlate correctly with the subsequence that has already been generated; and


[0084] (b) Let ti be the time presently reached in the course of a simulation. The time interval [ti, ti+1], that is to say, the step size of the next integration step, must, then, be able to be determined from the instantaneous system dynamic range, that is to say, adaptively.


[0085] The invention meets both requirements by specifying how a realization of {ΔBFBM(1), . . . , ΔBFBM(n)}, that is to say, a sequence of random numbers, can be generated progressively, i.e., element by element. In such a case, the step size Δti:=ti−ti−1 is freely selectable for each new random number.


[0086] Firstly, the approach is investigated for so-called “conditional densities.”


[0087] The distribution of the random variable vector (ΔBFBM(1), . . . , ΔBFBM(n)) is first considered.


[0088] Because the individual random variables ΔBFBM(i) represent increments of a Gaussian stochastic process, the random variable vector (ΔBFBM(1), . . . , ΔBFBM(n)) is an n-dimensional random variable having Gaussian distribution and is, thus, completely determined by its (n-dimensional) expected value E and its covariance matrix C. The two variables can be calculated from the formulae (1.3) and (1.4) as




E(ΔBFBM(i)) = 0,  (3.5)



[0089] i=1, . . . , n






Cij := Cov(ΔBFBM(i), ΔBFBM(j)) = const·(−|tj−ti|^(β+1) + |tj−1−ti|^(β+1) + |tj−ti−1|^(β+1) − |tj−1−ti−1|^(β+1)),  (3.6)



[0090] i,j=1, . . . , n.
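The specification (3.6) is not expanded in the text; it follows from (1.4) by the bilinearity of the covariance, written out for the increments:

Cov(ΔBFBM(i), ΔBFBM(j)) = Cov(BFBM(ti), BFBM(tj)) − Cov(BFBM(ti), BFBM(tj−1)) − Cov(BFBM(ti−1), BFBM(tj)) + Cov(BFBM(ti−1), BFBM(tj−1)).

Inserting (1.4), the single-argument terms |ti|^(β+1), |ti−1|^(β+1), |tj|^(β+1), and |tj−1|^(β+1) cancel pairwise, and exactly the four difference terms of formula (3.6) remain.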


[0091] The online method according to the invention will now be specified in the form of a complete induction.


[0092] The induction start and, thus, the starting point of the method is the realization of a real-valued Gaussian distribution with expected value 0 and variance
2·const·|Δt1|^(β+1).


[0093] In the sense of the induction step, we must specify how we extend a realization of (ΔBFBM(1), . . . , ΔBFBM(n−1)) by a realization of ΔBFBM(n) to produce overall a realization of (ΔBFBM(1), . . . , ΔBFBM(n)). To simplify the notation, the subsequence of random numbers that has already been drawn shall be designated by (y1, . . . ,yn−1)=:y(n−1)T and the realization of ΔBFBM(n) that is yet to be drawn shall be designated by yn.


[0094] The problem can now be formulated as follows: let there be an n-dimensional mean-free Gaussian random variable Z with the covariance matrix C. Let the first n−1 elements of a realization of Z be already drawn and known in the form of a random number vector y(n−1). What is now sought is the distribution from which the n-th element yn must be drawn in order to complete y(n−1) to form a realization y=(y(n−1),yn) of Z.


[0095] A solution to such a task can be found if we consider the conditional probability density f(yn|y(n−1)) for yn under the condition that y(n−1) is already fixed. This variable can be calculated in the present case of a Gaussian normal distribution as
f(yn | y(n−1)) = (1 / sqrt(2π·(1/Cn,n−1))) · exp{−(Cn,n−1/2)·(yn − μ)^2}.  (3.7)


[0096] In this case, the variable Cn,n−1 results from the following notation of the inverted covariance matrix C−1:
C−1 = ( C(n−1)−1     C•,n−1 )
      ( (C•,n−1)T    Cn,n−1 )   (3.8)


[0097] where C(n−1)−1 ∈ R^((n−1)×(n−1)), where C•,n−1 ∈ R^(n−1), and where Cn,n−1 ∈ R.


[0098] The variable μ stands for
μ := −y(n−1)T·C•,n−1 / Cn,n−1  (3.9)


[0099] The conditional density f(yn|y(n−1)) is thus the probability density of a Gaussian normal distribution with mean μ and variance
1/Cn,n−1.


[0100] So that the above variance exists, Cn,n−1≠0 must hold true. This is ensured on account of the following argumentation:


[0101] C and C−1 have the same eigenvectors, and their eigenvalues are reciprocal to one another. An eigenvalue 0 of the matrix C−1 would, thus, result in an infinite variance of the random variable vector (ΔBFBM(1), . . . , ΔBFBM(n)). It can, therefore, be assumed that all eigenvalues of C−1 are not equal to zero. Because the eigenvalues of C−1 are always non-negative, the following, thus, holds true: the matrix C−1 is symmetrical and positive definite.


[0102] By renaming the coordinate axes, this matrix can be brought from the form (3.8) to the following form:
C̃−1 := ( Cn,n−1      (C•,n−1)T )
        ( C•,n−1      C(n−1)−1  )   (3.10)


[0103] This matrix is, likewise, symmetrical and positive definite in terms of its construction. In accordance with the Sylvester criterion for symmetrical and positive definite matrices, it follows from this that (C̃−1)11 = Cn,n−1 > 0, and the assertion is shown.


[0104] Through the method according to the invention, the simulation of 1/f noise sources is reduced to the generation of random numbers with Gaussian distribution.


[0105] To generate a random number yn that correlates with an already generated sequence y(n−1) in the manner demanded, the inverted covariance matrix C−1 (an n×n matrix) is required. Strictly speaking, only knowledge of the n-th row of this matrix is necessary, that is to say, knowledge of ((C*,n−1)T, Cn,n−1). As can be read from formula (3.6), the covariance matrix C depends on the breaking up of the simulation interval [0, tn] into disjoint subintervals (step sizes) [ti−1, ti]. In particular, the last column of C (identical to the last row due to the symmetry of C) depends on tn and, thus, on the present step size Δtn=tn−tn−1.
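Because only the n-th row (and the (n,n) element) of C−1 enters μ and the variance, the actual draw is a two-line operation; a sketch in Python with NumPy (illustrative names, not from the patent):

```python
import numpy as np

def draw_conditional(y_prev, C_inv, rng):
    """Draw y_n from the conditional density (3.7), given the already generated
    y_1..y_{n-1} and the inverted covariance matrix C_inv of the current window."""
    mu = -(y_prev @ C_inv[:-1, -1]) / C_inv[-1, -1]   # formula (3.9)
    sigma = np.sqrt(1.0 / C_inv[-1, -1])              # conditional standard deviation
    return rng.normal(mu, sigma)
```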


[0106] The top left (n−1)×(n−1) submatrix {tilde over (C)} of the n×n covariance matrix C is precisely the covariance matrix for a random number sequence of length n−1. Such a covariance matrix already had to be determined and inverted for the calculation of y(n−1) (or for the calculation of the last element y(n−1)). To accelerate the method, it is, thus, possible to have recourse to incremental methods for matrix inversion, e.g., by the Schur complement.


[0107] To determine the inverted covariance matrix C−1(n+1)=B(n+1) of the next time step (n+1), it is necessary to consider the covariance matrix C(n) and the inverted covariance matrix C−1(n)=B(n) thereof as follows:
C(n) = ( c11(n)    c12T(n) )        (3.11)
       ( c12(n)    C22(n)  )

B(n) = ( b11(n)    b12T(n) )        (3.12)
       ( b12(n)    B22(n)  )


[0108] In this case, the following holds true:




B(n)·C(n) = Id  (3.13)



[0109] where Id designates the d-dimensional unit matrix. From the relationship B(n)·C(n)=Id, the following results with the above representations of the covariance matrix C(n) and the inverted covariance matrix B(n):




b12(n)c12T(n) + B22(n)C22(n) = Id−1  (3.14)



and, thus:




C22−1(n) = (Id−1 − b12(n)c12T(n))−1·B22(n)  (3.15)



[0110] With the aid of the Sherman-Morrison-Woodbury formula disclosed in G. H. Golub and C. F. van Loan, Matrix Computations, Johns Hopkins University Press, page 51, 2nd edition, 1989, the inverted submatrix C22−1(n) of the submatrix C22(n) can be calculated without explicit inversion of (Id−1−b12(n)c12T(n)) as:
C22−1(n) = (Id−1 + (1/(1 − c12T(n)b12(n)))·b12(n)c12T(n))·B22(n)  (3.16)


[0111] The corrected inverse C22−1(n) of the covariance matrix C22(n) is, thus, available, which can be extended by Schur complement techniques, for example, in the next time step (n+1).


[0112] The invention is also realized in a method for the simulation of a technical system that is subject to a 1/f noise. In such a case, random numbers that have been determined by a method according to the invention are used in the modeling and/or in the definition of the variables present on input channels of the system.


[0113] The invention is, furthermore, realized in a computer program for executing a method for generating at least one sequence of random numbers approximated to random numbers of a 1/f noise.


[0114] In such a case, the computer program is configured such that, after the inputting of a desired spectral value β, after the inputting of the number of random numbers to be generated, after the inputting of an intensity constant const, after the inputting of a start value for a running variable n and after the inputting of a value for a window size d, a method according to the invention can be carried out in an embodiment described above. In such a case, a sequence of random numbers that are each approximated to a 1/f noise is output as a result of the method. Such a sequence of random numbers can be used to model noise influences in a multiplicity of technical and physical systems.


[0115] The computer program improved according to the invention results in improved applicability of the method to a multiplicity of technical and physical systems, in simple and effective generation of sequences of random numbers and run time improvement, and also in optimization of computing power.


[0116] The computer program according to the invention, furthermore, results in a broad application spectrum of the method for a multiplicity of technical and physical systems and simple and effective generation of random numbers approximated to random numbers of a 1/f noise, a fast run time, and an optimal use of computing power being ensured.


[0117] The invention additionally relates to a computer program that is contained on a storage medium, which is stored in a computer memory, which is contained in a random access memory or which is transmitted on an electrical carrier signal.


[0118] Furthermore, the invention relates to a data carrier with such a computer program and to a method in which such a computer program is downloaded from an electrical data network, such as from the Internet, for example, onto a computer connected to the data network.


[0119] Other features that are considered as characteristic for the invention are set forth in the appended claims.


[0120] Although the invention is illustrated and described herein as embodied in a method for generating at least one sequence of random numbers approximated to sequences of numbers of a 1/f noise, it is, nevertheless, not intended to be limited to the details shown because various modifications and structural changes may be made therein without departing from the spirit of the invention and within the scope and range of equivalents of the claims.


[0121] The construction and method of operation of the invention, however, together with additional objects and advantages thereof, will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.







BRIEF DESCRIPTION OF THE DRAWINGS

[0122]
FIG. 1 is a block circuit diagram of a technical system to be simulated;


[0123]
FIG. 2 is a structogram illustrating a determination of sequences of random numbers of a 1/f noise according to the invention;


[0124]
FIGS. 3A to 3F are numerical matrices illustrating an example calculation for a simulation time step [t4, t5]=[2.75, 3.00] according to the invention; and


[0125]
FIGS. 4A to 4C are numerical matrices illustrating an example calculation for a sixth simulation time step [t5, t6]=[3.000, 4.000] according to the invention.







DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0126] Referring now to the figures of the drawings in detail and first, particularly to FIG. 1 thereof, there is shown a schematic illustration of a noisy system that is to be simulated.


[0127] The system is described by a system model 1, which is indicated as a box and describes the system behavior. The system behavior results from the input channels 2, which are also designated as vector INPUT and from the output channels 3 that are also designated as OUTPUT. Furthermore, a system-dictated noise is provided, which is present on noise input channels 4 and which is also designated as vector or as matrix NOISE. A matrix NOISE is present when the noise is taken into account with a plurality of channels, each column of the matrix NOISE containing a vector of noise values that are present on a noise input channel.


[0128] The noise on the noise input channels 4 is, preferably, interpreted as a noise-dictated alteration of the system model 1.


[0129] The behavior of the input channels 2 and of the output channels 3 can be described by a system of differential equations or by a system of differential-algebraic equations to make possible reliable predictions of the system behavior.


[0130] For each time step of the simulation of the system shown in FIG. 1, a vector OUTPUT of the output channels 3 is calculated for a vector INPUT present on the input channels 2 and for a vector NOISE present on the noise input channels 4.


[0131] For the simulation over a relatively long period of time, the vectors INPUT, OUTPUT, NOISE are expediently specified as a matrix, a respective column k of the relevant matrix containing the values of the corresponding time series of the relevant INPUT, OUTPUT, NOISE.


[0132]
FIG. 2 illustrates how a respective vector yk is attained that forms a column k of the matrix NOISE for the noise input channels 4 of the system model 1. Each vector yk serves for the simulation of a noise source.


[0133] In a first step a desired spectral value β, an intensity constant const, and a window size d are defined. Furthermore, the counter n of the present simulation time interval is set to a start value, which is assumed to be (n≧d) in the exemplary embodiment considered.


[0134] The following sequence of computation steps is, then, carried out progressively for each simulation time step.


[0135] Firstly, the present simulation time step is defined. Equivalently thereto, it is also possible to define the end of the present simulation time step, thereby producing the next point in time under consideration.


[0136] Afterward, the counter n of the present simulation time step is incremented by one.


[0137] The covariance matrix C(n) of dimension (d×d) is subsequently determined according to equation (3.6). In the case (n>d), only the bottom right (d×d)-sized window of the covariance matrix C(n) is determined. The variables i, j of equation (3.6) assume the values i,j=(n−d+1), . . . , n in such a case. In the case (n=d), the entire covariance matrix C(n) is determined for the values i,j=1, . . . , n.


[0138] During the determination of the covariance matrix C(n), recourse is had to the values of the ((d−1)×(d−1))-sized window of the covariance matrix C(n−1) calculated during the previous iteration of the method steps, which window is disposed at the bottom right in the covariance matrix C(n−1). The values of such a window form the values disposed in the top left ((d−1)×(d−1))-sized window in the covariance matrix C(n) that is to be newly determined. Accordingly, in the recalculation, only the values of the last row and of the last column of the covariance matrix C(n) are explicitly recalculated.
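A sketch of this window shift in Python with NumPy (illustrative names, not from the patent); only the last row and column are passed in as freshly computed values:

```python
import numpy as np

def shift_covariance(C_prev, new_col, new_diag):
    """Shift the (d x d) covariance window by one time step: the bottom right
    (d-1)x(d-1) block of the old window becomes the top left block of the new one,
    so only the last row/column has to be evaluated from formula (3.6)."""
    C = np.empty_like(C_prev)
    C[:-1, :-1] = C_prev[1:, 1:]          # values reused from the previous window
    C[:-1, -1] = C[-1, :-1] = new_col     # freshly computed covariances
    C[-1, -1] = new_diag
    return C
```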


[0139] In the next step of the method according to the invention, the inverse B(n) of the covariance matrix C(n) is determined.


[0140] This is calculated explicitly for the case (n≦d), for example, by a Cholesky decomposition. For the case (n>d), the inverse B(n) is determined using Schur complement techniques to increase the efficiency. In such a case, recourse is had to the variables C22(n−1), C22−1(n−1), c12T(n−1), B22(n−1), and b12(n−1) determined in the last iteration of the method steps.


[0141] In the next two method steps, the auxiliary variables C22(n) and c12T(n) are determined from the covariance matrix C(n) using equation (3.11) and the auxiliary variables B22(n) and b12(n) are determined from the inverse B(n) using equation (3.12).


[0142] The inverted submatrix C22−1(n) is determined by equation (3.15) using the auxiliary variables c12T(n), B22(n), and b12(n). This inverted submatrix C22−1(n) is required in the respective next iteration of the method steps for calculating the inverted covariance matrix B(n+1).


[0143] Next, the variable σ is calculated from the formula


σ=sqrt(1/e(n,n))


[0144] where sqrt designates the “square root” function and where e(n,n) designates that element of the inverted covariance matrix B(n) that is indexed by (n,n).


[0145] In addition, a value of a (0,1)-normally-distributed random variable Xk is extracted and the vector xk of the normally distributed random numbers is, thereby, supplemented. The extracted random number has the expected value 0 and the variance 1. This step is carried out for each noise source to be simulated.


[0146] Furthermore, a variable μk is formed. It is formed, for (n≦d), from the first (n−1) components of the n-th row of the inverted covariance matrix B(n) and from the sequence of (n−1) 1/f-distributed random numbers that were calculated for the preceding (n−1) simulation time steps.


[0147] For (n>d), μk is formed from the first (d−1) components of the n-th row of the inverted covariance matrix B(n) and from the sequence of the last (d−1) 1/f-distributed random numbers that were calculated for the preceding (n−1) simulation time steps.


[0148] For such a purpose, the procedure is in accordance with formula (3.9). This step is carried out separately, in the case of a plurality of noise sources k present, for each noise source k to be simulated.


[0149] Finally, that element of the matrix NOISE is calculated whose column index k specifies the noise source to be simulated and whose row index is equal to n. The present simulation time step is designated thereby. The presently calculated element r(k,n) of the matrix NOISE represents a random number that, together with the superjacent (n−1) elements of the same column k of NOISE, forms a vector yk of length n from 1/f-distributed random numbers. Such a vector yk serves for the simulation of one of the noise sources for the first n simulation time steps.


[0150] Each element yk of the n-th row of NOISE is, then, determined, based upon equations (3.7)-(3.9), from the last random number xk of the vector x and the variables μk and σ, to be precise, according to the following specification:




yk = xk·σ + μk.



[0151]
FIGS. 3A to 3F and 4A to 4C show the implementation of the method according to the invention based upon a concrete calculation example.


[0152] In the calculation example, a sequence of random numbers that are based on random numbers of a 1/f noise is generated. Such a sequence of random numbers is stored in a vector y. The generation of these random numbers is, subsequently, carried out for the time steps t0=0.000, t1=1.000, t2=1.500, t3=2.000, t4=2.750, t5=3.000 and t6=4.000. In this case, for the purpose of simple representation, after precise calculation, the calculated values are cut off after the third decimal place. For easier assignment of the vectors and matrices used, the index of the time step for which this variable was respectively calculated is specified in brackets in each case. Thus, C(5) specifies the covariance matrix of the time step [t4, t5]=[2.750, 3.000], which relates to the points in time t0, . . . , t5.


[0153] In the calculation example, the value of the spectral value β is always assumed to be 0.5. The value of the intensity constant const is arbitrarily assumed to be 1.0, and d=5 was used as the window size.


[0154]
FIGS. 3A to 3F show a calculation example for a simulation time step [t4, t5]=[2.750, 3.000].


[0155] The 1/f-distributed random numbers of the points in time t1, . . . , t4 are assumed to be known in the following exemplary embodiment.


[0156]
FIG. 3A shows a covariance matrix C(5) of dimension (n×n)=(5×5) that is required for generating a further random number. The covariance matrix C(5) is determined according to equation (3.6).


[0157] By way of example, this is carried out on the element C(5,4), that is to say C(i,j) where i=5 and j=4.


[0158] Using equation (3.6), C(5,4) results as:
C(5,4) = 1.0·(−|t4−t5|^(0.5+1) + |t3−t5|^(0.5+1) + |t4−t4|^(0.5+1) − |t3−t4|^(0.5+1))
= −|2.750−3.000|^1.5 + |2.000−3.000|^1.5 + |2.750−2.750|^1.5 − |2.000−2.750|^1.5
= −0.125 + 1 + 0 − 0.649 = 0.225
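This value can be reproduced with a few lines of Python (illustrative variable names):

```python
# numeric check of C(5,4) with beta = 0.5, const = 1.0
t3, t4, t5 = 2.000, 2.750, 3.000
c54 = (-abs(t4 - t5)**1.5 + abs(t3 - t5)**1.5
       + abs(t4 - t4)**1.5 - abs(t3 - t4)**1.5)
print(round(c54, 3))   # prints 0.225
```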


[0159] In the next simulation step, the running variable n exceeds the window size d and, thus, the dimension of the covariance matrix C(6). Accordingly, in FIGS. 3B, 3C, 3E, and 3F, auxiliary parameters are determined that are required for the determination of the inverted covariance matrix B(6) of the next simulation time step n=6.


[0160]
FIG. 3B shows a vector c12T(5) determined from the covariance matrix C(5). The vector c12T(5) includes the second, third, fourth, and fifth elements of the first row of the covariance matrix C(5) and results from equation (3.11).


[0161]
FIG. 3C shows a submatrix C22(5) determined from the covariance matrix C(5). The submatrix C22(5) contains the elements of the ((d−1)×(d−1))=(4×4)-sized window, which is disposed at the bottom right in the covariance matrix C(5). The submatrix C22(5) results from the covariance matrix C(5) using equation (3.11).


[0162]
FIG. 3D shows an inverted covariance matrix C−1(5)=B(5) with respect to the covariance matrix C(5), which is designated exclusively as B(5) hereinafter. A check using equation (3.13) shows that matrix multiplication of the covariance matrix C(5) by the inverted covariance matrix B(5) produces the unit matrix Id.


[0163] In the present case, the inverted covariance matrix B(5) was calculated explicitly from the covariance matrix C(5) using a Cholesky decomposition (not specifically illustrated here).


[0164]
FIG. 3E shows a vector b12(5) determined from the inverted covariance matrix B(5). The vector b12(5) contains the second, third, fourth, and fifth elements of the first column of the inverted covariance matrix B(5). The vector b12(5) results from the inverted covariance matrix B(5) using equation (3.12).


[0165]
FIG. 3F shows a submatrix B22(5) of the inverted covariance matrix B(5) of FIG. 3D. This submatrix B22(5) includes those elements of the inverted covariance matrix B(5) that are contained in the bottom right window of dimension ((d−1)×(d−1))=(4×4) of the inverted covariance matrix B(5). The submatrix B22(5) results from the inverted covariance matrix B(5) using equation (3.12).


[0166] The variable σ can, then, be calculated from the element disposed at the bottom right in the inverted covariance matrix B(5), namely, the value B(5)5;5=4.787.


[0167] In the present exemplary embodiment, σ results as:


σ = sqrt(1/e(n,n)) = sqrt(1/4.787) = 0.457


[0168] Using the vector:






b(5)5;1, . . . ,4 = (−0.094, −0.109, −0.093, −0.749),



[0169] which contains the first, second, third, and fourth elements of the fifth row of the inverted covariance matrix B(5), the value μ can be calculated. Formula (3.9) is used in this case.


[0170] Using the variables σ and μ thus determined, and also using randomly extracted, normally distributed random numbers x(n), it is possible to calculate y(5) according to the formula




y(n) = x(n)·σ + μ.



[0171] In such a case, y(5) represents the fifth element of the vector y that has a sequence of random numbers that are approximated to random numbers of a 1/f noise.
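The same step as a small Python function (NumPy assumed; the previously generated values y1, . . . , y4 are not listed in the text and are therefore passed in as an argument):

```python
import numpy as np

def draw_y5(y_prev, rng):
    """Update for the time step [2.750, 3.000] using the numbers from FIG. 3D."""
    b_row = np.array([-0.094, -0.109, -0.093, -0.749])  # first four entries of row 5 of B(5)
    b_nn = 4.787                                         # element B(5)(5,5)
    sigma = np.sqrt(1.0 / b_nn)                          # = 0.457
    mu = -(np.asarray(y_prev) @ b_row) / b_nn            # formula (3.9)
    return rng.standard_normal() * sigma + mu

# y5 = draw_y5([y1, y2, y3, y4], np.random.default_rng())
```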


[0172]
FIGS. 4A to 4C show a calculation example for a sixth simulation time step [t5, t6]=[3.000, 4.000]. The value of the running variable n is always equal to 6 for the sixth simulation time step.


[0173]
FIG. 4A shows an inverted submatrix C22−1(5) with respect to the submatrix C22(5) of FIG. 3C. The inverted submatrix C22−1(5) is not explicitly totally recalculated, but, rather, is produced using equation (3.16).


[0174] The auxiliary variables b12(5), c12T(5), and also B22(5) that are required in equation (3.16) for calculating the inverted submatrix C22−1(5) have been determined in the preceding steps. The variable Id−1 in equation (3.16) corresponds to the unit matrix for the dimension (d−1)=4.


[0175]
FIG. 4B shows a covariance matrix C*(6) of dimension (d×d)=(5×5). C*(6) is produced as the bottom right window (2, . . . , 6; 2, . . . , 6) of the covariance matrix C(6) of dimension (6×6).


[0176] For the calculation of the covariance matrix C*(6), recourse is had to the submatrix C22(5) that is shown in FIG. 3C and forms the top left window of dimension ((d−1)×(d−1))=(4×4) in the covariance matrix C*(6). Accordingly, only those elements of the covariance matrix C*(6) shown in FIG. 4B that are disposed in the fifth row or in the fifth column are recalculated.


[0177] By way of example, such a calculation is carried out using equation (3.6) for the element C(3,6):
C(3,6) = 1.0·(−|t6−t3|^(0.5+1) + |t5−t3|^(0.5+1) + |t6−t2|^(0.5+1) − |t5−t2|^(0.5+1))
= −|4.000−2.000|^1.5 + |3.000−2.000|^1.5 + |4.000−1.500|^1.5 − |3.000−1.500|^1.5
= −2.828 + 1 + 3.952 − 1.837 = 0.287


[0178]
FIG. 4C shows the inverted covariance matrix B*(6) of dimension d=5 with respect to the covariance matrix C*(6) of FIG. 4B.


[0179] A check using equation (3.13) reveals that the covariance matrix C*(6) shown in FIG. 4B multiplied by the inverted covariance matrix B*(6) shown in FIG. 4C produces the unit matrix Id.


[0180] The inverted covariance matrix B*(6) is not explicitly recalculated element by element, but, rather, is produced by Schur complement techniques using the auxiliary variables shown in FIGS. 3A to 3F and those shown in FIG. 4A.


[0181] Using the bottom right value of the inverted covariance matrix B*(6), namely, the value B*(6)5;5=0.630, σ results as 1.259. Using the vector






B*(6)5;1, . . . ,4 = (−0.085, −0.078, −0.139, −0.507)



[0182] the value μ can be calculated.


[0183] Using the variables σ and μ thus determined and also using extracted, normally distributed random numbers x(n), a further random number y(6) is, then, calculated, which extends the vector y(n) of the random numbers approximated to random numbers of a 1/f noise.


[0184] The vector y(n) can be extended as desired by the method according to the invention. During such a calculation, the covariance matrix C(n), and the inverted covariance matrix B(n) thereof, required for determining the random numbers are limited in terms of the dimension n to the predetermined window size d. Such a measure keeps the computation complexity small enough for each simulation time step. Although the random numbers thus generated do not exactly match the random numbers of a 1/f noise, they, nevertheless, represent a very good approximation to them.


Claims
  • 1. A method for a numerical simulation of a technical system subject to a 1/f noise and having at least one input channel, which comprises: determining a desired spectral value β; determining an intensity constant const; determining a number of random numbers to be generated; defining a start value for a running variable n; defining a window size d; repeating the following steps in a loop until a desired number of elements y(n) of a vector y of length n has been calculated from 1/f-distributed random numbers: a) incrementing a present value of the running variable n by 1; b) defining a simulation time step [tn−1; tn] and:  when n<d: c) determining elements Cij of a covariance matrix C(n) of dimension (n×n) according to the formula:Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=1, . . . , n; d) determining an inverted covariance matrix C−1(n)=B(n) of dimension (n×n); g) determining a variable σ according to the formula:σ=sqrt(1/e(n,n)), where: sqrt is a “square root” function; and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n); h) determining a (0,1)-normally-distributed random number forming an n-th component of a vector x of length n; i) forming a variable μ from n−1 first components of an n-th row of the inverted covariance matrix B(n) and from n−1 elements of the vector y calculated for a preceding (n−1)-th simulation time step according to the formula:μ:=−y(n−1)T·B•,n/Bn,n where: y(n−1) designates n−1 first components of the vector y; B•,n designates n−1 first components of an n-th row of the inverted covariance matrix B(n), and Bn,n designates an element of the inverted covariance matrix B(n) indexed by (n,n); and k) calculating an element y(n) of the vector y of length n from 1/f-distributed random numbers according to the formula:Y(n)=x(n)*σ+μ,where values of the vector y of the 1/f-distributed random numbers are applied to the input channels of the technical system; and  when n≧d: e) determining elements Cij of a covariance matrix C(n) of dimension (d×d) according to the formula:Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=(n−d+1), . . . , n; f) determining an inverted covariance matrix C−1(n)=B(n) of dimension (d×d); g) determining a variable σ according to the formula:σ=sqrt(1/e(n,n)), where: sqrt is a “square root” function; and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n); h) determining a (0,1)-normally-distributed random number forming an n-th component of a vector x of length n; j) forming a variable μ from d−1 first components of an n-th row of the inverted covariance matrix B(n) and from d−1 last elements of the vector y calculated for a preceding (n−1)-th simulation time step according to the formula:μ:=−y(n−1)T·B•,n/Bn,n where: y(n−1) designates the last (d−1) components of the vector y; B•,n designates d−1 first components of an n-th row of the inverted covariance matrix B(n); and Bn,n designates an element of the inverted covariance matrix B(n) indexed by (n,n); and k) calculating an element y(n) of the vector y of length n from 1/f-distributed random numbers according to the formula:Y(n)=x(n)*σ+μ,where values of the vector y of the 1/f-distributed random numbers are applied to the input channels of the technical system.
  • 2. A method for generating at least one sequence of random numbers approximated to sequences of numbers of a 1/f noise for a numerical simulation of a technical system subject to a 1/f noise on a computer system, which comprises: determining a desired spectral value β; determining an intensity constant const; determining a number of random numbers to be generated; defining a start value for a running variable n; defining a window size d; repeating the following steps in a loop until a desired number of elements y(n) of a vector y of length n has been calculated from 1/f-distributed random numbers: a) incrementing a present value of the running variable n by 1; b) defining a simulation time step [tn−1; tn] and:  when n<d: c) determining elements Cij of a covariance matrix C(n) of dimension (n×n) according to the formula:Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=1, . . . , n; d) determining an inverted covariance matrix C−1(n)=B(n) of dimension (n×n); g) determining a variable σ according to the formula:σ=sqrt(1/e(n,n)), where: sqrt is a “square root” function; and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n); h) determining a (0,1)-normally-distributed random number forming an n-th component of a vector x of length n; i) forming a variable μ from n−1 first components of an n-th row of the inverted covariance matrix B(n) and from n−1 elements of the vector y calculated for a preceding (n−1)-th simulation time step according to the formula:μ:=−y(n−1)T·B•,n/Bn,n where: y(n−1) designates n−1 first components of the vector y; B•,n designates n−1 first components of an n-th row of the inverted covariance matrix B(n), and Bn,n designates an element of the inverted covariance matrix B(n) indexed by (n,n); and k) calculating an element y(n) of the vector y of length n from 1/f-distributed random numbers according to the formula:Y(n)=x(n)*σ+μ, where values of the vector y of the 1/f-distributed random numbers are applied to the input channels of the technical system; and  when n≧d: e) determining elements Cij of a covariance matrix C(n) of dimension (d×d) according to the formula:Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=(n−d+1), . . . , n; f) determining an inverted covariance matrix C−1(n)=B(n) of dimension (d×d); g) determining a variable σ according to the formula:σ=sqrt(1/e(n,n)), where: sqrt is a “square root” function; and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n); h) determining a (0,1)-normally-distributed random number forming an n-th component of a vector x of length n; j) forming a variable μ from d−1 first components of an n-th row of the inverted covariance matrix B(n) and from d−1 last elements of the vector y calculated for a preceding (n−1)-th simulation time step according to the formula:μ:=−y(n−1)T·B•,n/Bn,n where: y(n−1) designates the last (d−1) components of the vector y; B•,n designates d−1 first components of an n-th row of the inverted covariance matrix B(n); and Bn,n designates an element of the inverted covariance matrix B(n) indexed by (n,n); and k) calculating an element y(n) of the vector y of length n from 1/f-distributed random numbers according to the formula:Y(n)=x(n)*σ+μ, where values of the vector y of the 1/f-distributed random numbers are applied to the input channels of the technical system.
  • 3. The method according to claim 1, which comprises carrying out step f) by: f1) determining a vector c12T(n) and a matrix C22(n) from the covariance matrix C(n) according to: C(n)=(c11(n) c12T(n); c12(n) C22(n)); f2) determining an inverted covariance matrix C−1(n)=B(n) of dimension (d×d) utilizing an inverted matrix C22−1(n−1) by Schur complement techniques; f3) determining a vector b12(n) and a matrix B22(n) from the inverted covariance matrix B(n) according to: B(n)=(b11(n) b12T(n); b12(n) B22(n)); and f4) determining an inverted matrix C22−1(n) according to: C22−1(n)=(Id−1−b12(n)c12T(n))−1·B22(n), where Id−1 is a unit matrix of dimension ((d−1)×(d−1)).
  • 4. The method according to claim 2, which comprises carrying out step f) by:
    f1) determining a vector c12T(n) and a matrix C22(n) from the covariance matrix C(n) according to:
        C(n) = ( c11(n)   c12(n)
                 c12T(n)  C22(n) );
    f2) determining an inverted covariance matrix C−1(n)=B(n) of dimension (d×d) utilizing an inverted matrix C22−1(n−1) by Schur complement techniques;
    f3) determining a vector b12(n) and a matrix B22(n) from the inverted covariance matrix B(n) according to:
        B(n) = ( b11(n)   b12(n)
                 b12T(n)  B22(n) ); and
    f4) determining an inverted matrix C22−1(n) according to:
        C22−1(n) = (Id−1 − b12(n)·c12T(n))−1·B22(n),
    where Id−1 is a unit matrix of dimension ((d−1)×(d−1)).
  • 5. A method for a numerical simulation of a technical system subject to a 1/f noise and having at least one input channel, which comprises:
    determining a desired spectral value β;
    determining an intensity constant const;
    determining a number of random numbers to be generated;
    defining a start value for a running variable n;
    defining a window size d;
    calculating q sequences of random numbers of a 1/f noise simultaneously by:
    a) incrementing a present value of the running variable n by 1;
    b) defining a simulation time step [tn−1; tn] and:
    when n<d:
    c) determining elements Cij of a covariance matrix C(n) of dimension (n×n) according to the formula: Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=1, . . . , n;
    d) determining an inverted covariance matrix C−1(n)=B(n) of dimension (n×n);
    g) determining a variable σ according to the formula: σ=sqrt(1/e(n,n)), where sqrt is a “square root” function and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n);
    h′) determining q (0,1)-normally-distributed random numbers xk,n forming a respective last component of vectors xk of length n, where k=1, . . . , q;
    i′) forming q variables μk according to the formula: μk:=−y(n−1),kT·B•,n/Bn,n, where y(n−1),k is n−1 first components of a vector yk calculated for a preceding simulation time step, where k=1, . . . , q;
    k′) calculating q elements yk,n forming a respective n-th component of the vector yk of length n from 1/f-distributed random numbers according to the formula: yk,n=xk,n·σ+μk, where k=1, . . . , q; and
    when n≧d:
    e) determining elements Cij of a covariance matrix C(n) of dimension (d×d) according to the formula: Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=(n−d+1), . . . , n;
    f) determining an inverted covariance matrix C−1(n)=B(n) of dimension (d×d);
    g) determining a variable σ according to the formula: σ=sqrt(1/e(n,n)), where sqrt is a “square root” function and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n);
    h′) determining q (0,1)-normally-distributed random numbers xk,n forming a respective last component of vectors xk of length n, where k=1, . . . , q;
    j′) forming q variables μk according to the formula: μk:=−y(n−1),kT·B•,n/Bn,n, where y(n−1),k is d−1 last components of a vector yk calculated for a preceding simulation time step, where k=1, . . . , q;
    k′) calculating q elements yk,n forming a respective n-th component of the vector yk of length n from 1/f-distributed random numbers according to the formula: yk,n=xk,n·σ+μk, where k=1, . . . , q.
  • 6. A method for generating at least one sequence of random numbers approximated to sequences of numbers of a 1/f noise for a numerical simulation of a technical system subject to a 1/f noise on a computer system, which comprises:
    determining a desired spectral value β;
    determining an intensity constant const;
    determining a number of random numbers to be generated;
    defining a start value for a running variable n;
    defining a window size d;
    calculating q sequences of random numbers of a 1/f noise simultaneously by:
    a) incrementing a present value of the running variable n by 1;
    b) defining a simulation time step [tn−1; tn] and:
    when n<d:
    c) determining elements Cij of a covariance matrix C(n) of dimension (n×n) according to the formula: Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=1, . . . , n;
    d) determining an inverted covariance matrix C−1(n)=B(n) of dimension (n×n);
    g) determining a variable σ according to the formula: σ=sqrt(1/e(n,n)), where sqrt is a “square root” function and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n);
    h′) determining q (0,1)-normally-distributed random numbers xk,n forming a respective last component of vectors xk of length n, where k=1, . . . , q;
    i′) forming q variables μk according to the formula: μk:=−y(n−1),kT·B•,n/Bn,n, where y(n−1),k is n−1 first components of a vector yk calculated for a preceding simulation time step, where k=1, . . . , q;
    k′) calculating q elements yk,n forming a respective n-th component of the vector yk of length n from 1/f-distributed random numbers according to the formula: yk,n=xk,n·σ+μk, where k=1, . . . , q; and
    when n≧d:
    e) determining elements Cij of a covariance matrix C(n) of dimension (d×d) according to the formula: Cij:=const·(−|tj−ti|β+1+|tj−1−ti|β+1+|tj−ti−1|β+1−|tj−1−ti−1|β+1), where i,j=(n−d+1), . . . , n;
    f) determining an inverted covariance matrix C−1(n)=B(n) of dimension (d×d);
    g) determining a variable σ according to the formula: σ=sqrt(1/e(n,n)), where sqrt is a “square root” function and e(n,n) designates an element of the inverted covariance matrix B(n) indexed by (n,n);
    h′) determining q (0,1)-normally-distributed random numbers xk,n forming a respective last component of vectors xk of length n, where k=1, . . . , q;
    j′) forming q variables μk according to the formula: μk:=−y(n−1),kT·B•,n/Bn,n, where y(n−1),k is d−1 last components of a vector yk calculated for a preceding simulation time step, where k=1, . . . , q;
    k′) calculating q elements yk,n forming a respective n-th component of the vector yk of length n from 1/f-distributed random numbers according to the formula: yk,n=xk,n·σ+μk, where k=1, . . . , q.
  • 7. The method according to claim 1, wherein the technical system is an electronic component.
  • 8. The method according to claim 7, wherein the electronic component is a component selected from the group consisting of a pn diode, a MOS field-effect transistor, and a phase-locked loop.
  • 9. The method according to claim 2, wherein the technical system is an electronic component.
  • 10. The method according to claim 9, wherein the electronic component is a component selected from the group consisting of a pn diode, a MOS field-effect transistor, and a phase-locked loop.
  • 11. The method according to claim 5, wherein the technical system is an electronic component.
  • 12. The method according to claim 11, wherein the electronic component is a component selected from the group consisting of a pn diode, a MOS field-effect transistor, and a phase-locked loop.
  • 13. The method according to claim 6, wherein the technical system is an electronic component.
  • 14. The method according to claim 13, wherein the electronic component is a component selected from the group consisting of a pn diode, a MOS field-effect transistor, and a phase-locked loop.
  • 15. A computer-readable storage medium, comprising: a storage storing a computer program executing the steps of the method of claim 1.
  • 16. A computer-readable storage medium, comprising: a storage storing a computer program executing the steps of the method of claim 2.
  • 17. A computer-readable storage medium, comprising: a storage storing a computer program executing the steps of the method of claim 5.
  • 18. A computer-readable storage medium, comprising: a storage storing a computer program executing the steps of the method of claim 6.
  • 19. A computer memory, comprising: a memory area storing a program executing the steps of the method of claim 1.
  • 20. A computer memory, comprising: a memory area storing a program executing the steps of the method of claim 2.
  • 21. A computer memory, comprising: a memory area storing a program executing the steps of the method of claim 5.
  • 22. A computer memory, comprising: a memory area storing a program executing the steps of the method of claim 6.
  • 23. A memory, comprising: a random access memory area storing a program executing the steps of the method of claim 1.
  • 24. A memory, comprising: a random access memory area storing a program executing the steps of the method of claim 2.
  • 25. A memory, comprising: a random access memory area storing a program executing the steps of the method of claim 5.
  • 26. A memory, comprising: a random access memory area storing a program executing the steps of the method of claim 6.
  • 27. A computer system, comprising: a processor; a receiver for receiving an electrical carrier signal, said receiver connected to said processor; and a memory connected to said processor and storing a computer program received as an electrical carrier signal through said receiver, said program executing the steps of the method of claim 1.
  • 28. A computer system, comprising: a processor; a receiver for receiving an electrical carrier signal, said receiver connected to said processor; and a memory connected to said processor and storing a computer program received as an electrical carrier signal through said receiver, said program executing the steps of the method of claim 2.
  • 29. A computer system, comprising: a processor; a receiver for receiving an electrical carrier signal, said receiver connected to said processor; and a memory connected to said processor and storing a computer program received as an electrical carrier signal through said receiver, said program executing the steps of the method of claim 5.
  • 30. A computer system, comprising: a processor; a receiver for receiving an electrical carrier signal, said receiver connected to said processor; and a memory connected to said processor and storing a computer program received as an electrical carrier signal through said receiver, said program executing the steps of the method of claim 6.
  • 31. A data carrier, comprising: a data area storing a program executing the steps of the method of claim 1.
  • 32. A data carrier, comprising: a data area storing a program executing the steps of the method of claim 2.
  • 33. A data carrier, comprising: a data area storing a program executing the steps of the method of claim 5.
  • 34. A data carrier, comprising: a data area storing a program executing the steps of the method of claim 6.
  • 35. A method for simulating a 1/f noise, which comprises: connecting a computer to an electronic data network; downloading a computer program for executing the steps of the method of claim 1; and executing the computer program on the computer.
  • 36. The method according to claim 35, wherein the electronic data network is the Internet.
  • 37. A method for simulating a 1/f noise, which comprises: connecting a computer to an electronic data network; downloading a computer program for executing the steps of the method of claim 2; and executing the computer program on the computer.
  • 38. The method according to claim 37, wherein the electronic data network is the Internet.
  • 39. A method for simulating a 1/f noise, which comprises: connecting a computer to an electronic data network; downloading a computer program for executing the steps of the method of claim 5; and executing the computer program on the computer.
  • 40. The method according to claim 39, wherein the electronic data network is the Internet.
  • 41. A method for simulating a 1/f noise, which comprises: connecting a computer to an electronic data network; downloading a computer program for executing the steps of the method of claim 6; and executing the computer program on the computer.
  • 42. The method according to claim 41, wherein the electronic data network is the Internet.
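The windowed generation scheme recited in claims 2 and 6 can be illustrated by a short Python sketch. The sketch below is not part of the application: the function name generate_1f_sequence, the use of numpy.linalg.inv for steps d) and f) in place of the Schur complement update of claims 3 and 4, and the use of NumPy's default generator for the (0,1)-normally-distributed numbers of step h) are assumptions made only for illustration.

    import numpy as np

    def generate_1f_sequence(t, beta, const, d, rng=None):
        # Sketch of the windowed scheme: t is the array of simulation time
        # points t0 < t1 < ... < tN, beta the desired spectral value, const
        # the intensity constant, d the window size.
        t = np.asarray(t, dtype=float)
        rng = np.random.default_rng() if rng is None else rng
        N = len(t) - 1                      # number of elements y(1), ..., y(N)
        y = np.zeros(N)
        p = beta + 1.0

        def cov(idx):
            # C_ij = const * (-|t_j - t_i|^p + |t_(j-1) - t_i|^p
            #                 + |t_j - t_(i-1)|^p - |t_(j-1) - t_(i-1)|^p)
            ti, tj = np.meshgrid(t[idx], t[idx], indexing="ij")
            tim1, tjm1 = np.meshgrid(t[idx - 1], t[idx - 1], indexing="ij")
            return const * (-np.abs(tj - ti) ** p + np.abs(tjm1 - ti) ** p
                            + np.abs(tj - tim1) ** p - np.abs(tjm1 - tim1) ** p)

        for n in range(1, N + 1):
            # steps c)/e): covariance matrix over i, j = 1..n or n-d+1..n
            idx = np.arange(1, n + 1) if n < d else np.arange(n - d + 1, n + 1)
            B = np.linalg.inv(cov(idx))     # steps d)/f): inverted covariance matrix
            sigma = np.sqrt(1.0 / B[-1, -1])            # step g)
            x_n = rng.standard_normal()                 # step h)
            y_prev = y[idx[:-1] - 1]        # elements of y already inside the window
            mu = -(y_prev @ B[:-1, -1]) / B[-1, -1]     # steps i)/j)
            y[n - 1] = x_n * sigma + mu                 # step k)
        return y

A call such as y = generate_1f_sequence(np.linspace(0.0, 1.0e-3, 1001), beta=0.8, const=1.0, d=64) then yields a sequence of 1000 approximated 1/f random numbers whose per-step cost is bounded by the window size d rather than by the total sequence length.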
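Claims 3 and 4 replace the full inversion of step f) by an incremental update. The following sketch shows one way such an update can be written using standard block-inverse identities, under the assumption that the leading (d−1)×(d−1) block of the current window matrix C(n) coincides with the C22 block of the preceding step; the function name and its interface are illustrative choices, not taken from the application.

    import numpy as np

    def invert_window_matrix(C, C22_inv_prev):
        # C is the current (d x d) window matrix C(n); C22_inv_prev is the
        # inverse C22^-1(n-1) of its leading (d-1) x (d-1) block, carried over
        # from the previous simulation time step.
        A = C22_inv_prev
        c = C[:-1, -1]                      # covariances of the new point with the old window
        gamma = C[-1, -1]                   # variance term of the new point

        # f2) B(n) = C(n)^-1 via the Schur complement of the known block:
        Ac = A @ c
        s = gamma - c @ Ac                  # scalar Schur complement
        B = np.empty_like(C)
        B[:-1, :-1] = A + np.outer(Ac, Ac) / s
        B[:-1, -1] = -Ac / s
        B[-1, :-1] = B[:-1, -1]
        B[-1, -1] = 1.0 / s

        # f1)/f3)/f4): recover C22^-1(n) from the partition with c11(n) in the
        # top-left corner; (I - b12 c12^T) is a rank-one perturbation of the
        # identity, so its inverse is available in closed form (Sherman-Morrison).
        b12 = B[1:, 0]
        c12 = C[0, 1:]
        B22 = B[1:, 1:]
        C22_inv = B22 + np.outer(b12, c12 @ B22) / (1.0 - c12 @ b12)
        return B, C22_inv

In this form both parts use only matrix-vector and outer products, so one execution of step f) costs on the order of d² operations instead of the d³ of a full inversion, in line with the stated aim of limiting the computation time.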
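Claims 5 and 6 exploit the fact that the covariance matrix, its inverse and the variable σ depend only on the time grid, so q sequences can share them; only the q normal draws xk,n and the q conditional means μk differ. A minimal sketch of this parallel variant, under the same assumptions as the first sketch (full inversion instead of the Schur complement update, NumPy's default generator), might look as follows.

    import numpy as np

    def generate_q_1f_sequences(t, q, beta, const, d, rng=None):
        # Parallel variant: q sequences share C(n), B(n) and sigma; only the
        # normal draws x_{k,n} and the conditional means mu_k differ.
        t = np.asarray(t, dtype=float)
        rng = np.random.default_rng() if rng is None else rng
        N = len(t) - 1
        Y = np.zeros((q, N))                # row k holds the k-th sequence y_k
        p = beta + 1.0

        def cov(idx):
            ti, tj = np.meshgrid(t[idx], t[idx], indexing="ij")
            tim1, tjm1 = np.meshgrid(t[idx - 1], t[idx - 1], indexing="ij")
            return const * (-np.abs(tj - ti) ** p + np.abs(tjm1 - ti) ** p
                            + np.abs(tj - tim1) ** p - np.abs(tjm1 - tim1) ** p)

        for n in range(1, N + 1):
            idx = np.arange(1, n + 1) if n < d else np.arange(n - d + 1, n + 1)
            B = np.linalg.inv(cov(idx))                 # shared by all q sequences
            sigma = np.sqrt(1.0 / B[-1, -1])            # step g), computed once
            x = rng.standard_normal(q)                  # step h'): q independent draws
            Y_prev = Y[:, idx[:-1] - 1]                 # history inside the window
            mu = -(Y_prev @ B[:-1, -1]) / B[-1, -1]     # steps i')/j')
            Y[:, n - 1] = x * sigma + mu                # step k')
        return Y

Sharing the matrix work across the q sequences amortizes its cost over all 1/f noise sources that drive the same transient simulation.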
Priority Claims (1)
Number Date Country Kind
100 54 200.3 Nov 2001 DE