Information
Patent Grant
Patent Number
5,859,787
Date Filed
Thursday, November 9, 1995
Date Issued
Tuesday, January 12, 1999
Examiners
- Elmore; Reba I.
- Rao; Sheela S.
Agents
- Skjerven, Morrill, MacPherson, Franklin & Friel LLP
- Kwok; Edward C.
- Matsubayashi; Hugh H.
US Classifications (Field of Search)
- 364/718
- 364/721
- 364/724.1
- 341/61
Abstract
A method for resampling includes convolving a given set of samples with the impulse response function of a low-pass filter. In this method, values of the impulse response required for the convolution calculation are computed at the time of resampling from a segmented polynomial approximating the impulse response.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to digital signal processing. In particular, the present invention relates to arbitrary-ratio signal resampling techniques in digital signal processing.
2. Discussion of the Related Art
Arbitrary-ratio signal resampling refers to the process of computing sample values of a signal, as if it were sampled at a given rate, from values of that signal originally sampled at a different rate. The original signal is assumed to be bandlimited to one half of the original sampling rate, thereby permitting, per the well-known Nyquist sampling theorem, unique recovery of the signal (i.e. the avoidance of aliasing) for all time from the original samples.
Arbitrary-ratio signal resampling techniques can be applied, for example, in an audio processing system in which an input stream is received at a constant sampling rate and an output stream must be generated in real time at a different constant sampling rate. In one application, a pre-recorded digital audio stream originally sampled at a given sampling rate is played back at a different sampling rate dictated by the play-back system. In another application, to mix an audio signal stored in one medium (e.g. digital audio tapes, which can be sampled at 32, 44.1 or 48 kHz) with an audio signal from a different source (e.g. a compact disk, which is sampled at 44.1 kHz), arbitrary-ratio resampling techniques must be applied.
Arbitrary-ratio signal resampling techniques can also be applied to create a constant rate output stream from a sound recording being played back at a specific different sampling rate to create such special effects as Doppler shifting and pitch shifting. Pitch shifting is a technique used in sampling wave table music synthesis. Doppler shifting is a technique used in creating such sound effects as a moving sound source.
Further, arbitrary-ratio resampling techniques can also be applied to create a constant sampling rate output stream from a source whose sampling rate is either not precisely known in advance or may drift. In such an application, the resampling ratio must be adjusted in real time to keep the input and output streams synchronized. This synchronization is called asynchronous resampling, and is used where digital audio sources are produced using independent clocks, as often arises in digital audio mixing consoles and digital stereos. Manufacturing process variations, temperature differences, and power supply variations can all cause identical clock generation circuits to oscillate at slightly different frequencies. In some situations, it may not be feasible to use a single master clock as a time base for an entire digital audio network, such as when one digital source is a transmitting satellite and the receiver is on the ground.
One view of the resampling process is provided by the so-called "analog interpretation" as discussed in the text of Crochiere and Rabiner (Multirate Digital Signal Processing, Prentice-Hall Inc., Englewood Cliffs, N.J., 1983). This view is depicted in FIG. 1.
Referring to FIG. 1, an analog signal x(t), assumed bandlimited to a frequency range of 0.5/T.sub.s, is sampled at intervals of T.sub.s to result in a discrete time series x[n], where x[n] is the sampled value of x(t) at t=nT.sub.s, for integer values of n. The discrete time series can be represented by continuous-time signal 1, which can be expressed as: ##EQU1##
As shown in FIG. 1, signal x(t) passes through an analog low-pass filter 2, which is defined by an impulse response function h(t). The output of filter 2 is signal 3, the reconstructed signal, which is equal to the convolution of h(t) and x(t), expressed as ##EQU2## In theory, filter 2 can be provided by an ideal low-pass filter with a cutoff frequency of 0.5/T.sub.s, i.e. a filter having an impulse response function of ##EQU3##
The ideal low-pass filter has a perfect "brick wall" frequency response and would provide ideal signal reconstruction. However, in a practical implementation, in which only a finite number of terms can be computed, x(t) is typically approximated using a windowed sinc function. Such a windowed sinc function can be provided by a Hanning or a Kaiser window, for example. An example of an impulse response function h(t) that is non-zero only over a finite range is illustrated in FIG. 3. In FIG. 3, h(t) is zero outside of the range [-3, 3] (in units of T.sub.s).
Resampling is achieved by sampling signal x(t) according to the resampling ratio r=M/N (M and N being relatively prime integers), which is the ratio of the original sampling rate to the new sampling rate. Referring back to FIG. 1, signal x(t) is shown to be provided to a sampling circuit ("sampler") 4, together with the ratio r (indicated in FIG. 1 by reference numeral 5) to provide an output resampled signal x(t), indicated in FIG. 1 by signal 6, given by: ##EQU4##
If the ratio r is less than 1, aliasing will not occur. In this case h(t) may be chosen to be a windowed sinc function with a scaling factor of 1/T.sub.s, i.e. ##EQU5## where w(t) is the windowing function used. To resample the signal x[n] at the new sampling rate it suffices to store values of h(t) at times t=T.sub.0, T.sub.0 +T.sub.s /N, T.sub.0 +2T.sub.s /N, . . . , T.sub.0 +T, where [T.sub.0, T.sub.0 +T] is the interval outside of which h(t) is zero. Hence, to achieve this resampling, NT/T.sub.s filter coefficients (i.e. values of the filter's impulse response) must be stored.
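For illustration, the following is a minimal numerical sketch of the convolution of equation (2) for a ratio r not greater than 1, with the required values of h(t) evaluated directly; the Hann window, the half-width of 3 (in units of T.sub.s), and the function names are illustrative assumptions, not details taken from this disclosure:

    import numpy as np

    def h(t, half_width=3.0):
        # Hann-windowed sinc impulse response, zero outside (-half_width, half_width).
        t = np.asarray(t, dtype=float)
        w = 0.5 * (1.0 + np.cos(np.pi * t / half_width))        # Hann window
        return np.where(np.abs(t) < half_width, np.sinc(t) * w, 0.0)

    def resample_direct(x, r, num_out, half_width=3.0):
        # Evaluate the reconstructed signal at t = k*r (in units of T_s), per equation (2).
        y = np.empty(num_out)
        for k in range(num_out):
            t = k * r
            n = np.arange(max(int(np.ceil(t - half_width)), 0),
                          min(int(np.floor(t + half_width)), len(x) - 1) + 1)
            y[k] = np.sum(x[n] * h(t - n))
        return y

    # Example: resample a slow sine wave with r = 3/4 (output rate 4/3 the input rate).
    x = np.sin(2 * np.pi * 0.05 * np.arange(200))
    y = resample_direct(x, 0.75, 260)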
For example, if M/N=3/4 and h(t) is the impulse response function of FIG. 3, then values for h(t) would need to be stored for all the original sampling times in the non-zero finite range for h(t) (shown in FIG. 3 as points on the time line marked by a dot) and for the three equally spaced points between these original sampling times (marked in FIG. 3 by an "x" on the time line).
Thus, given a resampling ratio r less than 1, the total number of filter coefficients required to be stored is NT/T.sub.s, where T is the duration ("support") of the finite time range for which h(t) is non-zero (e.g. NT/T.sub.s =24 for the impulse response of FIG. 3). Clearly, the total number of filter coefficients can become impractically large for a large N.
On the other hand, if the resampling ratio is greater than 1, aliasing can be avoided by applying a low pass filter with an impulse response of h'(t)=bh(bt), where h(t) is as defined above and b.ltoreq.N/M. The resulting bandwidth of h'(t) is proportional to b.
For the minimally attenuated case, b=N/M, the support of h'(t) is T'=T/b=MT/N time units long. In this case, the number of sampling points of h'(t) that need to be stored is N.multidot.T'/T.sub.s =MT/T.sub.s, so that the number of stored filter coefficients is proportional to M. In general, the number of samples of h(t) needed is (T/T.sub.s).multidot.max(N,M). Clearly, the total number of coefficients can become impractically large for large M or N.
Storing these filter coefficients for a given resampling ratio is the approach taken by "polyphase" filters. Each group of stored filter coefficients corresponds to one phase filter of a "polyphase" filter. Typically, only one phase is required to provide a resampled output sample at a particular resampling time.
For example, given the impulse response, h(t), of FIG. 3, one required phase filter for resampling at a rate 4/3 the rate of the original sampling would consist of values for h(-9/4), h(-5/4), h(-1/4), h(3/4), h(7/4) and h(11/4), i.e. values for h(t) at time instants 31 of FIG. 3. Application of equation (2) reveals that only this phase filter is required for the calculation of x(3T.sub.s /4) i.e. a sample at the resampling time of t=3T.sub.s /4. On the other hand, the determination of x(3T.sub.s /2) i.e. a sample at the resampling time of t=3T.sub.s /2, according to equation (2), requires another phase filter, namely the one consisting of values for h(-5/2), h(-3/2), h(-1/2), h(1/2), h(3/2) and h(5/2), i.e. values for h(t) at time instants 32 of FIG. 3.
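A short sketch of the phase-table bookkeeping in this example follows; the windowed sinc used for h(t) and the table layout are assumptions for illustration only:

    import numpy as np

    def h(t, hw=3.0):
        t = np.asarray(t, dtype=float)
        return np.where(np.abs(t) < hw, np.sinc(t) * 0.5 * (1.0 + np.cos(np.pi * t / hw)), 0.0)

    N = 4                                   # resampling ratio M/N = 3/4
    # Phase p stores h() at the offsets whose fractional part is p/N; N phases in total.
    phases = [h(np.arange(-3.0 + p / N, 3.0, 1.0)) for p in range(N)]

    # The resampling time t = 3/4 (fractional part 3/4) uses phases[3], i.e. h(-9/4),
    # h(-5/4), h(-1/4), h(3/4), h(7/4), h(11/4); t = 3/2 (fractional part 1/2) uses
    # phases[2], i.e. h(-5/2) ... h(5/2), matching the two phase filters described above.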
Polyphase filters are described at length in Vaidyanathan (Multirate Systems and Filter Banks, Englewood Cliffs, N.J.: Prentice-Hall, 1993) and in Crochiere and Rabiner, referenced above. Depending on the resampling ratio, the basic polyphase filter technique can lead to an impractically large number of stored filter coefficients. Another important shortcoming of the polyphase filter approach arises from the requirement that the resampling ratios implemented in the filter be ratios of integers. Consequently, the polyphase filter cannot accommodate situations where the resampling ratio varies over time.
An article ("Smith") by Smith and Gossett, entitled "A flexible sampling-rate conversion method", Proc. ICASSP, pp. 19.4.1-19.4.4, 1984, describes a technique that permits resampling at arbitrary times. As in the polyphase filter technique described above, Smith stores in a table samples of the impulse response of a low-pass filter. However, Smith permits values of the impulse response function to be determined at arbitrary times (and hence resampling at arbitrary times) by interpolating between values stored in the table. Smith shows that the number of stored values of the impulse response per original sampling time is approximately: ##EQU6## where n.sub.c is the number of significant bits desired for each stored value. Even then, Smith's approach still results in a storing a large number of samples of the impulse response.
An article ("Adams") by Adams and Kwan, entitled "Theory and VLSI architectures for asynchronous sample--rate converters", J. Audio Engineering Society, vol. 41, July/August 1993, p. 550, describes a similar strategy in a VLSI implementation. In particular, using linear interpolation and storing only every 128th point of a filter that is oversampled by a factor of 2.sup.16 (i.e. having 2.sup.16 coefficients per original sampling period), Adams was able to reduce the filter coefficient ROM table size from 40 megawords to 32K words.
Recently, an article ("Zolzer") by Zolzer and Boltze, entitled "Interpolation algorithms: Theory and application", 97th Audio Engineering Society Convention, Preprint No. 3398, Nov. 1994, describes calculating filter coefficients for resampling in real-time. Zolzer discusses resampling techniques based on polynomial, Lagrange, and spline interpolations. Each of the approaches described in Zolzer involves first calculating an oversampled input sequence, using a standard polyphase filter technique. This first step is then followed by interpolation (either polynomial, Lagrange or spline) among samples of the oversampled input sequence to determine resampling outputs at the desired times.
Zolzer's approach results, for an N.sup.th order interpolation, in a frequency response which is approximated by the function sinc.sup.N+1 (t) (except in the spline case, where it is exact), instead of the sinc frequency response associated with ideal bandlimited interpolation. To compensate for the non-ideal frequency response, Zolzer's resampled output sequence is filtered by a compensation filter using calculated coefficients. However, some distortion in the frequency domain remains (Zolzer's FIG. 13).
SUMMARY OF THE INVENTION
The present invention provides a method for resampling, which convolves a given set of samples with the impulse response function of a low-pass filter. Under this method, values of the impulse response required for the convolution calculation are computed at the time of resampling from a segmented polynomial approximating the impulse response. The present invention is economical because processors capable of computing the impulse response function in real time are becoming more available and less expensive.
The segmented polynomial of the present invention is represented, for each segment of the polynomial, by a number of coefficients. These coefficients are determined by fitting the polynomial, in a least mean-squared sense, to the impulse response function at a large number of points. The cost of computing the polynomial coefficients is not incurred at resampling time: the coefficients need only be computed once and stored in a memory device.
In one application of the present invention, an electronic musical instrument performs resampling of a musical tone at a selected set of time points based on discrete samples of a stored waveform. This resampling technique allows a stored waveform to be used for synthesizing many tones of varying pitches. The samples at the selected set of time points can then be provided at a constant rate to an output device to produce a second musical tone having a pitch proportional to the inverse of the resampling ratio.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 (prior art) shows a model of an analog interpretation of the resampling process.
FIG. 2 shows an electronic musical instrument applying a resampling technique in accordance with the present invention.
FIG. 3 (prior art) shows an impulse response function that is zero-valued outside a given time window.
FIG. 4 shows an apparatus for generating coefficients of a polynomial function that approximates an impulse response function of a low-pass filter.
FIG. 5 illustrates an apparatus for resampling an analog signal using a polynomial function that approximates an impulse response function of a low-pass filter.
FIG. 6 shows in further detail the coefficient generator provided in the apparatus of FIG. 4.
FIG. 7 shows in further detail the output device provided in the apparatus of FIG. 2.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention provides arbitrary-ratio resampling of an analog signal x(t), which is originally sampled at time intervals of T.sub.s to produce samples x[n], for non-negative integer n, where x[n] denotes the sample at time nT.sub.s and the original sampling interval, T.sub.s, satisfies the Nyquist condition. The goal of arbitrary-ratio resampling is to provide samples of x(t) at arbitrary values of t, given the original samples x[n].
One view of the reconstruction of x(t) from the given samples is the application of an input analog signal, taking on the values of the original sample at the original sampling times and zero elsewhere, to a low-pass filter whose cutoff frequency is 0.5/T.sub.s and whose impulse response function is h(t). A reconstructed sample value, x(t), is computed as the value of the output of the analog filter at time t (as per equation (2), above), i.e. ##EQU7## h(t) is chosen to be zero except in a finite interval [0,T), in order to reduce the above sum to one having a finite number of terms. Typically, a windowed sinc function is used for h(t).
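The convolution referred to above is not reproduced in this text; in the notation used here it presumably takes the standard bandlimited-reconstruction form (a hedged rendering, not a quotation of the original equation):

    \hat{x}(t) = \sum_{n} x[n]\, h(t - nT_s)

With h(t) non-zero only on a finite interval, the sum contains only finitely many non-zero terms.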
The resampling technique of the present invention involves the following two steps:
1) the step of calculating coefficients of a segmented polynomial that approximates h(t) in the interval [0,T). (In some embodiments, the segmented polynomial may consist of only one segment, in which case one polynomial function is used to approximate h(t) over the entire interval of [0,T).) This step need only be performed once in advance of the resampling. The coefficients of the segmented polynomial thus calculated are stored in a memory device which is made accessible to the second step below.
2) the step of computing, for each resampling time point, the convolution sum according to equation 2 above. In this embodiment, the required values of h(t) are computed by evaluating, at the required resampling time points, the segmented polynomial, whose coefficients were calculated and stored according to the first step above.
The segmented polynomial of the present invention is obtained by mapping the interval [0,T) to the interval [0, N.sub.s), where N.sub.s is a positive integer. Hence, any time point t in [0, T) will be mapped to a real number s, defined by:
s=tN.sub.s /T=m+f (7)
where m is an integer between 0 and N.sub.s -1, inclusive, and f is the fractional part in the interval [0,1). The impulse response function h(t) over the interval [m, m+1) is approximated by the polynomial P.sub.m (f), where P.sub.m is the polynomial corresponding to the mth segment. The set of polynomials P.sub.0,P.sub.1, . . . P.sub.N.sbsb.s.sub.-1, is referred to as a segmented polynomial.
Although not essential for the practice of the present invention, in the technique disclosed below, the fractional part f of the real value s is normalized to a new value f' between -1 and +1, via the equation
f'=2f-1 (8)
Remapping the polynomial argument to the range [-1, 1) instead of [0, 1) results in better dynamic range, given fixed-point, finite-precision coefficients, yielding roughly a 6-10 dB improvement in accuracy.
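A minimal sketch of the mapping of equations (7) and (8), from a time t in [0,T) to the segment index m and the normalized argument f', follows; the function and variable names are illustrative:

    def map_to_segment(t, T, Ns):
        s = t * Ns / T            # equation (7): s = t*Ns/T = m + f
        m = int(s)                # integer part, 0 <= m <= Ns-1 for t in [0, T)
        f = s - m                 # fractional part in [0, 1)
        f_prime = 2.0 * f - 1.0   # equation (8): remap the argument to [-1, 1)
        return m, f_prime

    # Example: with T = 6 and Ns = 24 segments, t = 1.3 maps to m = 5, f' close to -0.6.
    m, fp = map_to_segment(1.3, 6.0, 24)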
Each polynomial P.sub.m is represented by a set {c.sub.m (i)} of coefficients, where coefficient c.sub.m (i) is the coefficient of the i-th order term in the polynomial P.sub.m. The polynomial P.sub.m therefore takes the form ##EQU8## where the argument f' of P.sub.m is normalized over the interval [-1,1), and D is the empirically selected degree of polynomial P.sub.m. D is selected such that P.sub.m approximates the corresponding segment of h(t) to the requisite precision.
A large number N (>>D) of values f' (f'.sub.1, f'.sub.2, . . . f'.sub.N) are selected from the interval [-1, 1) to fit P.sub.m (f') to impulse response function h(t). (In one embodiment, (f'.sub.1, f'.sub.2, . . . f'.sub.N) can be chosen to be uniformly spread over the interval [-1, 1). In other embodiments, more samples may be taken in particular subranges of the interval [-1,1) in order to reduce the error in those subranges.)
Applying equations (7) and (8) above, the value of t (i.e. the argument of the impulse response function h) is related to f'.sub.1 by ##EQU9##
Thus, the following matrix equation is satisfied if P.sub.m (f') exactly fits h(t) at all N values of f':
MC.sub.m =V.sub.m, (9)
where, C.sub.m is a vector formed by polynomial coefficients {c.sub.m (i)}: ##EQU10## matrix M is: ##EQU11## and vector V is: ##EQU12## which is related to the impulse response function h(t) by: ##EQU13##
Since N is selected to be much larger than D, equation (9) is overdetermined. Thus, in general, an exact solution does not exist. However, a least mean-squared error solution can be found by finding the pseudo-inverse matrix M.sup..dagger. of M defined by:
M.sup..dagger. =(M.sup.T M).sup.-1 M.sup.T. (14)
The coefficients C.sub.m of P.sub.m, that minimize the mean squared difference (i.e. mean squared error) between P.sub.m (f') and h(t) at f'=(f'.sub.1, f'.sub.2, . . . f'.sub.N), where ##EQU14## are given by the equation:
C.sub.m =M.sup..dagger. V.sub.m (15)
Alternatively, the coefficients C.sub.m, that minimize the mean-squared difference between P.sub.m (f') and h(t) at the fitting points, can also be found by performing Gaussian elimination or LU decomposition followed by back substitution on the system of equations given by:
M.sup.T MC.sub.m =M.sup.T V.sub.m (16)
These solution methods are well-known, and can be found, for example, in the text "Numerical Recipes in C" (2nd ed., Press, Teukolsky, Vetterling & Flannery, Cambridge University Press, 1992).
One advantage of using equation (16) instead of equation (15) to solve for the coefficients C.sub.m is a reduction in the amount of memory required for intermediate results. In order to compute the pseudo-inverse of matrix M, i.e. M.sup..dagger., which is required in equation (15), matrix M, which has N*(D+1) elements, must be stored. On the other hand, matrix M need not be stored if the coefficients C.sub.m of P.sub.m (f') are obtained by solving the system of equation (16). This is because the quantities M.sup.T M and M.sup.T V.sub.m can be computed directly via the following equations (17) & (18) respectively: ##EQU15##
M.sup.T M and M.sup.T V.sub.m have dimensions of (D+1).times.(D+1) and (D+1).times.1, respectively, and thus, require far less storage than M, given that N is selected to be much greater than D.
The number of points, N, which is used to solve for the coefficients {c.sub.m (i)}, can be as large as desired, without concern for efficiency, given that the coefficient calculation, represented by equations (14) and (15) (or alternatively by equation (16)), need only be performed once. Further, the coefficients {c.sub.m (i)} can be computed off-line, so as not to have an impact on the resampling operations. The mean-squared error (MSE) for the m-th segment is then provided by the equation: ##EQU16##
To minimize the MSE to the requisite level of accuracy, either the order D of polynomial P.sub.m, or the number of segments N.sub.s, or both, may be increased, as necessary.
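The following sketch illustrates the fitting step of equations (16)-(20) for a single segment: the normal equations M.sup.T MC.sub.m =M.sup.T V.sub.m are accumulated one fitting point at a time, so matrix M is never stored, and the mean-squared error of equation (20) is reported. The target impulse response, the segment layout, and the choice of fitting points are illustrative assumptions:

    import numpy as np

    def fit_segment(h, m, T, Ns, D=4, N=1024):
        fp = np.linspace(-1.0, 1.0, N, endpoint=False)    # fitting points f'_1 .. f'_N in [-1, 1)
        t = (m + 0.5 * (fp + 1.0)) * T / Ns               # corresponding times (inverting eqs (7), (8))
        hv = h(t)                                         # target values of the impulse response
        MtM = np.zeros((D + 1, D + 1))
        MtV = np.zeros(D + 1)
        for f_i, h_i in zip(fp, hv):
            row = f_i ** np.arange(D + 1)                 # one row of M: [1, f', f'^2, ..., f'^D]
            MtM += np.outer(row, row)                     # accumulate M^T M    (equation (17))
            MtV += row * h_i                              # accumulate M^T V_m  (equation (18))
        C = np.linalg.solve(MtM, MtV)                     # solve equation (16) for C_m
        mse = np.mean((np.polyval(C[::-1], fp) - hv) ** 2)   # equation (20)
        return C, mse

    # Example: fit segment m = 5 of a Hann-windowed sinc supported on [0, 6), using 24 segments.
    hann_sinc = lambda t: np.where(np.abs(t - 3.0) < 3.0,
                                   np.sinc(t - 3.0) * 0.5 * (1.0 + np.cos(np.pi * (t - 3.0) / 3.0)),
                                   0.0)
    C5, mse5 = fit_segment(hann_sinc, m=5, T=6.0, Ns=24)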
In some embodiments, other metrics may also be used for finding C.sub.m. For example, the error value
.epsilon..sub.m =.parallel.MC.sub.m -V.sub.m .parallel. (21)
may be minimized using an L.sub..infin. -norm instead of the L.sub.2 -norm used above; this minimizes the maximum absolute error instead of the mean-squared error. The preferred metric for minimization is the mean-squared error (L.sub.2), since it minimizes the total error energy.
Using the mapping above, a value of h(t) required during resampling is approximated as P.sub.m (f'), where t is mapped to m and f' by application of equations (7) and (8). A convenient way of mapping t to m and f' is provided by the following steps (a numerical sketch follows the steps):
1) compute the quantity s=m+f=t*N.sub.s /T and store s in unsigned fixed-point format, with x bits (to the left of the binary point) used to represent m (the integer part of s) and y bits (to the right of the binary point) used to represent f (the fractional part of s), in two registers 1 and 2, respectively;
2) shift register 1 to the right by y bits to obtain m in integer format;
3) shift register 2 to the left by x bits to obtain f in unsigned fixed point representation;
4) invert the most significant bit (MSB) of register 2 to obtain f' in signed two's-complement fixed-point representation (where the sign bit is the MSB of register 2 and the binary point is immediately to the right of the MSB).
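The sketch below mirrors steps 1) through 4) using ordinary Python integers in place of the two registers; the register widths (x=5 integer bits, y=11 fractional bits) are illustrative assumptions:

    def split_fixed_point(t, T, Ns, x_bits=5, y_bits=11):
        width = x_bits + y_bits
        # step 1: s = t*Ns/T in unsigned fixed point, x_bits integer bits, y_bits fractional bits
        s_fixed = int((t * Ns / T) * (1 << y_bits)) & ((1 << width) - 1)
        m = s_fixed >> y_bits                              # step 2: shift right by y bits -> integer part m
        reg2 = (s_fixed << x_bits) & ((1 << width) - 1)    # step 3: shift left by x bits -> f, left-aligned
        reg2 ^= 1 << (width - 1)                           # step 4: invert the MSB -> f' in two's complement
        # Read reg2 back as a signed fixed-point value with the binary point just right of the sign bit.
        f_prime = (reg2 - (1 << width) if reg2 >= 1 << (width - 1) else reg2) / float(1 << (width - 1))
        return m, f_prime

    # Example: t = 1.3, T = 6, Ns = 24 gives m = 5 and f' close to -0.6 (= 2*0.2 - 1).
    m, fp = split_fixed_point(1.3, 6.0, 24)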
FIG. 4 shows the overall process for generating coefficients (c.sub.m (0), c.sub.m (1), . . . c.sub.m (D)) for a polynomial ##EQU17## to approximate an impulse response function h(t) over the m-th segment of a non-zero range of h(t). A curve fitting step 41 selects points (f'.sub.1, f'.sub.2, . . . f'.sub.N) in the argument range of the polynomial P.sub.m (f'). The points in the time domain of h(t) corresponding to f'.sub.1, f'.sub.2, . . . f'.sub.N are t.sub.1, t.sub.2, . . . t.sub.N, respectively. A coefficient generating step 42 computes the coefficients (c.sub.m (0), c.sub.m (1), . . . c.sub.m (D)) of the polynomial that fit, according to some error-minimizing criterion, the values of the polynomial evaluated at points f'.sub.1, f'.sub.2, . . . f'.sub.N to the values of the impulse response function h(t) at times t.sub.1, t.sub.2, . . . t.sub.N. The computed coefficients, C.sub.m, are stored in a coefficient table 44 in a storage device 43. Coefficient generating step 42 computes and stores in coefficient table 44 the coefficients of each segment of the segmented polynomial approximating h(t) (i.e. C.sub.m, 0.ltoreq.m<N.sub.s). Of course, coefficient generating step 42 can be implemented either in software or in hardware.
In one embodiment, the fitting criterion used by coefficient generating step 42 is least mean-squared error, as discussed above. For this embodiment, a structure for implementing coefficient generating step 42 (FIG. 4) is illustrated in FIG. 6. Coefficient generating step 42 includes a matrix construction step 61, a matrix construction step 62, a pseudo-inverting step 63 and a matrix multiplication step 64. Matrix construction step 61 forms the matrix M, according to equation (11) above. Pseudo-inverting step 63 receives matrix M after matrix construction step 61 and produces the pseudo-inverse of M (i.e. M.sup..dagger.), according to equation (14) presented above.
Matrix construction step 62 forms the N.times.1 matrix V, where V[i]=h(t.sub.i), 1.ltoreq.i.ltoreq.N. Matrix multiplication step 64 receives matrices M.sup..dagger. and V from pseudo-inverting step 63 and matrix construction step 62, respectively, and produces the product C.sub.m =M.sup..dagger. V, where C.sub.m is a (D+1).times.1 matrix and C.sub.m [i]=c.sub.m (i), 0.ltoreq.i.ltoreq.D.
The resampling step is discussed with reference to FIG. 5. A sampling step 52 samples an analog signal 51 at an original set of sampling time points to produce a set of samples 53. A resampling step 54 includes a convolving step 55 which convolves, according to equation 2, samples 53 with values of the impulse response function approximated by an impulse response approximation step 57 to produce a set of samples 56 for the resampling time points. Impulse response approximation step 57 approximates the impulse response by evaluating the coefficients of the segmented polynomial, which are stored in coefficient table 44 of storage device 43. Again, resampling step 54 can be implemented either in software or in hardware.
Given a target sampling period of rT.sub.s (i.e. a resampling ratio of r), the k-th output sample occurs at time (in units of T.sub.s)
t.sub.out,k =krT.sub.s (22)
Because the "brightness" of a filter depends on the duration of its non-zero impulse response, a common way of controlling this brightness is to scale the time axis of the filter, using a brightness factor b:
h'(t)=h(bt). (23)
However, this modification of h(t) results in a scaling of the DC response of the filter, since the DC gain is inversely proportional to b. Therefore, if constant "loudness" is desired, the filter may be modified to preserve power by scaling:
h'(t)=bh(bt), (24)
thereby preserving the DC response as constant. Thus, larger values of the brightness factor lead to narrower impulse responses, and hence wider frequency responses.
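A numerical check of this point follows: equation (24) holds the DC gain (the area under the impulse response) constant as b varies, whereas the scaling of equation (23) alone divides it by b. The windowed sinc used here is an illustrative assumption:

    import numpy as np

    def h(t, hw=3.0):
        t = np.asarray(t, dtype=float)
        return np.where(np.abs(t) < hw, np.sinc(t) * 0.5 * (1.0 + np.cos(np.pi * t / hw)), 0.0)

    b = 0.5
    t = np.linspace(-8.0, 8.0, 20001)
    dt = t[1] - t[0]
    dc    = np.sum(h(t)) * dt             # DC gain of h(t)
    dc_23 = np.sum(h(b * t)) * dt         # equation (23): roughly dc / b
    dc_24 = np.sum(b * h(b * t)) * dt     # equation (24): roughly dc again
    print(dc, dc_23, dc_24)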
When r>1 (i.e. when the new sampling rate is less than the original sampling rate), in order to avoid aliasing, the impulse response function, h(t), used for resampling must correspond to a low-pass filter whose cutoff frequency is chosen to be less than half the resampling rate, and having a brightness factor b no greater than 1/r.
Applying equation (2) to a low-pass filter with impulse response h'(t)=bh(bt), the following expression for x(t) is obtained, given that h(t) is zero outside the interval [0,T), assuming x[n]=0 for n<0, and assuming h(t) is symmetric (i.e. h(t)=h(T-t)), as is typical of windowed sinc functions: ##EQU18## If x(t) is time-shifted forward by T/b time units, the right hand side of equation (29) becomes ##EQU19## (This form is equivalent to the convolution of x(t) and g(t)=h(-t), and is adopted for convenience.) ##EQU20## x(t) must be sampled in the discrete time domain at the new sampling rate 1/(rT.sub.s). Using equation 20 above, the nth resampled value y[n] is given by: ##EQU21## h(bT.sub.s (k-frac(nrT.sub.s))) is approximated by P.sub.m (f'), where equations (7) and (8) are applied to map t=bT.sub.s (k-frac(nrT.sub.s)) to m and f', and the coefficients for P.sub.m were calculated and stored in the manner provided above.
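The following sketch pulls the pieces together for the resampling loop: for each output sample, the needed values of the scaled impulse response are produced by segment lookup and Horner evaluation of P.sub.m (f'), and the convolution sum is accumulated. Because equation (33) itself is not reproduced in this text, the index arithmetic below is a hedged reconstruction; it assumes h(t) is supported on [0,T) and symmetric about T/2, and the coefficient table is assumed to come from a fitting step such as the fit_segment sketch above:

    import numpy as np

    def eval_h(t, coef, T, Ns):
        # Approximate h(t), t in [0, T), from the segmented-polynomial coefficient table.
        s = t * Ns / T
        m = min(int(s), Ns - 1)
        fp = 2.0 * (s - m) - 1.0
        acc = 0.0
        for c in coef[m][::-1]:            # Horner's rule, highest-order coefficient first
            acc = acc * fp + c
        return acc

    def resample_poly(x, r, num_out, coef, T, Ns, b=1.0):
        # y[n] approximates x(n*r*T_s); h'(t) = b*h(b*t) has support of width T/b around each output time.
        half = 0.5 * T / b
        y = np.zeros(num_out)
        for n in range(num_out):
            t_out = n * r                  # output time in units of T_s
            for k in range(max(int(np.ceil(t_out - half)), 0),
                           min(int(np.floor(t_out + half)), len(x) - 1) + 1):
                y[n] += x[k] * b * eval_h(b * (t_out - k) + 0.5 * T, coef, T, Ns)
        return y

    # coef could be built, for example, as:
    # coef = [fit_segment(hann_sinc, m, 6.0, 24)[0] for m in range(24)]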
In one embodiment, P.sub.m (f') can be calculated on a VSP signal processor, available from Chromatic Research, Inc., Mountain View, Calif., which includes two functional units, FU1 and FU2, each capable of generating in one cycle the partial results of (i) multiplying two double-word operands a and b, and (ii) adding the partial results to another double-word operand c. Thus, even though the latency in calculating the quantity ab+c is two cycles, using pipelining, an effective throughput rate of one such calculation (i.e. a multiplication followed by an addition, hereinafter a "multiply-add" operation) can be achieved per cycle.
Assuming D (the degree of the segmented polynomial) equals 4, polynomial P.sub.m (f') can be expressed as follows:
P.sub.m (f')=((((c.sub.m (4)f')+c.sub.m (3))f'+c.sub.m (2))f'+c.sub.m (1))f'+c.sub.m (0)
Thus, the calculation of polynomial P.sub.m (f') requires four multiply-add operations. If the value of f' and each polynomial coefficient are packed into halfwords, then, since FU1 and FU2 are capable of performing four parallel operations on corresponding halfwords of double-word operands, an effective rate of one computation of an approximated impulse response value per cycle can be achieved.
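A sketch of this four-way evaluation follows: with D=4, four approximated impulse-response values are produced by four vector multiply-add steps, one per coefficient order, mirroring the packed-halfword scheme; the array layout is an illustrative assumption:

    import numpy as np

    def eval4(coef_rows, fp4):
        # coef_rows: 4 x 5 array whose rows are [c_m(0), ..., c_m(4)] for four (possibly
        # different) segments; fp4: the four normalized arguments f'.  Returns four P_m(f').
        acc = coef_rows[:, 4].copy()
        for i in (3, 2, 1, 0):
            acc = acc * fp4 + coef_rows[:, i]   # one vector multiply-add per coefficient order
        return acc

    # Example: eval4(np.ones((4, 5)), np.array([-0.6, -0.2, 0.2, 0.6]))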
In one embodiment, the convolution of equation 33 to obtain a first resampled value is computed on a processor having three functional units FU1, FU2, and FU3:
1) FU1 computes the four products of corresponding halfwords of two double word operands A (storing 4 half-word approximated values of h) and B (storing 4 half-word original sample values, x).
2) FU2 is used to accumulate the products produced in (1) in the corresponding halfwords of a double word accumulator.
The combination of (1) and (2) (hereinafter a "multiply-accumulate" operation) requires two cycles. However, through pipelining, an effective rate of one multiply-accumulate operation per cycle can be achieved. Each multiply-accumulate operation results in the accumulation of four of the product terms in the convolution of equation 33. Assuming for the purposes of this example that this convolution operation involves twenty four terms, six multiply-accumulate operations are performed, after which each halfword of the accumulator contains a respective one quarter of the terms in the convolution of equation 33. The double word in the accumulator is stored in a memory location DW1.
The above steps are applied to obtain the second, third and fourth resampled values, with the results being stored in memory at locations DW2, DW3, and DW4. In order to obtain the i-th resampled value (i=1, 2, 3, 4) according to equation 33, the sum of DWi.sub.1, DWi.sub.2, DWi.sub.3 and DWi.sub.4 must be formed, where DWi.sub.j denotes the jth halfword of DWi. One way of obtaining the required sums is as follows (a sketch in ordinary arithmetic follows these steps):
a) use FU3 to perform a permutation of the sixteen bytes contained in the concatenation of DW1 and DW2 to obtain the concatenation of DW5 and DW6 where DW5=<DW1.sub.1, DW2.sub.1, DW1.sub.2, DW2.sub.2 > and DW6=<DW1.sub.3, DW2.sub.3, DW1.sub.4, DW2.sub.4 >. FU3 contains a cross-bar switch that can achieve an arbitrary permutation of the bytes in a sixteen-byte operand in one cycle.
b) use FU2 to add corresponding halfwords of DW5 and DW6, thereby obtaining DW7=<[DW1.sub.1 +DW1.sub.3 ], [DW2.sub.1 +DW2.sub.3 ], [DW1.sub.2 +DW1.sub.4 ], [DW2.sub.2 +DW2.sub.4 ]> (where [ ] encloses a halfword quantity).
c) use FU3 to perform a permutation of the sixteen bytes contained in the concatenation of DW3 and DW4 to obtain the concatenation of DW8 and DW9 where DW8=<DW3.sub.1, DW4.sub.1, DW3.sub.2, DW4.sub.2 > and DW9=<DW3.sub.3, DW4.sub.3, DW3.sub.4, DW4.sub.4 >.
d) use FU2 to add corresponding halfwords of DW8 and DW9, thereby obtaining DW10=<[DW3.sub.1 +DW3.sub.3 ], [DW4.sub.1 +DW4.sub.3 ], [DW3.sub.2 +DW3.sub.4 ], [DW4.sub.2 +DW4.sub.4 ]>.
e) use FU3 to perform a permutation of the sixteen bytes contained in the concatenation of DW7 and DW10 to obtain the concatenation of DW11 and DW12, where DW11=<[DW1.sub.1 +DW1.sub.3 ], [DW2.sub.1 +DW2.sub.3 ], [DW3.sub.1 +DW3.sub.3 ], [DW4.sub.1 +DW4.sub.3 ]>, and DW12=<[DW1.sub.2 +DW1.sub.4 ], [DW2.sub.2 +DW2.sub.4 ], [DW3.sub.2 +DW3.sub.4 ], [DW4.sub.2 +DW4.sub.4 ]>.
f) use FU2 to add corresponding halfwords of DW11 and DW12, thereby obtaining DW13=<[DW1.sub.1 +DW1.sub.2 +DW1.sub.3 +DW1.sub.4 ], [DW2.sub.1 +DW2.sub.2 +DW2.sub.3 +DW2.sub.4 ], [DW3.sub.1 +DW3.sub.2 +DW3.sub.3 +DW3.sub.4 ], [DW4.sub.1 +DW4.sub.2 +DW4.sub.3 +DW4.sub.4 ]>. Thus, DW13.sub.i holds the ith resampled value.
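In ordinary arithmetic, steps a) through f) amount to a transpose-and-add reduction of four partial-sum registers; the following sketch uses plain Python lists of four numbers in place of the double-word registers:

    DW1, DW2, DW3, DW4 = [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]

    # a) permute DW1, DW2 into DW5, DW6
    DW5 = [DW1[0], DW2[0], DW1[1], DW2[1]]
    DW6 = [DW1[2], DW2[2], DW1[3], DW2[3]]
    # b) halfword-wise add
    DW7 = [a + b for a, b in zip(DW5, DW6)]
    # c), d) the same for DW3, DW4
    DW8 = [DW3[0], DW4[0], DW3[1], DW4[1]]
    DW9 = [DW3[2], DW4[2], DW3[3], DW4[3]]
    DW10 = [a + b for a, b in zip(DW8, DW9)]
    # e) permute DW7, DW10 into DW11, DW12
    DW11 = [DW7[0], DW7[1], DW10[0], DW10[1]]
    DW12 = [DW7[2], DW7[3], DW10[2], DW10[3]]
    # f) final add: DW13[i-1] is the i-th resampled value
    DW13 = [a + b for a, b in zip(DW11, DW12)]
    assert DW13 == [sum(DW1), sum(DW2), sum(DW3), sum(DW4)]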
The above disclosed technique uses a segmented polynomial to compute values of an impulse response at the time of resampling. The segmented polynomial provides various advantages including the following:
1) The storage requirements are minimal (i.e. a few coefficients for each polynomial segment), compared to the storage requirements of Smith's technique discussed above. For example, the technique disclosed herein would require 240 bytes to store polynomial coefficients if a segmented polynomial consisting of 24 fourth-degree polynomial segments were used to approximate the desired impulse response function, with two bytes of storage allocated to each polynomial coefficient. In contrast, if 16 bits of accuracy in the stored values of the impulse response function are desired, Smith's technique, discussed above, would require that the number of stored values of the impulse response per original sampling time be approximately (2.sup.16).sup.1/2 =256. Assuming 24 original sampling time points, the total storage requirement would be 256.times.24.times.2 bytes/stored value=12,288 bytes. Thus, the technique disclosed herein would result in roughly a 50-fold reduction in storage requirements.
As processors become faster, it is becoming more cost-effective to compute filter responses in real time than to incur the memory latency of accessing large banks of polyphase filter coefficients and the cost of the memory devices to store them. Because of the large number of polyphase filter coefficients required in Smith's approach, memory latency may be further aggravated by a higher likelihood of cache misses, which lead to additional delays. In contrast, the minimal storage required for the segmented polynomial coefficients is more likely to result in cache hits, and hence lower latency through efficient use of the cache memory.
2) The stored polynomial coefficients are independent of the resampling ratio and hence the technique of the present invention is applicable to applications in which the resampling ratio varies over time.
3) The segmented polynomials are fitted to the selected windowed sinc function in a least-mean-square sense. As a result, the frequency response associated with the technique is a good approximation of an ideal bandlimited interpolator and avoids frequency distortions of the type associated with Zolzer's interpolation methods.
4) Zolzer's extra step of polyphase upsampling is avoided.
The resampling technique described above can be used, for example, to provide tones of various pitches from stored sounds. To synthesize a sound, an electronic instrument typically stores samples of sounds (i.e. stored waveforms) of various pitches, timbres and velocities. Since memory requirements must be kept reasonable, it is often infeasible to store samples for sounds of all possible pitches that the instrument can produce.
In such instruments, a mechanism for varying the pitch of the sound associated with a stored waveform is required. One such mechanism varies the rate at which samples of a stored waveform are output from memory. This technique has the disadvantage of introducing additional complexity into the digital-to-analog conversion circuitry that receives and processes the stored waveform samples, and it prevents digital mixing of several sound streams. An alternative solution, which avoids the complexity associated with a variable sample output rate, is employed in instruments with a fixed rate of sample supply to the digital-to-analog tone-producing circuitry. In one such instrument, discussed in U.S. Pat. No. 5,290,965, entitled "Asynchronous Waveform Generating Device For Use In An Electronic Musical Instrument", by Yoshida et al., issued Mar. 1, 1994, the samples supplied to the digital-to-analog tone-producing circuitry can represent values of the originally sampled musical tone at times other than those associated with the original sample values in the stored waveform.
FIG. 2 shows an electronic instrument implementing the sound synthesis technique of the present invention with a fixed sample output rate. As shown in FIG. 2, a musical tone 201 of pitch P.sub.1, x(t), is sampled at the times iT.sub.s, 0.ltoreq.i.ltoreq.S, thereby resulting in samples x[0], x[1], . . . , x[S], which are stored in a storage device 203 as a stored waveform 204. (In other embodiments, a sound other than a musical tone could be sampled and stored as stored waveform 204.)
The pitch of the tone output by output device 212 is controlled by the value of a pitch factor, which is provided to resampler 206 (which performs a convolution step 207 and an impulse response approximating step 208) at pitch selection step 205. In particular, if a pitch factor r is specified by pitch selection step 205, then resampler 206 generates a set of samples 209, y[n], of the originally sampled musical tone x(t), corresponding to t=nrT.sub.s, 0.ltoreq.n.ltoreq.[S/r]. One sample generated by resampler 206 is supplied to output device 212 every T.sub.c seconds, where T.sub.c is the output sampling period. To prevent aliasing, the originally sampled musical tone is selected to be sufficiently band-limited to accommodate fully the anticipated range of upward pitch shifting. T.sub.c may or may not be equal to T.sub.s.
FIG. 7 shows further detail of an output device 212, in accordance with one embodiment of the present invention. In this embodiment, output device 212 includes a digital-to-analog converter (DAC) 701, an amplifier 702 and a speaker 703. DAC 701 receives samples 209 and produces a corresponding continuous time signal 704. Amplifier 702 receives continuous time signal 704 from DAC 701 and produces an amplified continuous time signal 705. Speaker 703 receives amplified continuous time signal 705 from amplifier 702 and produces musical tone 213.
Given samples 209, output device 212 synthesizes a musical tone 213 of pitch P.sub.2 =P.sub.1 *rT.sub.s /T.sub.c. In order to prevent aliasing, as discussed above, the highest frequency of interest in the sampled musical tone 201 should be less than 0.5/(rT.sub.s), where r is the maximum resampling factor that can be specified, and frequencies above 0.5/(rT.sub.s) should be filtered from the tone to create the stored waveform. In order to calculate a sample value, y[n], of the originally sampled musical tone 201, x(t), at a time t=nrT.sub.s, the techniques presented above, and in particular equation 33, can be applied, where the required values of the impulse response function h(t) are computed using a segmented polynomial as described above. A convolution step 207 convolves, according to equation 33, the samples in stored waveform 204 with approximated values of h(t) supplied by impulse response approximation step 208. Impulse response approximation step 208 generates approximated values of h(t) by evaluating a segmented polynomial whose coefficients are stored in a storage device 211 as a coefficient table 210. Resampler 206 can be implemented in either software or hardware.
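As an illustration of FIG. 2, the sketch below resamples a stored waveform at times nrT.sub.s and supplies the result at a fixed output period T.sub.c; it reuses the fit_segment, hann_sinc and resample_poly sketches above, and the 48 kHz rates, the stored 440 Hz tone, and the pitch factor are all illustrative assumptions, not details of this disclosure:

    import numpy as np

    T_s = 1.0 / 48000.0                 # original sampling period of the stored waveform (assumed)
    T_c = 1.0 / 48000.0                 # fixed output period of output device 212 (assumed)
    r = 2.0 ** (7.0 / 12.0)             # pitch factor: shift up by roughly a perfect fifth

    stored = np.sin(2.0 * np.pi * 440.0 * np.arange(4800) * T_s)        # stored waveform 204
    coef = [fit_segment(hann_sinc, m, 6.0, 24)[0] for m in range(24)]   # coefficient table 210
    shifted = resample_poly(stored, r, int(len(stored) / r), coef,
                            T=6.0, Ns=24, b=min(1.0, 1.0 / r))
    # 'shifted' is supplied to output device 212 one sample every T_c seconds,
    # producing a tone of pitch P_1 * r * T_s / T_c.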
The above detailed description is provided to illustrate specific embodiments of the present invention and is not intended to be limiting. Numerous modifications and variations within the scope of the present invention are possible. The present invention is defined by the following claims.
Claims
- 1. A method of computing values of an analog signal at a first set of times, given values of the analog signal sampled at a second set of times, comprising the step of convolving the given sample values with values of an impulse response function of a filter, wherein the impulse response function is approximated by a segmented polynomial function and the values of the impulse response function are computed by evaluation of the segmented polynomial, wherein the segmented polynomial function consists of N.sub.S polynomials, P.sub.0, P.sub.1, . . . P.sub.N.sbsb.S.sub.-1, and the value of an impulse response function at a time t, t being less than a constant T, is approximated by P.sub.m (f), m and f being integer and fractional parts, respectively, of tN.sub.S /T.
- 2. A method of computing values of an analog signal at a first set of times, given values of the analog signal sampled at a second set of times, comprising the step of convolving the given sample values with values of an impulse response function of a filter, wherein the impulse response function is approximated by a segmented polynomial function and the values of the impulse response function are computed by evaluation of the segmented polynomial, wherein the segmented polynomial function consists of N.sub.S polynomials, P.sub.0, P.sub.1, . . . P.sub.N.sbsb.S.sub.-1, and the value of the impulse response function at a time t, t being less than a constant T, is approximated by P.sub.m (f'), m being an integer part of tN.sub.S /T and f' equaling 2f-1, f being a fractional part of tN.sub.S /T.
- 3. The method of claim 1, wherein the time interval between consecutive elements of the first set of times is not constant.
- 4. The method of claim 2, wherein the time interval between consecutive elements of the first set of times is not constant.
- 5. A method of generating coefficients, c(0), c(1), . . . c(D), of a polynomial function, ##EQU22## having an argument x, the argument x having a range from -1 to 1 and the polynomial function being used to approximate the impulse response function of a filter, h(t), over a time range, each value in the time range of h(t) corresponding to a point in the argument range of the polynomial function, h(t) being approximated by ##EQU23## where f' is the point in the argument range corresponding to point t in the time range, comprising the steps of:
- selecting a set of points in the range of the argument; and
- generating the coefficients, c(0), c(1), . . . c(D), of the polynomial function so as to fit, according to a criterion, the values of the polynomial function at the selected set of points in the range of the argument to the values of the impulse response function at the set of points in the time range corresponding to the selected set of points in the range of the argument.
- 6. The method of claim 5, further comprising the step of storing the coefficients in a storage device, wherein the stored coefficients are accessed by an apparatus that performs resampling.
- 7. A method of generating coefficients, c(0), c(1), . . . c(D), of a polynomial function, ##EQU24## having an argument x, the argument x having a range and the polynomial function being used to approximate the impulse response function of a filter, h(t), over a time range, each value in the time range of h(t) corresponding to a point in the argument range of the polynomial function, h(t) being approximated by ##EQU25## where f' is the point in the argument range corresponding to point t in the time range, comprising the steps of:
- selecting a set of points in the range of the argument; and
- generating the coefficients, c(0), c(1), . . . c(D), of the polynomial function so as to fit, according to a criterion, the values of the polynomial function at the selected set of points in the range of the argument to the values of the impulse response function at the set of points in the time range corresponding to the selected set of points in the range of the argument;
- wherein the criterion is least mean-squared error, the selected points in the range of the argument are f'.sub.1, f'.sub.2, . . . f'.sub.N, the points in the time range corresponding to f'.sub.1, f'.sub.2, . . . f'.sub.N are t.sub.1, t.sub.2, . . . t.sub.N, respectively, and the step of generating the coefficients comprises the steps of:
- forming an N.times.(D+1) matrix M, the element of M in the ith row and jth column being equal to (f'.sub.i).sup.j, 1.ltoreq.i.ltoreq.N, 0.ltoreq.j.ltoreq.D;
- forming an N.times.1 matrix V, the element of V in the ith row, V, being equal to h(t.sub.i), 1.ltoreq.i.ltoreq.N; forming M.sup..dagger., the pseudo-inverse of matrix M; and forming the product M.sup..dagger. V, thereby producing a (D+1).times.1 matrix C, where C=c(i), 0.ltoreq.i.ltoreq.D.
- 8. The method of claim 7, further comprising the step of storing the coefficients in a storage device, wherein the stored coefficients are accessed by an apparatus that performs resampling.
- 9. A method of generating coefficients, c(0), c(1), . . . c(D), of a polynomial function, ##EQU26## having an argument x, the argument x having a range and the polynomial function being used to approximate the impulse response function of a filter, h(t), over a time range, each value in the time range of h(t) corresponding to a point in the argument range of the polynomial function, h(t) being approximated by ##EQU27## where f' is the point in the argument range corresponding to point t in the time range, comprising the steps of:
- selecting a set of points in the range of the argument; and
- generating the coefficients, c(0), c(1), . . . c(D), of the polynomial function so as to fit, according to a criterion, the values of the polynomial function at the selected set of points in the range of the argument to the values of the impulse response function at the set of points in the time range corresponding to the selected set of points in the range of the argument;
- wherein the criterion is least mean-squared error, the selected points in the range of the argument are f'.sub.1, f'.sub.2, . . . f'.sub.N, the points in the time range corresponding to f'.sub.1, f'.sub.2, . . . f'.sub.N are t.sub.1, t.sub.2, . . . t.sub.N, respectively, and the step of generating the coefficients comprises the step of:
- solving the system of equations given by M.sup.T MC=M.sup.T V for C, where M is an N.times.(D+1) matrix, the element of M in the ith row and jth column, M, being equal to (f'.sub.i).sup.j, 1.ltoreq.i.ltoreq.N, 0.ltoreq.j.ltoreq.D, M.sup.T is the transpose of M, C is a (D+1).times.1 matrix, the element of C in the ith row, C, equalling c(i), 0.ltoreq.i.ltoreq.D, and V is an N.times.1 matrix, the element of V in the ith row, V, equalling h(t.sub.i), 1.ltoreq.i.ltoreq.N.
- 10. The method of claim 9, further comprising the step of storing the coefficients in a storage device, wherein the stored coefficients are accessed by an apparatus that performs resampling.