The present invention relates to the field of digital signal processing, in particular to designing finite impulse response (FIR) filters.
As is well known, an electromagnetic receiver is an electronic device that receives electromagnetic waves in a certain range of frequencies and converts the information carried by these waves into a usable form. For example, a receiver that is typically referred to as a “radio receiver” receives electromagnetic waves in the radio range of approximately 3 kilohertz (kHz) to 300 gigahertz (GHz). All receivers use antennas to capture the waves and convert them to alternating current (AC) signals, and electronic filters to separate the signals in the desired band of frequencies from all other signals that may be captured by the antenna. In the context of receivers, different bands of frequencies are sometimes referred to as “channels.”
Selectivity performance of a receiver refers to a measure of the ability of the receiver to separate the desired range of frequencies (referred to as a “passband ΩP” of frequencies ω) from unwanted interfering signals received at other frequencies (referred to as a “stopband ΩS” of frequencies ω). In other words, selectivity defines how effectively a receiver can respond only to the signal of interest that it is tuned to (i.e., signal in the desired band of frequencies) and reject signals in other frequencies.
Filters can be classified in different groups, depending on which criteria are used for classification. Two major types of digital filters are finite impulse response (FIR) digital filters and infinite impulse response (IIR) digital filters, with each type having its own advantages and disadvantages.
An FIR filter is designed by finding coefficients and filter order that meet certain specifications. In other words, in a filter design setting, “filter design” refers to determining a filter order N and determining values of (N+1) coefficients h[n] of a filter that would approximate the ideal response defined by the specifications both in the passband and in the stopband. In this context, a filter order N is a positive integer, and, for each coefficient, n is an integer of a sequence of consecutive integers from 0 to N (i.e. n=0, 1, . . . , N). Thus, for example, for a second-order filter (i.e. N=2), coefficients may be denoted as h[0], h[1], and h[2].
The specifications of an ideal response that a filter being designed should meet are typically expressed based on the desired selectivity performance of a receiver. Such specifications could be defined in terms of e.g. a frequency response H(ejω) (i.e. a Fourier transform of the impulse response h[n]) provided with the passband ΩP and the stopband ΩS of frequencies to approximate the desired magnitude response D(ω):

D(ω)=1 for ωεΩP, and D(ω)=0 for ωεΩS.  (1)
A further specification could include a desired weight function Wdes(ω) (where the subscript “des” is an abbreviation for “desired”), specifying the relative emphasis on the error in the stopband as compared to the passband. More specifically, the weight requirement could be expressed as

Wdes(ω)=1 for ωεΩP, and Wdes(ω)=Kdes for ωεΩS,  (2)
where Kdes is a positive scalar given as a part of the filter specifications. Providing a weight greater than unity on the stopband places an emphasis on having a better approximation to the ideal response in the stopband (i.e. the designed filter should adequately suppress the frequencies of the stopband).
Many FIR filter design methods exist, such as e.g. the windowing design method, the frequency sampling method, the weighted least squares design, the Parks-McClellan method, etc., all of which attempt to arrive at the filter coefficients h[n] of a filter that best approximates an ideal filter response provided by the specifications. Some of these methods can guarantee that for a given value of filter order N, and certain conditions imposed on h[n], the result is the best approximation possible for these conditions. For example, applying a rectangular window of size N+1 to the ideal filter response results in the best approximation in terms of the mean-squared error optimality criterion. As another example, if the filter is restricted to be symmetric around its mid-index, the Parks-McClellan filter design method yields the best minimax approximation, i.e. minimizes the maximum error. However, finding coefficients for FIR filters with non-linear phase characteristics, i.e. for the most general form of FIR filters where the phase response of a filter may be a non-linear function of frequency, remains challenging if the optimality criterion is minimizing the maximum error. Improvements could be made with respect to addressing one or more of these issues.
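By way of background illustration only, a symmetric (linear phase) equiripple filter of this kind can be designed with the Parks-McClellan method as implemented by scipy.signal.remez; the order, band edges, and weights below are arbitrary illustrative values, not parameters taken from the present disclosure.

```python
import numpy as np
from scipy.signal import remez, freqz

N = 30                                  # filter order, i.e. N + 1 coefficients
h = remez(
    numtaps=N + 1,
    bands=[0.0, 0.20, 0.30, 1.0],       # passband edge 0.20, stopband edge 0.30
    desired=[1.0, 0.0],                 # ideal response: unity passband, zero stopband
    weight=[1.0, 10.0],                 # 10x emphasis on suppressing stopband error
    fs=2.0,                             # normalized so the Nyquist frequency is 1.0
)

# Linear phase (type I) designs are symmetric around the mid-index.
assert np.allclose(h, h[::-1])

w, H = freqz(h, worN=4096)              # w runs from 0 to pi
print("max stopband magnitude:", np.abs(H[w >= 0.30 * np.pi]).max())
```

Because of the symmetry constraint, the Remez exchange machinery applies directly here; the difficulty discussed in the remainder of this disclosure arises precisely when that symmetry is dropped.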
To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:
Challenges with Minimax Optimality for Generic FIR Filters
Minimax optimality criterion is concerned with minimizing the maximum value of the error. In a filter design setting (i.e. when coefficients of a filter are being computed), a designed filter will typically approximate the ideal response in both the passband and the stopband to a certain extent. Furthermore, it is possible to apply a certain weight on the stopband so that more emphasis is placed on having a better approximation in the stopband. The maximum value of the absolute value of the weighted error on the entire frequency range including the stopband and the passband is called the l∞ norm of the weighted error (l∞ pronounced as “ell-infinity”). Minimax filter design refers to the process of finding filter coefficients of a certain order that will minimize the l∞ error, i.e. minimize the maximum weighted error encountered in the entire frequency range.
In order to further explain how a minimax optimal filter is evaluated, a hypothetical filter, A, may be considered. For the hypothetical filter A, the weighted error may swing between −0.01 and 0.01 everywhere on the frequency axis, except that at the frequency ω=0.30π the value of the weighted error is 0.20. The l∞ norm of this error function is 0.20. Another hypothetical filter, B, may also be considered. For the hypothetical filter B, the error may swing between −0.19 and 0.19 everywhere but never gets larger than this. The l∞ norm of this error is 0.19. Even though filter A approximates the ideal response much better than filter B at almost every frequency, the minimax error criterion favors filter B over filter A, because filter B is “safer” for the worst case scenario (which, in this exemplary illustration, would happen if the entire input was concentrated at ω=0.30π). Thus, a minimax design can be viewed as being prepared for the worst case.
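The comparison of filters A and B may be illustrated numerically; the oscillating shapes below are arbitrary stand-ins for the hypothetical weighted error functions described above.

```python
import numpy as np

w = np.linspace(0, np.pi, 1001)

# Filter A: weighted error within +/-0.01 everywhere, except a single
# excursion to 0.20 at (the grid point nearest) w = 0.30*pi.
err_a = 0.01 * np.sin(50 * w)
err_a[np.argmin(np.abs(w - 0.30 * np.pi))] = 0.20

# Filter B: weighted error swings within +/-0.19 everywhere.
err_b = 0.19 * np.sin(50 * w)

# The l-infinity norm is the maximum absolute value over all frequencies.
print(np.abs(err_a).max())   # 0.2   -> minimax criterion rejects filter A
print(np.abs(err_b).max())   # <= 0.19 -> minimax criterion favors filter B
```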
Linear phase filters (i.e. filters for which phase characteristic is a linear function of frequency) have certain symmetries around their mid-point, which allows their frequency response to be written as a real-valued zero-phase response multiplied by a linear phase term. Since the alternation theorem and the Remez exchange algorithm deal only with real-valued functions, they can be directly applied to characterize or design globally minimax-optimal linear phase filters by only considering the real part of the frequency response. A generic FIR with no such symmetry constraints would allow more flexibility in choosing its coefficients and can be more advantageous than linear phase filters. However, since the frequency response of a non-linear FIR filter may not necessarily be expressed as a real-valued function multiplied by a linear phase term as is done for linear phase filters, the alternation theorem and the Remez Exchange method cannot be applied for globally minimax-optimal design of generic filters which may include non-linear filters. There have been approaches in the literature to carry the design problem to the domain of the autocorrelation of the filter instead of the domain of the filter itself, an idea that is also utilized in the present disclosure. However, these approaches in the literature do not adhere to a well-defined optimality criterion in the filter domain. Rather, these approaches find an optimal filter in the autocorrelation domain, which, when converted back to the filter domain, may not remain optimal or may not exhibit the desired emphasis on attenuation in the stopband versus the passband specified by the weight function Wdes.
Embodiments of the present disclosure provide mechanisms that enable implementing a digital filter that may improve on one or more problems described above, in particular with respect to designing an FIR filter that would have a guaranteed globally optimal magnitude response in terms of the minimax optimality criterion. Designing and then applying such a filter to filter input signals provides an advantageous technological solution to a problem of suboptimal conventional filters (i.e. a problem rooted in technology).
Design of such a filter is based on a practical application of a theorem derived by the inventor of the present disclosure. The theorem may be referred to as a “characterization theorem” to reflect the fact that it provides an approach for characterizing the global minimax optimality of a given FIR filter h[n], n=0, 1, . . . , N, where the optimality is evaluated with respect to the magnitude response of this filter, |H(ejω)|, as compared to the desired filter response, D(ω), which is unity in the passband ΩP and zero in the stopband ΩS. In particular, the characterization theorem allows evaluating whether a given filter has a magnitude response that is the best approximation to D(ω) in that no other magnitude response can be achievable for the same order of FIR filters that would attain a smaller infinity norm on the weighted error function Wdes(ω)(|H(ejω)|−D(ω)). The characterization theorem enables characterizing optimality for both real-valued and complex-valued filter coefficients, and does not require any symmetry in the coefficients, thus being applicable to all non-linear phase FIR filters.
In turn, observations from the characterization theorem enable an efficient method, as described herein, for designing non-linear phase FIR filters in cases where only the magnitude response is specified and the phase is not constrained. Such a method is referred to herein as a “FIR filter design method.” While the FIR filter design method is not restricted to a particular phase response, it, nevertheless, advantageously allows choosing a desired phase design, e.g. a minimum phase design, without compromising global optimality of the filter with respect to magnitude. The next section sets forth the characterization theorem, provided for the sake of completeness in terms of mathematical proof of the optimality of the FIR filter design method proposed herein. After that, the filter design method is described.
As will be appreciated by one skilled in the art, aspects of the present disclosure, in particular the FIR filter design approach described herein, may be embodied in various manners—e.g. as a method, a system, a computer program product, or a computer-readable storage medium. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” At least some functions described in this disclosure may be implemented as an algorithm executed by one or more processing units, e.g. one or more microprocessors, of one or more computers. In various embodiments, different steps and portions of the steps of each of the methods described herein may be performed by different processing units. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s), preferably non-transitory, having computer readable program code embodied, e.g., stored, thereon. In various embodiments, such a computer program may, for example, be downloaded (updated) to the existing devices and systems (e.g. to the existing filter modules, electromagnetic receivers or controllers of such filters or receivers, etc.) or be stored upon manufacturing of these devices and systems.
Other features and advantages of the disclosure are apparent from the following description, and from the claims and examples.
Characterization Theorem
Assume that a filter with coefficients h[n], n=0, 1, . . . N, and a frequency response H(ejω) is provided with the passband ΩP and the stopband ΩS of frequencies to approximate the desired magnitude response D(ω) defined by equation (1) above.
Further assume a desired weight function Wdes(ω) specifying the relative emphasis on the suppression of error in the stopband as compared to the passband by equation (2) above. The scalar Kdes may be given as a part of design specification (i.e. be provided as an input to the FIR filter design method/algorithm described herein).
The weighted error function EW(ω) achieved by the designed filter h[n], and the bounds on passband error δP (i.e. the maximum (absolute) deviation of |H(ejω)| from unity in the passband) and stopband error δS (i.e. the maximum (absolute) deviation of |H(ejω)| from zero in the stopband) may be defined as

EW(ω)=Wdes(ω)(|H(ejω)|−D(ω)),  (3)

δP=maxωεΩP | |H(ejω)|−1 |,  (4)

and

δS=maxωεΩS |H(ejω)|,  (5)

respectively.
The characterization theorem may then be formulated as follows: a filter h[n] attains the globally minimax-optimal magnitude response if and only if the adjusted weighted error function

E′W(ω)=W′des(ω)(|H(ejω)|−D′(ω))  (6)

exhibits at least (N+2) alternations in total over the passband and stopband for real-valued coefficients, or at least (2N+2) alternations for complex-valued coefficients, where D′(ω) is an adjusted desired magnitude response and W′des(ω) is a correspondingly adjusted weight function, described below.
As is known in the art, “alternations” are defined as the frequency points at which the weighted error function attains its extremal values, where an extremal value is considered to be an alternation if its sign is opposite to that of the previous extremal value and its magnitude is equal to the magnitude of the previous extremal value (a positive alternation is followed by a negative alternation with the same magnitude, and vice versa). In general, the term “extremal value” refers to a local minimum or a local maximum value, i.e. a point with a lower or larger value than its neighboring points, respectively. The optimality of linear phase filters is characterized by counting the alternations of the weighted error function computed using the desired response and the provided desired weight function. The characterization provided herein for the magnitude response, as opposed to the frequency response, deals with the adjusted weighted error function described in equation (6) above. Such an adjustment can be proven mathematically, but intuitively it already appears to be needed because magnitudes can never go below zero. The adjusted desired/target magnitude response D′(ω) in the passband may be chosen as the midpoint of the error band [1−δP, 1+δP], while the adjusted desired/target magnitude response D′(ω) in the stopband may be chosen as the midpoint of the error band [0, δS].
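For illustration, the definition of alternations above can be expressed as a short routine that counts alternations of a densely sampled error function; the grid, tolerance, and test signal below are arbitrary choices for this sketch.

```python
import numpy as np

def count_alternations(err, tol=1e-6):
    """Count alternations of a densely sampled error function.

    Local extrema (including the endpoints) are counted as alternations
    when their sign is opposite to that of the previously counted
    extremum and their magnitudes agree within tol, per the definition
    given above.
    """
    # indices of interior local extrema, plus the two endpoints
    interior = [i for i in range(1, len(err) - 1)
                if (err[i] - err[i - 1]) * (err[i + 1] - err[i]) <= 0]
    idx = [0] + interior + [len(err) - 1]

    count, prev = 0, None
    for i in idx:
        if prev is None or (np.sign(err[i]) == -np.sign(err[prev])
                            and abs(abs(err[i]) - abs(err[prev])) < tol):
            count, prev = count + 1, i
    return count

w = np.linspace(0, np.pi, 4001)
print(count_alternations(np.cos(5 * w)))   # cos(5w) alternates 6 times on [0, pi]
```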
The number of required alternations for filters with complex-valued coefficients is larger than that for filters with real-valued coefficients. This is also consistent with intuition because it reflects the additional degrees of freedom in choosing the filter coefficients when the constraint to be real-valued is relaxed, and can also be mathematically proven.
Proposed FIR Filter Design
In the following, a design algorithm for filters restricted to having real-valued coefficients is described; therefore, (N+2) alternations are required. The same reasoning applies to filters with complex-valued coefficients simply by requiring (2N+2) alternations, i.e. by replacing all occurrences of (N+2) with (2N+2). The characterization theorem formulated above requires the adjusted weighted error E′W(ω) to have at least N+2 alternations in total over the passband and stopband. Because this condition is also sufficient for unique optimality, it suffices to find a filter that actually attains N+2 alternations in E′W(ω). In doing so, the fact that the frequencies at which alternations occur in E′W(ω) are also the alternation frequencies of the adjusted weighted error E′P(ω) for the frequency response P(ejω) of the autocorrelation function (i.e. the Fourier transform of p[n]) can be used, where the error E′P(ω) is defined similarly to E′W(ω). More specifically, if |H(ejω)| attains an extremal value at a specific frequency and hence forms an alternation in E′W(ω), then the Fourier transform of the autocorrelation function, P(ejω)=|H(ejω)|², will also attain an extremal value and form an alternation in the error E′P(ω) at the same frequency. Due to the specific choices of the weighted errors E′W(ω) and E′P(ω) for |H(ejω)| and P(ejω), respectively, they have an equal number of alternations.
Establishing that E′W(ω) and the error E′P(ω) have the same number of alternations advantageously enables carrying the design into the autocorrelation domain, obtaining an autocorrelation function that satisfies the required number of alternations, and recovering the filter coefficients that will accept this function as its autocorrelation function. Designing an autocorrelation function that will have enough alternations is much easier than designing the original filter because the autocorrelation function is a zero-phase sequence. More specifically, the autocorrelation p[n] of an Nth order filter h[n] is of length 2N+1; it is even-symmetric if h[n] has real-valued coefficients, or conjugate-symmetric if h[n] has complex-valued coefficients. This allows its Fourier transform P(ejω) to be expressed as a real-valued function that is a linear combination of cosines if even-symmetric, or a linear combination of sines and cosines if conjugate-symmetric. In both cases, the alternation theorem and the Remez Exchange algorithm, as well as many other efficient algorithms known in the art, can be used to successfully characterize and design the optimal autocorrelation sequence.
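The symmetry of p[n] and the squaring relationship P(ejω)=|H(ejω)|² noted above can be illustrated numerically; the coefficients below are arbitrary illustrative values.

```python
import numpy as np

h = np.array([0.2, 0.5, 1.0, 0.5, 0.1])          # order N = 4, real-valued
p = np.correlate(h, h, mode="full")               # p[n], lags -N..N, length 2N + 1

# p[n] is even-symmetric for real-valued h[n] ...
assert np.allclose(p, p[::-1])

# ... and its Fourier transform equals |H(e^jw)|^2, real and non-negative.
w = np.linspace(0, np.pi, 512)
H = np.polyval(h[::-1], np.exp(-1j * w))          # H(e^jw) = sum_n h[n] e^{-jwn}
n = np.arange(-4, 5)                              # lag indices of p
P = (p[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)
assert np.allclose(P.real, np.abs(H) ** 2) and np.allclose(P.imag, 0, atol=1e-10)
```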
Designing a zero-phase sequence that approximates an ideal filter response and treating that as the autocorrelation of an FIR filter has been used as a non-linear phase FIR filter design method in the past. However, since the design specifications such as relative weight on stopband versus passband in the autocorrelation domain do not remain the same for the filter due to the squaring relationship between P(ejω) and |H(ejω)|, the resulting filter does not necessarily reflect the desired weight. Furthermore, no optimality arguments are available for the final design because the optimality of the autocorrelation sequence for one set of metrics does not make the corresponding filter optimal for the same metrics.
The characterization theorem proposed herein can be viewed as proof that one only needs to design the autocorrelation sequence such that (i) E′P(ω) has at least N+2 alternations, (ii) |H(ejω)|=√(P(ejω)) swings symmetrically around unity in the passband, i.e. its extremal values become 1+δP and 1−δP for some positive δP, and (iii) the maximum value δS of |H(ejω)|=√(P(ejω)) in the stopband satisfies the desired weight constraint, i.e., δP/δS=Kdes.
By the uniqueness of the globally minimax-optimal magnitude response from the characterization theorem provided above, when such an autocorrelation function is found, then |H(ejω)|=√(P(ejω)) will be the optimal solution. The actual filter coefficients can be obtained by spectral factorization of p[n] or any other technique known in the art for recovering the original function from its autocorrelation sequence, all of which are within the scope of the present disclosure. There will be more than one choice for the filter, all of which have the same magnitude response, including a minimum phase choice and a maximum phase choice among all others. The relationship between the extremal values of the magnitude response |H(ejω)| and those of P(ejω)=|H(ejω)|² is given in
Two steps of the design algorithm can be summarized as follows.
In the first step, an autocorrelation function p[n] is designed such that the Fourier transform of the autocorrelation function, P(ejω), satisfies the properties listed above. Such an autocorrelation function may be designed using any suitable method as known in the art. In some embodiments, such an autocorrelation function may be designed using the approach proposed in Section C below.
In the second step, a filter h[n] is determined such that its magnitude response |H(ejω)| satisfies |H(ejω)|=√(P(ejω)). Any suitable method as known in the art may be used to obtain such a filter, including but not limited to spectral factorization of p[n].
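A minimal sketch of this second step, assuming spectral factorization via root finding: the example autocorrelation is built from a known filter so that the recovery can be checked. For simplicity, the sketch assumes P(z) has no zeros exactly on the unit circle; optimal designs typically place double zeros there, which would need to be split evenly between the two factors.

```python
import numpy as np

h_true = np.array([1.0, 0.9, 0.3])                # known minimum-phase filter
p = np.correlate(h_true, h_true, mode="full")     # its autocorrelation, lags -N..N

# The zeros of P(z) = sum_m p[m] z^{-m} come in conjugate-reciprocal pairs;
# keeping the zeros inside the unit circle yields the minimum-phase factor.
roots = np.roots(p)
inside = roots[np.abs(roots) < 1.0]

h = np.real(np.poly(inside))                      # monic minimum-phase polynomial
h *= np.sqrt(p[len(p) // 2] / np.sum(h * h))      # match autocorrelation at lag 0

assert np.allclose(np.correlate(h, h, mode="full"), p, atol=1e-8)
```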
The optimal autocorrelation function or, equivalently, its Fourier Transform such as the one shown in
One approach that may be used to obtain such an autocorrelation is to choose the target function to be the ideal filter response, which is unity in the passband and zero in the stopband, and choose the weight on stopband such that the frequency response of the obtained filter G(ejω) can be scaled and shifted to look like the Fourier transform of the autocorrelation shown in
With those constraints, a relationship between the weight K that needs to be used in designing the filter in
The mathematical details on the derivation of the relationship of equation (10) are provided below.
Equation (10) provides an implicit and non-linear equation that can be solved efficiently, for example using an iterative procedure. In various embodiments of using an iterative procedure for solving equation (10), the search space on K may be cut at each iteration in e.g. a binary search fashion or using Newton-Raphson method. Once the appropriate value of K is found (i.e. a value of K for which ΔP computed from the G(ejω) is equal to the right-hand side of equation (10)), the filter can be designed by shifting and scaling G(ejω) to obtain the frequency response of the autocorrelation P(ejω) in
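The structure of such an iterative procedure may be sketched as a binary search on K. The function measure_delta_p below is a hypothetical stand-in for designing g[n] with stopband weight K and measuring its maximum passband error ΔP; it is replaced here by a toy monotone model so that the skeleton runs on its own. The closed form used for the right-hand side of equation (10) is reconstructed from the derivation section and should likewise be treated as an assumption of this sketch.

```python
import numpy as np

K_DES = 2.0   # desired passband/stopband weight ratio (illustrative value)

def rhs(K):
    # Right-hand side of the implicit equation (10), reconstructed from
    # the derivation section below (an assumption of this sketch).
    return 8 * K_DES**2 * K / (K**2 + 16 * K_DES**4 - 8 * K_DES**2)

def measure_delta_p(K):
    # Hypothetical stand-in for "design g[n] with stopband weight K and
    # measure its maximum passband error"; a toy monotone model keeps
    # the skeleton self-contained.
    return 0.5 * K / (K + 40.0)

# Binary search: the measured passband error grows with K while the
# target value shrinks, so the sign of the mismatch selects the half.
lo, hi = 4 * K_DES * (K_DES + 1), 1e4   # lower bound per the initialization step
for _ in range(100):
    K = 0.5 * (lo + hi)
    if measure_delta_p(K) > rhs(K):
        hi = K
    else:
        lo = K

K_star = 0.5 * (lo + hi)
assert abs(measure_delta_p(K_star) - rhs(K_star)) < 1e-9
```

A Newton-Raphson variant would replace the interval halving with a derivative-based update, at the cost of estimating how ΔP varies with K.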
Prior to the start of the method 300, the filter configuration unit 404 may be provided with a set of filter configuration parameters, shown in the system illustration of
The method 300 may begin with the filter configuration unit 404, at 302, initializing K (i.e., given ΩS, ΩP and Kdes, setting an initial guess for K). In some embodiments, K may advantageously be selected such that K≧4Kdes(Kdes+1), for a physically meaningful design.
At 304, the filter configuration unit 404 computes coefficients of the minimax-optimal even-symmetric or conjugate-symmetric (zero-phase in both cases) filter g[n] of order 2N. The coefficients are computed to approximate the target function

D(ω)=1 for ωεΩP, and D(ω)=0 for ωεΩS,  (11)
with the weight function

W(ω)=1 for ωεΩP, and W(ω)=K for ωεΩS.  (12)
To that end, the filter configuration unit 404 may be configured to use the Remez Exchange algorithm, the Parks-McClellan algorithm, or any other method as known in the art.
In step 306, the filter configuration unit 404 may compute G(ejω), the frequency response of g[n], and the maximum value of passband error ΔP for this g[n], the latter being equivalent to the maximum value of the absolute weighted error |W(ω)(G(ejω)−D(ω))|.
In step 308, the filter configuration unit 404 determines whether the value of the passband error ΔP computed in step 306 satisfies the equality in equation (10) (i.e. whether the value of the passband error computed in step 306 is equal to the value of the right-hand side of the equation (10)). If so, then the method 300 proceeds to step 312. Otherwise, as shown in
In step 312, the filter configuration unit 404 may compute the scale and shift coefficients a and b, e.g. using equations (24) and (25).
In step 314, the filter configuration unit 404 may compute the function p[n] from the function g[n] determined in step 304, using the scale and shift coefficients computed in step 312. In some embodiments, the function p[n] may be computed as p[n]=a·g[n]+b·δ[n], where δ[n] is the unit impulse function (not to be confused with passband or stopband ripples of |H(ejω)|, namely δP or δS). This p[n] is the autocorrelation sequence that was sought after.
In step 316, the filter configuration unit 404 may compute the coefficients of h[n] based on the autocorrelation sequence p[n] using any suitable method, including but not limited to the spectral factorization method or the Discrete Hilbert Transformation relationship between the magnitude and phase of a minimum phase filter, also known as the Bayard-Bode relation.
There will be more than one filter for which the autocorrelation sequence is p[n]. Depending on what kind of phase is specified by the design (as provided or pre-stored in the filter configuration unit 404), e.g. minimum phase or maximum phase, the filter configuration unit 404 may be configured to appropriately choose one or more filters, particularly if spectral factorization is used. Otherwise, if the Bayard-Bode relation is used, the resulting filter will be a minimum phase filter. However, as one skilled in the art will recognize, other phase characteristics can be obtained from a minimum phase filter by replacing “zeros” of the filter with their conjugate reciprocals as desired. Once the locations of the zeros are determined, the coefficients of filters with other phase characteristics can be computed using the relationship between a polynomial and its roots, as known in the art.
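The conjugate-reciprocal replacement can be sketched numerically; the coefficients below are arbitrary illustrative values, and flipping all zeros (yielding the maximum phase counterpart) is shown to leave the magnitude response unchanged.

```python
import numpy as np

h_min = np.array([1.0, 0.9, 0.3])       # minimum phase: zeros inside unit circle
zeros = np.roots(h_min)
assert np.all(np.abs(zeros) < 1.0)

# Replace each zero by its conjugate reciprocal to obtain the maximum
# phase counterpart; rescale so both share the same magnitude response.
flipped = 1.0 / np.conj(zeros)
h_max = np.real(np.poly(flipped))
h_max *= np.sqrt(np.sum(h_min**2) / np.sum(h_max**2))

w = np.linspace(0, np.pi, 256)
Hmin = np.polyval(h_min[::-1], np.exp(-1j * w))   # H(e^jw) = sum_n h[n] e^{-jwn}
Hmax = np.polyval(h_max[::-1], np.exp(-1j * w))
assert np.allclose(np.abs(Hmin), np.abs(Hmax), atol=1e-9)
```

Flipping only a subset of the zeros would produce the intermediate (mixed phase) choices mentioned above, all with the same magnitude response.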
Derivation of Equation (10)
This section presents the derivation of the equation (10) provided above.
The scaling coefficient a and the shifting coefficient b may be selected such that the midpoints of the passband and stopband ranges match those of the autocorrelation in

a·1+b=1+δP²  (13)

and

a·0+b=δS²/2,  (14)

which yields

a=1+δP²−δS²/2  (15)

and

b=δS²/2.  (16)
The relative weight between passband and stopband does not change after scaling the filter response to match that of the autocorrelation, therefore the weights may be identical in both:

K=ΔP/ΔS,  (17)

which, in terms of the ripples of the autocorrelation, becomes

K=2δP/(δS²/2)=4δP/δS².  (18)
Since δP/δS=Kdes, K may be written as

K=4Kdes/δS, i.e. δS=4Kdes/K.  (19)
In order to match the upper bound of the filter response in the stopband to that of the autocorrelation after the scale and shift,

δS²=a·ΔS+b.  (20)
Inserting the values of the scale and shift coefficients a and b from equations (15) and (16), and inserting δS=4Kdes/K from equation (19) into the equation (20) results in the following:

8Kdes²/K²=(1+16Kdes⁴/K²−8Kdes²/K²)·ΔS.  (21)
Solving equation (21) for ΔS yields

ΔS=8Kdes²/(K²+16Kdes⁴−8Kdes²),  (22)
and, since ΔP=KΔS, the following is obtained:

ΔP=8Kdes²·K/(K²+16Kdes⁴−8Kdes²),  (23)

which is the relationship of equation (10).
Finally, once the appropriate weight K that satisfies this equation is found, the scale and shift coefficients can be computed directly from the parameters of this filter. Using equations (19) and (16), the shifting coefficient b may be determined as:

b=δS²/2=8Kdes²/K².  (24)
From equations (24) and (20), the scaling coefficient a may be determined as:

a=(δS²−b)/ΔS=8Kdes²/(K²·ΔS).  (25)
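As a numerical sanity check of the relations above (as reconstructed here), arbitrary illustrative values of Kdes and K can be inserted: after the scale and shift, the equiripple extremes of G(ejω) must land exactly on the ripple bands of P(ejω)=|H(ejω)|².

```python
import numpy as np

# Illustrative values; any K_des > 0 and sufficiently large K behave the same.
K_des, K = 2.0, 100.0

d_s = 4 * K_des / K                      # delta_S from equation (19)
d_p = K_des * d_s                        # delta_P = K_des * delta_S
b = d_s**2 / 2                           # shifting coefficient, equations (16)/(24)
a = 1 + d_p**2 - d_s**2 / 2              # scaling coefficient, equation (15)
D_p = 8 * K_des**2 * K / (K**2 + 16 * K_des**4 - 8 * K_des**2)   # equation (23)
D_s = D_p / K                            # equiripple relation for g[n]

# Passband extremes of a*G + b must equal (1 +/- d_p)^2 ...
assert np.isclose(a * (1 + D_p) + b, (1 + d_p) ** 2)
assert np.isclose(a * (1 - D_p) + b, (1 - d_p) ** 2)
# ... and stopband extremes must span exactly [0, d_s^2].
assert np.isclose(a * D_s + b, d_s**2)
assert np.isclose(a * (-D_s) + b, 0.0)
```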
System View of an Improved Receiver and Digital Filter
As also shown in
It should be noted that, in order to not clutter the drawing, receiver 400 illustrates signal processing components of a receiver and does not illustrate other components which are typically present in receivers. For example, a person of ordinary skill in the art would recognize that the receiver 400 may further include one or more antennas for receiving signals, an integrated circuit that can provide an analog front end for receiving signals and converting analog input signals to digital data samples of the analog input signal, various interface ports, etc. In an embodiment, an analog front end can be configured to communicate with the processor 406 to provide digital data samples, which the processor 406 would process to filter signals with frequency contributions of interest ΩP, while cancelling, reducing, or rendering below the noise threshold of the detection mechanism contributions to the received signals at frequencies ωεΩS other than those in the band of interest.
Teachings provided herein are applicable to digital filters configured to filter electromagnetic signals in various frequency ranges (e.g. in the radio range, in the optical range, etc.). Furthermore, these teachings are applicable to digital filtering of signals detected by receivers other than electromagnetic receivers, such as e.g. sonar receivers.
The processor 406 may be configured to communicatively couple to other system elements via one or more interconnects or buses. Such a processor may include any combination of hardware, software, or firmware providing programmable logic, including by way of non-limiting example a microprocessor, a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic array (PLA), an application specific integrated circuit (ASIC), or a virtual machine processor. The processor 406 may be communicatively coupled to the memory element 408, for example in a direct-memory access (DMA) configuration. Such a memory element may include any suitable volatile or non-volatile memory technology, including double data rate (DDR) random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), flash, read-only memory (ROM), optical media, virtual memory regions, magnetic or tape memory, or any other suitable technology. Any of the memory items discussed herein should be construed as being encompassed within the broad term “memory element.” The information being tracked or sent to the digital filter 402, the filter configuration unit 404, the processor 406, or the memory 408 could be provided in any database, register, control list, cache, or storage structure, all of which can be referenced at any suitable timeframe. Any such storage options may be included within the broad term “memory element” as used herein. Similarly, any of the potential processing elements, modules, and machines described herein should be construed as being encompassed within the broad term “processor.” Each of the elements shown in
In certain example implementations, mechanisms for assisting configuration of the digital filter 402 and filtering of the input signal 410 as outlined herein may be implemented by logic encoded in one or more tangible media, which may be inclusive of non-transitory media, e.g., embedded logic provided in an ASIC, in DSP instructions, software (potentially inclusive of object code and source code) to be executed by a processor, or other similar machine, etc. In some of these instances, memory elements, such as e.g. memory 408 shown in
Exemplary Data Processing System
As shown in
The memory elements 504 may include one or more physical memory devices such as, for example, local memory 508 and one or more bulk storage devices 510. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 500 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 510 during execution.
Input/output (I/O) devices depicted as an input device 512 and an output device 514, optionally, can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in
A network adapter 516 may also, optionally, be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 500, and a data transmitter for transmitting data from the data processing system 500 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 500.
As pictured in
In a first set of Examples, Examples A, Example 1A provides a computer-implemented method for determining coefficients h[n], where n=0, 1, . . . , N, of a finite impulse response (FIR) digital filter of order N having a magnitude response configured to approximate a response of an ideal FIR filter configured to pass components of signals at frequencies ω within a set of passband frequencies ΩP and to suppress components of signals at frequencies within a set of stopband frequencies ΩS so that a ratio of an error in the set of passband frequencies ΩP (i.e. an error δP for frequencies ω∈ΩP) and an error in the set of stopband frequencies ΩS (i.e. an error δS for frequencies ω∈ΩS) is equal to or within a tolerance range of Kdes, the method including: initializing a value of a scalar K; determining coefficients g[n] of a minimax-optimal even-symmetric or conjugate-symmetric filter (even-symmetric if h[n] is restricted to real-valued coefficients, and conjugate-symmetric if h[n] is allowed to take on complex-valued coefficients) of order 2N to approximate a response of an FIR filter configured to pass components of signals at frequencies within the set of passband frequencies ΩP and to suppress components of signals at frequencies within the set of stopband frequencies ΩS so that a ratio of an error in the set of passband frequencies ΩP (i.e. an error ΔP for frequencies ω∈ΩP) and an error in the set of stopband frequencies ΩS (i.e. an error ΔS for frequencies ω∈ΩS) is equal to or within a tolerance range of K; determining a frequency response G(e^(jω)) of the minimax-optimal even-symmetric or conjugate-symmetric filter with the determined coefficients g[n]; determining a maximum passband error ΔP from the determined frequency response G(e^(jω)); determining whether the determined maximum passband error ΔP satisfies a predefined condition with respect to a comparison value based on values of the scalars K and Kdes; and, when the determined maximum passband error ΔP satisfies the condition with respect to the comparison value, computing a set of values p[n] by scaling the determined coefficients g[n] by a first scaling value a and adding to the scaled coefficients a*g[n] a unit impulse δ[n] scaled by a second scaling value b, and determining the coefficients h[n] as a set of (N+1) values for which the set of values p[n] is an autocorrelation sequence.
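The closing step of Example 1A — forming p[n] = a·g[n] + b·δ[n] and recovering h[n] as the sequence whose autocorrelation is p[n] — can be sketched in plain Python. This is an illustrative sketch only, not the claimed implementation; the function names `autocorrelation` and `build_p` are hypothetical, and the snippet checks the defining relationship rather than performing the recovery of h[n] itself (which would be done, e.g., by spectral factorization).

```python
def autocorrelation(h):
    """Autocorrelation p[k] = sum_n h[n] * h[n + k] for k = -N..N (real-valued h)."""
    N = len(h) - 1
    return [
        sum(h[n] * h[n + k] for n in range(max(0, -k), min(len(h), len(h) - k)))
        for k in range(-N, N + 1)
    ]

def build_p(g, a, b):
    """p[n] = a * g[n] + b * delta[n], with the unit impulse at the centre tap.

    g is assumed to have odd length 2N + 1 (an order-2N even-symmetric filter).
    """
    assert len(g) % 2 == 1
    p = [a * gn for gn in g]
    p[len(g) // 2] += b  # delta[n] contributes only at the centre sample
    return p

# Sanity check of the defining relationship: for h = [1, 0.5], the
# autocorrelation is the symmetric length-3 sequence [0.5, 1.25, 0.5].
h = [1.0, 0.5]
p = autocorrelation(h)  # -> [0.5, 1.25, 0.5]
```

Note that p is always conjugate-symmetric about its centre, which is exactly what makes the scaled-and-shifted order-2N design a candidate autocorrelation sequence for an order-N filter.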
Example 2A provides the method according to Example 1A, where determining the coefficients g[n] of the minimax-optimal even-symmetric or conjugate-symmetric filter includes determining the coefficients g[n] using a Remez Exchange algorithm or a Parks-McClellan algorithm.
Example 3A provides the method according to Examples 1A or 2A, where determining the frequency response G(e^(jω)) of the minimax-optimal even-symmetric or conjugate-symmetric filter includes computing a Fourier transform of the determined coefficients g[n].
Example 4A provides the method according to any one of the preceding Examples A, where determining the maximum passband error ΔP includes determining a maximum absolute deviation of a value of G(e^(jω)) from a value of 1 for frequencies within the set of passband frequencies ΩP.
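Examples 3A and 4A can be illustrated together: G(e^(jω)) is the discrete-time Fourier transform of g[n], and ΔP is the largest deviation of that response from 1 over the passband. The following is a hedged pure-Python sketch; the function names and the uniform frequency grid are illustrative choices, not taken from the disclosure.

```python
import cmath

def dtft(g, w):
    """Evaluate G(e^{jw}) = sum_n g[n] * e^{-jwn} at a single frequency w."""
    return sum(gn * cmath.exp(-1j * w * n) for n, gn in enumerate(g))

def max_passband_error(g, passband):
    """Delta_P: maximum absolute deviation of G(e^{jw}) from 1 over Omega_P."""
    return max(abs(dtft(g, w) - 1) for w in passband)

# For the trivial "filter" g = [1] (an identity), the response is exactly 1
# at every frequency, so the passband error vanishes on any grid.
grid = [k * 0.01 for k in range(100)]
err = max_passband_error([1.0], grid)  # -> 0.0
```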
Example 5A provides the method according to any one of the preceding Examples A, where the comparison value is a value indicative of (e.g., equal to or proportional to)
Example 6A provides the method according to any one of Examples 1A-5A, where the condition with respect to the comparison value includes a difference between the determined maximum passband error ΔP and the comparison value being within a predefined margin with respect to 0.
Example 7A provides the method according to any one of Examples 1A-5A, where the condition with respect to the comparison value includes a ratio between the determined maximum passband error ΔP and the comparison value being within a predefined margin with respect to 1.
Example 8A provides the method according to any one of Examples 1A-5A, where the condition with respect to the comparison value includes the determined maximum passband error ΔP being equal to the comparison value.
Example 9A provides the method according to any one of the preceding Examples A, where the scaling coefficient a is equal to
Example 10A provides the method according to any one of the preceding Examples A, where the shifting coefficient b is equal to
Example 11A provides the method according to any one of the preceding Examples A, where determining the coefficients h[n] as the set of (N+1) values for which the set of values p[n] is the autocorrelation sequence includes performing spectral factorization on the set of values p[n] or using the discrete Hilbert transform relationship between the magnitude response and the phase response of a minimum-phase filter, also known as the Bayard-Bode relation.
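For intuition, the spectral factorization of Example 11A can be carried out in closed form when N=1: if p = [c, b, c] is to be the autocorrelation of h = [h0, h1], then h0·h1 = c and h0² + h1² = b, a quadratic system. The sketch below is illustrative only (real coefficients, minimum-phase branch assumed); it picks the root with |h0| ≥ |h1|, which places the filter's single zero at -h1/h0 inside the unit circle.

```python
import math

def spectral_factor_order1(p):
    """Recover the minimum-phase h = [h0, h1] whose autocorrelation is p = [c, b, c].

    Solves h0*h1 = c and h0^2 + h1^2 = b; requires b >= 2*|c| for p to be a
    valid autocorrelation sequence.
    """
    c, b = p[0], p[1]
    disc = math.sqrt(b * b - 4.0 * c * c)
    h0 = math.sqrt((b + disc) / 2.0)  # larger-magnitude tap -> minimum phase
    return [h0, c / h0] if h0 else [0.0, 0.0]

# Round trip: the autocorrelation of [1, 0.5] is [0.5, 1.25, 0.5].
h = spectral_factor_order1([0.5, 1.25, 0.5])  # -> [1.0, 0.5]
```

For general N the same idea applies to the roots of the z-transform of p[n]: they occur in conjugate-reciprocal pairs, and collecting the roots inside the unit circle yields the minimum-phase factor.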
Example 12A provides the method according to any one of the preceding Examples A, where determining the coefficients h[n] as the set of (N+1) values for which the set of values p[n] is the autocorrelation sequence includes determining two or more sets of the (N+1) values for which the set of values p[n] is the autocorrelation sequence, each set of the (N+1) values associated with a different phase characteristic.
Example 13A provides the method according to Example 12A, further including selecting a set of (N+1) values from the two or more sets of the (N+1) values for which the set of values p[n] is the autocorrelation sequence with a specified phase characteristic.
Example 14A provides the method according to Example 13A, where the specified phase characteristic includes one of a minimum phase or a maximum phase.
Example 15A provides the method according to any one of the preceding Examples A, where, when the determined maximum passband error ΔP does not satisfy the condition with respect to the comparison value, the method further includes: performing iterations of changing a value of the scalar K from that for which the determined maximum passband error ΔP did not satisfy the condition with respect to the comparison value, and repeating, for the changed value of the scalar K, determination of the coefficients g[n] of the minimax-optimal even-symmetric or conjugate-symmetric filter, determination of the frequency response G(e^(jω)), determination of the maximum passband error ΔP, and determination of whether the determined maximum passband error ΔP satisfies the condition with respect to the comparison value, until the determined maximum passband error ΔP satisfies the condition with respect to the comparison value.
Methods by which K is incremented or decremented include, but are not limited to, binary search methods, the Newton-Raphson method, etc., all of which are within the scope of the present disclosure.
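The update of K in Example 15A can be driven by any one-dimensional root search. As a sketch only: the callable `gap(K)` below is hypothetical and stands in for the full design step (it would internally run the order-2N design for that K and return ΔP minus the comparison value), and the bisection assumes gap is monotone over the bracket with a sign change.

```python
def bisect_K(gap, k_lo, k_hi, tol=1e-9):
    """Binary search for the K at which gap(K) crosses zero.

    gap(K) is assumed monotone increasing over [k_lo, k_hi] with
    gap(k_lo) <= 0 <= gap(k_hi).
    """
    while k_hi - k_lo > tol:
        k_mid = 0.5 * (k_lo + k_hi)
        if gap(k_mid) > 0.0:
            k_hi = k_mid  # overshoot: shrink the bracket from above
        else:
            k_lo = k_mid  # undershoot: shrink the bracket from below
    return 0.5 * (k_lo + k_hi)

# Stand-in for the real design loop: this toy "gap" vanishes at K = 0.3.
k = bisect_K(lambda K: K - 0.3, 0.0, 1.0)
```

A Newton-Raphson update would converge in fewer iterations but requires an estimate of the derivative of the gap with respect to K; bisection needs only the bracketing sign change.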
In a second set of Examples, Examples B, Example 1B provides a system for filtering a signal received by an electromagnetic receiver. The system includes a filter configuration unit and a digital filter. The filter configuration unit is for configuring a first filter of order N to be applied to the signal by determining coefficients h[n] of the first filter, where a target response of the first filter is specified by a target (desired) response function D(ω) and a variable Kdes indicative of a target ratio of an error in a set of passband frequencies ΩP and an error in a set of stopband frequencies ΩS of the first filter. The filter configuration unit is configured to determine the coefficients of the first filter by iterating steps of i) setting a variable K indicative of a ratio of an error in the set of passband frequencies ΩP and an error in the set of stopband frequencies ΩS of a second filter of order 2N to a new value, and ii) determining coefficients g[n] of the second filter with a target response specified by the target response function D(ω) and the variable K, until a maximum passband error ΔP of a frequency response G(e^(jω)) of the second filter satisfies a condition with respect to a comparison value based on the variables K and Kdes. The filter configuration unit is further configured for determining the coefficients h[n] of the first filter based on the coefficients g[n] of the second filter. The digital filter is configured to then generate a filtered signal by applying the first filter with the computed coefficients h[n] to the signal.
Example 2B provides the system according to Example 1B, where the second filter is a minimax-optimal even-symmetric filter and the coefficients h[n] of the first filter include real values.
Example 3B provides the system according to Example 1B, where the second filter is a minimax-optimal conjugate-symmetric filter and the coefficients h[n] of the first filter include complex values.
Example 4B provides the system according to any one of the preceding Examples B, where determining the coefficients h[n] of the first filter based on the coefficients g[n] of the second filter includes computing a set of values p[n] by scaling the coefficients g[n] of the second filter by a scaling coefficient a and adding a unit impulse δ[n] scaled by a shifting coefficient b, and determining the coefficients h[n] of the first filter as a set of (N+1) values for which the computed set of values p[n] is an autocorrelation sequence.
Example 5B provides the system according to Example 4B, where the scaling coefficient a is equal to
Example 6B provides the system according to Examples 4B or 5B, where the shifting coefficient b is equal to
Example 7B provides the system according to any one of Examples 4B-6B, where determining the coefficients h[n] of the first filter as the set of (N+1) values for which the set of values p[n] is the autocorrelation sequence includes performing a spectral factorization on the set of values p[n] or applying a Bayard-Bode relation to a square root of a frequency response of the set of values p[n], namely |H(e^(jω))| = √(P(e^(jω))).
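The relation invoked in Example 7B, |H(e^(jω))| = √(P(e^(jω))), can be checked numerically for a small case. Below, h = [1, 0.5] and p is its precomputed autocorrelation for lags k = -1..1; at any frequency, |H|² equals the (real-valued) transform of p. The concrete values are purely illustrative.

```python
import cmath

h = [1.0, 0.5]
p = [0.5, 1.25, 0.5]   # autocorrelation of h, for lags k = -1, 0, 1
w = 0.7                # an arbitrary test frequency

H = sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))
# P(e^{jw}) = sum_k p[k] e^{-jwk}; shift the list index i to the lag k = i - 1.
P = sum(pk * cmath.exp(-1j * w * (i - 1)) for i, pk in enumerate(p))

squared_magnitude = abs(H) ** 2  # equals P(e^{jw}) up to floating-point rounding
```

Because p is symmetric and real, P(e^(jω)) is real and non-negative, which is what makes taking its square root well defined.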
Example 8B provides the system according to any one of Examples 4B-7B, where determining the coefficients h[n] as the set of (N+1) values for which the set of values p[n] is the autocorrelation sequence includes determining two or more sets of the (N+1) values for which the set of values p[n] is the autocorrelation sequence, each set of the (N+1) values associated with a different phase characteristic.
Example 9B provides the system according to Example 8B, further including selecting a set of (N+1) values from the two or more sets of the (N+1) values for which the set of values p[n] is the autocorrelation sequence with a specified phase characteristic.
Example 10B provides the system according to Example 9B, where the specified phase characteristic includes one of a minimum phase or a maximum phase.
Example 11B provides the system according to any one of the preceding Examples B, where determining the coefficients g[n] of the second filter includes using a Remez Exchange algorithm or a Parks-McClellan algorithm.
Example 12B provides the system according to any one of the preceding Examples B, where determining the maximum passband error ΔP includes determining a value indicative of a maximum absolute deviation of the frequency response G(e^(jω)) of the second filter from a value of 1 for frequencies within the set of passband frequencies ΩP.
Example 13B provides the system according to any one of the preceding Examples B, where the comparison value is a value indicative of
Example 14B provides the system according to any one of Examples 1B-13B, where the condition with respect to the comparison value includes a difference between the maximum passband error ΔP and the comparison value being within a margin with respect to 0.
Example 15B provides the system according to any one of Examples 1B-13B, where the condition with respect to the comparison value includes a ratio between the maximum passband error ΔP and the comparison value being within a margin with respect to 1.
Example 16B provides the system according to any one of Examples 1B-13B, where the condition with respect to the comparison value includes the maximum passband error ΔP being equal to the comparison value.
Example 17B provides the system according to any one of the preceding Examples B, where the target response of the second filter is a response of the second filter for which a ratio of an error due to a difference between the response of the second filter and the target response function D(ω) in the set of passband frequencies ΩP and an error due to a difference between the response of the second filter and the target response function D(ω) in the set of stopband frequencies ΩS is equal to or within a tolerance range of the variable K.
In another Example B according to any one of the preceding Examples B, the target response of the first filter is a response of the first filter for which a ratio of an error due to a difference between the response of the first filter and the target response function D(ω) in the set of passband frequencies ΩP and an error due to a difference between the response of the first filter and the target response function D(ω) in the set of stopband frequencies ΩS is equal to or within a tolerance range of the variable Kdes.
Example 18B provides the system according to any one of the preceding Examples B, where each of the first filter and the second filter is a finite impulse response (FIR) filter.
Example 19B provides a computer-implemented method for operating a digital filter. The method includes computing coefficients h[n] of a first filter of order N, where a target response of the first filter is specified by a target response function D(ω) and a variable Kdes indicative of a target ratio of an error in a set of passband frequencies ΩP and an error in a set of stopband frequencies ΩS of the first filter, by performing one or more iterations of i) setting a variable K indicative of a ratio of an error in the set of passband frequencies ΩP and an error in the set of stopband frequencies ΩS of a second filter of order 2N to a new value, and ii) determining coefficients g[n] of the second filter, where a target response of the second filter is specified by the target response function D(ω) and the variable K, where the iterations are performed until a maximum passband error ΔP of a frequency response G(e^(jω)) of the second filter satisfies a condition with respect to a comparison value based on the variables K and Kdes. The method further includes determining the coefficients h[n] of the first filter based on the coefficients g[n] of the second filter; and configuring the digital filter to apply the first filter with the coefficients h[n] to a signal to generate a filtered signal.
Example 20B provides the method according to Example 19B, further including receiving a value of the order N, the target response function D(ω) and the variable Kdes via a user interface.
Further Examples provide the methods according to Examples 19B or 20B, the methods further including steps of operating the system according to any one of Examples 1B-18B.
Other Examples provide a system comprising means for implementing the method according to any one of the preceding Examples, a computer program configured to implement the method according to any one of the preceding Examples, one or more non-transitory tangible media encoding logic that include instructions for execution that, when executed by a processor, are operable to perform operations of the method according to any one of the preceding Examples, and a system including at least one memory element configured to store computer executable instructions, and at least one processor coupled to the at least one memory element and configured, when executing the instructions, to carry out the method according to any one of the preceding Examples.
Note that in this Specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in “one embodiment”, “example embodiment”, “an embodiment”, “another embodiment”, “some embodiments”, “various embodiments”, “other embodiments”, “alternative embodiment”, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments.
In one example embodiment, parts or entire electrical circuits of the FIGURES may be implemented on a motherboard of an associated electronic device. The motherboard can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the motherboard can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of digital signal processors, microprocessors, supporting chipsets, etc.), memory elements, etc. can be suitably coupled to the motherboard based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the motherboard as plug-in cards, via cables, or integrated into the motherboard itself.
In another example embodiment, parts or entire electrical circuits of the FIGURES may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application specific hardware of electronic devices. Note that particular embodiments of the present disclosure may be readily included in a system on chip (SOC) package, either in part, or in whole. An SOC represents an IC that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio frequency functions: all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the amplification functionalities may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.
It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of processors and memory elements, logic operations, etc.) have been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the FIGURES may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that parts or entire electrical circuits of the FIGURES and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of parts or entire electrical circuits as potentially applied to a myriad of other architectures.
Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. Although the claims may be presented in single dependency format in the style used before the USPTO, it should be understood that any claim can depend on and be combined with any preceding claim of the same type unless that is clearly technically infeasible.
This application claims the benefit of and priority from U.S. Provisional Patent Application Ser. No. 62/330,084 filed 30 Apr. 2016 entitled “DESIGNING FIR FILTERS WITH GLOBALLY MINIMAX-OPTIMAL MAGNITUDE RESPONSE,” which is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
62/330,084 | 30 Apr. 2016 | US