Equalizers are an important element in many diverse digital information applications, such as voice, data, and video communications. These applications employ a variety of transmission media. Although the various media have differing transmission characteristics, none of them is perfect. That is, every medium induces variation into the transmitted signal, such as frequency-dependent phase and amplitude distortion, multi-path reception, other kinds of ghosting such as voice echoes, and Rayleigh fading. In addition to channel distortion, virtually every sort of transmission also suffers from noise, such as additive white Gaussian noise (“AWGN”). Equalizers are therefore used as acoustic echo cancelers (for example, in full-duplex speakerphones), video deghosters (for example, in digital television or digital cable transmissions), signal conditioners for wireless modems and telephony, and in other such applications.
One important source of error is intersymbol interference (“ISI”). ISI occurs when pulsed information, such as an amplitude modulated digital transmission, is transmitted over an analog channel, such as, for example, a phone line or an aerial broadcast. The original signal begins as a reasonable approximation of a discrete time sequence, but the received signal is a continuous time signal. The shape of the impulse train is smeared or spread by the transmission into a differentiable signal whose peaks relate to the amplitudes of the original pulses. This signal is read by digital hardware, which periodically samples the received signal.
Each pulse produces a signal that typically approximates a sinc wave. Those skilled in the art will appreciate that a sinc wave is characterized by a series of peaks centered about a central peak, with the amplitude of the peaks monotonically decreasing as the distance from the central peak increases. Similarly, the sinc wave has a series of troughs having a monotonically decreasing amplitude with increasing distance from the central peak. Typically, the period of these peaks is on the order of the sampling period of the receiving hardware. Therefore, the amplitude at one sampling point in the signal is affected not only by the amplitude of a pulse corresponding to that point in the transmitted signal, but by contributions from pulses corresponding to other bits in the transmission stream. In other words, the portion of a signal created to correspond to one symbol in the transmission stream tends to make unwanted contributions to the portion of the received signal corresponding to other symbols in the transmission stream.
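A short numerical sketch illustrates this effect. The ±1 pulse train, the normalized sinc pulse shape, and the timing offset below are all illustrative assumptions, not particulars taken from the document:

```python
import math

def sinc(t):
    # Normalized sinc: sin(pi*t)/(pi*t); equal to 1 at t = 0.
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

# Transmitted symbol amplitudes (a hypothetical pulse train).
symbols = [1.0, -1.0, 1.0, 1.0]

def received(t, timing_error=0.0):
    # Each symbol k contributes a sinc pulse centered at t = k.
    return sum(a * sinc(t - k + timing_error) for k, a in enumerate(symbols))

# With perfect timing, each sample sees only its own pulse (zero ISI),
# because every other pulse crosses zero at that instant:
ideal = [received(n) for n in range(len(symbols))]

# A small timing offset lets the tails of neighboring pulses leak in (ISI):
skewed = [received(n, timing_error=0.25) for n in range(len(symbols))]
print(ideal)
print(skewed)
```

With zero offset the samples recover the transmitted amplitudes; with a quarter-symbol offset every sample is contaminated by its neighbors.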
This effect can theoretically be eliminated by proper shaping of the pulses, for example by generating pulses that have zero values at regular intervals corresponding to the sampling rate. However, this pulse shaping will be defeated by the channel distortion, which will smear or spread the pulses during transmission. Consequently, another means of error control is necessary. Most digital applications therefore employ equalization in order to filter out ISI and channel distortion.
Generally, two types of equalization are employed to achieve this goal: automatic synthesis and adaptation. In automatic synthesis methods, the equalizer typically compares a received time-domain reference signal to a stored copy of the undistorted training signal. By comparing the two, a time-domain error signal is determined that may be used to calculate the coefficients of an inverse function (filter). The formulation of this inverse function may be accomplished strictly in the time domain, as is done in Zero Forcing Equalization (“ZFE”) and Least Mean Square (“LMS”) systems. Other methods involve conversion of the received training signal to a spectral representation. A spectral inverse response can then be calculated to compensate for the channel distortion. This inverse spectrum is then converted back to a time-domain representation so that filter tap weights can be extracted.
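As an illustration of the strictly time-domain approach, the following is a minimal LMS sketch. The two-tap channel, the random ±1 training sequence, the three-tap equalizer length, and the step size are all illustrative assumptions:

```python
import random

random.seed(0)
channel = [1.0, 0.4]            # assumed two-tap distorting channel
training = [random.choice((-1.0, 1.0)) for _ in range(2000)]

# Received signal: training symbols smeared by the channel.
received = [sum(h * training[n - k] for k, h in enumerate(channel) if n - k >= 0)
            for n in range(len(training))]

taps = [0.0, 0.0, 0.0]          # three-tap LMS equalizer, zero-initialized
mu = 0.01                       # adaptation step size

for n in range(len(received)):
    window = [received[n - k] if n - k >= 0 else 0.0 for k in range(len(taps))]
    out = sum(c * x for c, x in zip(taps, window))
    err = training[n] - out      # compare to the stored copy of the training signal
    taps = [c + mu * err * x for c, x in zip(taps, window)]

print(taps, abs(err))            # converged taps approximate the channel inverse
```

After convergence the taps approximate the (truncated) inverse of the channel, and the residual error is small.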
In adaptive equalization the equalizer attempts to minimize an error signal based on the difference between the output of the equalizer and the estimate of the transmitted signal, which is generated by a “decision device.” In other words, the equalizer filter outputs a sample, and the decision device determines what value was most likely transmitted. The adaptation logic attempts to keep the difference between the two small. The main idea is that the receiver takes advantage of the knowledge of the discrete levels possible in the transmitted pulses. When the decision device quantizes the equalizer output, it is essentially discarding received noise. A crucial distinction between adaptive and automatic synthesis equalization is that adaptive equalization does not require a training signal.
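The decision device described above can be sketched as a nearest-level slicer. The two-level constellation and the sample values below are illustrative assumptions:

```python
def slicer(sample, levels=(-1.0, 1.0)):
    # Decision device: pick the nearest legal transmitted level,
    # discarding the received noise in the process.
    return min(levels, key=lambda v: abs(v - sample))

# Equalizer output samples (illustrative, lightly noisy):
outputs = [0.9, -1.1, 1.2, -0.85]

decisions = [slicer(y) for y in outputs]
errors = [d - y for d, y in zip(decisions, outputs)]
print(decisions)   # estimated transmitted symbols
print(errors)      # error signal the adaptation logic tries to keep small
```

No training signal is involved: the error is formed entirely from the difference between the equalizer output and the slicer's estimate.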
Error control coding generally falls into one of two major categories: convolutional coding and block coding (such as Reed-Solomon and Golay coding). At least one purpose of equalization is to permit the generation of a mathematical “filter” that is the inverse function of the channel distortion, so that the received signal can be converted back to something more closely approximating the transmitted signal. By encoding the data into additional symbols, additional information can be included in the transmitted signal that the decoder can use to improve the accuracy of the interpretation of the received signal. Of course, this additional accuracy is achieved either at the cost of the additional bandwidth necessary to transmit the additional symbols, or of the additional energy necessary to transmit at a higher frequency.
A convolutional encoder comprises a K-stage shift register into which data is clocked. The value K is called the “constraint length” of the code. The shift register is tapped at various points according to the code polynomials chosen. Several tap sets are chosen according to the code rate. The code rate is expressed as a fraction. For example, a ½ rate convolutional encoder produces an output having exactly twice as many symbols as the input. Typically, the set of tapped data is summed modulo-2 (i.e., the XOR operation is applied) to create one of the encoded output symbols. For example, a simple K=3, ½ rate convolutional encoder might form one bit of the output by modulo-2-summing the first and third bits in the 3-stage shift register, and form another bit by modulo-2-summing all three bits.
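The K=3, rate-1/2 example in this paragraph can be sketched directly. The tap sets follow the description above; the input bits are illustrative:

```python
def conv_encode(bits):
    # K = 3, rate-1/2 encoder from the text: one output symbol is the
    # modulo-2 sum of the 1st and 3rd register stages, the other is
    # the modulo-2 sum of all three stages.
    reg = [0, 0, 0]              # 3-stage shift register
    out = []
    for b in bits:
        reg = [b] + reg[:2]      # clock the new bit in
        out.append(reg[0] ^ reg[2])            # taps: stages 1 and 3
        out.append(reg[0] ^ reg[1] ^ reg[2])   # taps: all three stages
    return out

encoded = conv_encode([1, 0, 1, 1])
print(encoded)   # exactly twice as many symbols as the input, per the 1/2 rate
```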
A convolutional decoder typically works by generating hypotheses about the originally transmitted data, running those hypotheses through a copy of the appropriate convolutional encoder, and comparing the encoded results with the encoded signal (including noise) that was received. The decoder generates a “metric” for each hypothesis it considers. The “metric” is a numerical value corresponding to the degree of confidence the decoder has in the corresponding hypothesis. A decoder can be either serial or parallel—that is, it can pursue either one hypothesis at a time, or several.
One important advantage of convolutional encoding over block encoding is that convolutional decoders can easily use “soft decision” information. “Soft decision” information essentially means producing output that retains information about the metrics, rather than simply selecting one hypothesis as the “correct” answer. For an overly-simplistic example, if a single symbol is determined by the decoder to have an 80% likelihood of having been a “1” in the transmission signal, and only a 20% chance of having been a “0”, a “hard decision” would simply return a value of 1 for that symbol. However, a “soft decision” would return a value of 0.8, or perhaps some other value corresponding to that distribution of probabilities, in order to permit other hardware downstream to make further decisions based on that degree of confidence.
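The hypothesis-and-metric scheme of the two preceding paragraphs can be sketched by brute force: re-encode every candidate data sequence with a copy of the encoder and score it against the corrupted received symbols. The encoder reuses the illustrative K=3, rate-1/2 tap choice; real decoders prune hypotheses (e.g. Viterbi) rather than enumerating them:

```python
from itertools import product

def conv_encode(bits):
    # Copy of the illustrative K=3, rate-1/2 encoder the decoder hypothesizes with.
    reg = [0, 0, 0]
    out = []
    for b in bits:
        reg = [b] + reg[:2]
        out.append(reg[0] ^ reg[2])
        out.append(reg[0] ^ reg[1] ^ reg[2])
    return out

sent = [1, 0, 1, 1]
received = conv_encode(sent)
received[2] ^= 1                 # inject one channel bit error

# Metric: Hamming distance between the received stream and the re-encoded
# version of each hypothesis (lower distance = higher confidence).
metrics = {}
for hypothesis in product((0, 1), repeat=len(sent)):
    enc = conv_encode(list(hypothesis))
    metrics[hypothesis] = sum(a != b for a, b in zip(enc, received))

best = min(metrics, key=metrics.get)   # the "hard decision" survivor
print(best, metrics[best])
```

A hard decision returns only `best`; a soft-decision output would carry the whole metric table (or values derived from it) downstream, so later stages can weigh the decoder's confidence.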
Block coding, on the other hand, has a greater ability to handle larger data blocks, and a greater ability to handle burst errors.
The following is a description of an improvement upon a combined trellis decoder and decision feedback equalizer, as described in U.S. patent application Ser. No. 09/876,547, filed Jun. 7, 2001, which is hereby incorporated herein in its entirety.
Additional background information is contained in the concurrently-filed U.S. utility patent application entitled, “Synchronization Symbol Re-Insertion for a Decision Feedback Equalizer Combined with a Trellis Decoder,” which is also hereby incorporated herein in its entirety.
In a first embodiment, the present invention provides a DFE comprising a series of cascaded fast-feedback pipes. Each fast-feedback pipe comprises: a filter input; a control input; a data input; a multiplexed tap coefficient input; a reuse clock, a multiplier, a multiplexer, a series of data registers, a final data register, and an adder. The reuse clock has a reuse clock frequency that is greater than the symbol clock frequency, and determines the clock period for all other components in the reuse pipe. The multiplier has as input the data input and the multiplexed tap coefficient input, and has as output a multiplier output. The multiplexer has as input the filter input, the control input, and an adder output. The multiplexer also has a multiplexer output. The multiplexer is configured to pass the filter input to the multiplexer output when the control input is in a first state, and to pass the adder output to the multiplexer output when the control input is in a second state. The series of data registers has as input the multiplexer output, and has as output a delay line output. Each of the series of data registers has a single reuse clock period delay. The adder has as inputs the delay line output and the multiplier output, and has as output the adder output. The final data register has as input the adder output and the control input, and also has a final output. The final data register is configured to latch the adder output only when the control input is in the first state. The multiplexed tap coefficient input inputs tap coefficients. Each of the reuse pipes receives a common control input and a common data input, and each of the reuse pipes after a first reuse pipe has as its filter input the final output from a prior reuse pipe.
In a second embodiment, the present invention provides an equalizer filter having a plurality of taps, each tap comprising a multiplier and an adder, and wherein a common input data symbol is simultaneously multiplied by a majority of the plurality of taps' multipliers.
In a third embodiment, the present invention provides a decision feedback equalizer combined with a trellis decoder having only a transposed filter structure.
In a fourth embodiment, the present invention provides a fast-feedback reuse pipe.
In a fifth embodiment, the present invention provides a DFE for interpreting a digital television signal. The fifth embodiment DFE comprises a trellis decoder and a plurality of sub-filter pipelines. The trellis decoder has a plurality of stages and decoding banks. Each of the plurality of sub-filter pipelines is fed intermediate decoded symbols of one of the stages in a trace-back chain of a current decoding bank. The DFE output is formed by summing the plurality of sub-filter pipelines.
Although the characteristic features of this invention will be particularly pointed out in the claims, the invention itself, and the manner in which it may be made and used, may be better understood by referring to the following descriptions taken in connection with the accompanying figures forming a part hereof.
For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the preferred embodiment and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Such alterations and further modifications in the invention, and such further applications of the principles of the invention as described herein as would normally occur to one skilled in the art to which the invention pertains, are contemplated, and desired to be protected.
The present invention provides a transposed structure for a decision feedback equalizer (“DFE”) that can be used, for example, with a combined DFE and trellis decoder, as taught by U.S. patent application Ser. No. 09/884,256 (which is hereby incorporated in its entirety). The transposed structure permits extremely fast and effective ghost cancellation, so that the equalizer provides high quality signal resolution even during severe noise and channel distortion. Consequently, a digital receiver, such as is used in a digital television or cell phone, will have clear reception under conditions where prior digital equipment would completely fail.
The transposed structure of the present invention provides a feedback ghost estimation in only a single symbol clock cycle. In addition to the obvious advantage of a fast ghost estimation, this also provides a ghost estimation that is independent of the number of taps in the transposed pipeline. Prior art transposed filter structures suffer increasingly severe timing problems as the length of the equalizer increases.
The transposed structure of the present invention also permits the use of a fast-feedback reuse circuit, described in detail hereinbelow. The fast-feedback reuse circuit provides a feedback loop that can complete ghost cancellation for the next symbol in only a single symbol clock period. Furthermore, it can complete the final feedback calculation in only a fraction of a symbol clock cycle using a separate, higher frequency clock, termed a “reuse clock.” The fast-feedback reuse circuit also permits logic sharing, whereby the number of logical components necessary to complete ghost cancellation calculations can be substantially reduced, with a corresponding reduction in the cost of the hardware.
It will be appreciated that a DFE can have N×D+M taps, where N is the number of inner decoding stages, D is the number of banks in the trellis decoder, and M is the number of taps after the final decoded symbol of the trellis decoder. When the DFE is implemented in a traditional transverse structure, all N×D+M decoded symbols are fed into the DFE. The latter portion of such a DFE, consisting of the final M taps, takes in a decoded symbol from an accurate delay line (that is, a delay line that provides the same output in a given clock cycle as the input during a previous clock cycle). Thus, the latter portion of such a DFE is not time-critical. Consequently, it can be implemented in either a traditional transverse structure or a transposed pipeline, as disclosed herein, without difficulty.
However, the first portion of such a DFE, consisting of the first N×D taps, takes as input the intermediate decoded symbols from the trellis decoder. Consequently, it is time-critical. The transposed DFE structure shown in
It will be appreciated that both feed-forward equalizers (“FFE”) and DFEs include a filter. Typically, this filter is implemented in a transverse structure as shown in
where x(n) is the input symbol at symbol clock period n, and
where ck(n) is the coefficient of the kth tap in the nth symbol clock period.
It will be appreciated that, when given the same inputs, the output of the (K+1)-tap transposed filter 150 is given by:
When the coefficients of the filter are fixed, y2(n) is equal to y1(n) and the transposed filter is identically equivalent to the transverse filter. On the other hand, when the coefficients vary over time, y2(n) is not necessarily equal to y1(n), and, therefore, the transposed filter is not precisely equivalent to the transverse filter. However, because the tap coefficients in the equalizer change gradually, and slowly on a symbol-by-symbol basis, if the total tap number K is small, the variation of the tap coefficients is very small within a K-symbol neighborhood, and can be ignored. In this case, a given value of the kth tap is approximately equal to its value k symbol clock cycles before:
ck(n)≈ck(n−k), (Eq. 3)
where k=1, 2, . . . , K.
Thus, for a small number of taps, the transposed and transverse structures are functionally equivalent during adaptation operation of the equalizer, and the transposed structure can be employed without practical degradation of the equalizer's performance.
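This equivalence can be checked numerically. The sketch below assumes the standard forms y1(n) = Σk ck(n)·x(n−k) for the transverse filter and y2(n) = Σk ck(n−k)·x(n−k) for the transposed filter; these exact expressions are an assumption consistent with Equation 3, since Equations 1 and 2 are not reproduced here. The input samples and coefficient trajectories are illustrative:

```python
K = 3  # highest tap index; K + 1 taps total

def transverse(x, coeff):
    # y1(n) = sum_k c_k(n) * x(n - k): all coefficients taken at clock n.
    return [sum(coeff(n)[k] * (x[n - k] if n - k >= 0 else 0.0)
                for k in range(K + 1)) for n in range(len(x))]

def transposed(x, coeff):
    # y2(n) = sum_k c_k(n - k) * x(n - k): tap k uses the coefficient value
    # it had k clocks earlier, as happens in a transposed pipeline.
    return [sum((coeff(n - k)[k] if n - k >= 0 else 0.0) *
                (x[n - k] if n - k >= 0 else 0.0)
                for k in range(K + 1)) for n in range(len(x))]

x = [1.0, -1.0, 0.5, 0.25, -0.75, 1.0]

fixed = lambda n: [0.9, -0.3, 0.1, 0.05]              # time-invariant taps
slow = lambda n: [0.9, -0.3 + 1e-4 * n, 0.1, 0.05]    # one slowly adapting tap

y1, y2 = transverse(x, fixed), transposed(x, fixed)
d1, d2 = transverse(x, slow), transposed(x, slow)
print(max(abs(a - b) for a, b in zip(y1, y2)))   # identical when fixed
print(max(abs(a - b) for a, b in zip(d1, d2)))   # tiny when slowly varying
```

With fixed coefficients the two outputs agree exactly; with slowly adapting coefficients they differ only on the order of the per-symbol coefficient change.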
When the DFE 210 is combined with a trellis decoder 220, all N×D inner intermediate decoded symbols held in the trace-back chains in the trellis decoder 220 must be re-arranged into an equivalent N×D symbol delay line in the order they went into the trellis decoder as input, un-decoded symbol samples. This temporal order recovered sequence (the equivalent N×D symbol delay line) can be fed into the DFE 210 to produce the desired ghost estimate. This is illustrated in
As shown in
T(i, j) denotes the cell containing the intermediate decoded symbol stored in the jth stage of the trace-back chain, in relative bank #i, 1≤i≤D, 1≤j≤N. The symbol T(i, j) incurs (j−1)D+(i−1) symbol delays after the cursor symbol that has the same time stamp as the current input symbol to the trellis decoder. The data stored in the same stage of the trace-back chains (D symbols per stage) of all banks composes a continuous delay line. It will be appreciated that in the trellis decoder only the data held in the trace-back chain of the current decoding bank may change their values during the trace-back process. In other words, the data in all cells are not modified when they are moved from the 1st relative bank (i.e. the current bank) to the last relative bank (i.e. the Dth bank, or the next bank), and, therefore, each sequence of D symbols composes an accurate delay line, wherein the data is continuously delayed without modification. Altogether there are N such sequences in an N-stage trellis decoder. On the other hand, the data stored in different stages do not compose an accurate delay line, because these data can change their values. Thus, the N×D inner intermediate decoded symbols are divided into N accurate delay lines, each covering D taps, and made up of the inner intermediate decoded symbols of the same stage of the respective trace-back chains in all banks. In each accurate delay line, the data from the 1st relative bank is the desired data symbol of the 1st tap, and so on through the last relative bank, whose data is the data symbol of the Dth tap. This can be described by the syntax:
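The delay bookkeeping above can be illustrated directly: with D banks and N stages, cell T(i, j) (1-indexed) lags the cursor symbol by (j−1)·D+(i−1) symbol periods, so reading the cells stage by stage and bank by bank recovers the temporal order. The N and D values below are illustrative, not the ATSC values:

```python
N, D = 4, 3   # illustrative stage and bank counts

def delay(i, j):
    # Symbol delay of cell T(i, j) relative to the cursor symbol.
    return (j - 1) * D + (i - 1)

# Reading stage-major, bank-minor enumerates every delay 0 .. N*D - 1
# exactly once, i.e. the equivalent N*D-symbol delay line.
order = [delay(i, j) for j in range(1, N + 1) for i in range(1, D + 1)]
print(order)

# Each stage j's cells across the D banks form one accurate delay line
# of D consecutive delays.
stage_lines = [[delay(i, j) for i in range(1, D + 1)] for j in range(1, N + 1)]
print(stage_lines)
```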
As stated above, each filter that is fed by an accurate delay line can be implemented by a transposed structure. The 1st part of the DFE 210 combined with the trellis decoder 212 (consisting of the first N×D inner intermediate decoded symbols) can be implemented in N transposed pipelines, each covering D taps and taking as input one of the N inner intermediate decoded symbols stored in the trace-back chain of the current decoding bank. The outputs from all transposed pipelines are summed together to give the ghost estimation from the first part of the DFE 210 combined with the trellis decoder, as shown in
It will be appreciated that the syntax recited above does not cover the “corner cases” that develop when non-data symbols are read in, including, for example, the symbols of the segment and field sync signals in a digital television signal. At least one means of handling such corner cases is taught in the concurrently filed U.S. patent application entitled “Synchronization Symbol Re-insertion for a Decision Feedback Equalizer Combined with a Trellis Decoder,” which is hereby incorporated in its entirety.
In the case of an ATSC receiver, there are 12 trellis banks, and typically there are 16 decoding stages. A DFE according to the present invention fitted to 16 decoding stages includes 16 transposed pipelines. The data symbol in cell T(1, 1) (as illustrated in
It will be appreciated that, in such a 16×12+M DFE structure, the number of taps in each transposed pipeline, K, is 12 in Equation 3 above. Consequently, the approximation of Equation 3 works extremely well, because the tap coefficients change very little over 12 symbol clock cycles (if they change at all). Consequently, there is little or no loss of performance caused by the approximation in the transposed pipeline calculation.
As discussed above, the timing-critical part of each transposed pipeline of the DFE of
Each pipe covers a group of consecutive taps, denoted by L in
In the preferred embodiment, the transposed filter is implemented as a fast-feedback reuse pipe structure, such as the one shown in
The multiplexer C in each reuse pipe switches that pipe's input between the output of the preceding reuse pipe and the output of the adder B in the current reuse pipe. For example, in pipe No. 0, the multiplexer C switches between S1(n), the output of pipe No. 1, and the sum from adder B, as described in further detail hereinbelow. The output of the multiplexer C is delayed by data register RL−1; data register RL−1's output is delayed by data register RL−2, and so on through data register R1. Within each reuse pipe, the L tap coefficients and the L data symbols are input consecutively to the multiplier A by time-domain multiplexing over L reuse clock cycles. The product of each tap coefficient and data symbol pair from the multiplier A is added to the output of data register R1 by adder B, and the resulting sum is latched into data register R0 in the first reuse clock cycle of a given symbol clock cycle, and into RL−1 in the other reuse clock cycles. Once the output of pipe No. 0 is updated, it is ready to be summed: altogether, the N values held by the R0 registers of the N transposed pipelines are summed together, and the feedback from the DFE 210 is thereby produced. During every symbol clock cycle, the multiplier A creates L delta values, and the L data values held by the L data registers are each updated once, absorbing the delta values as they pass through the adder B one by one.
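The paragraph above can be modeled behaviorally. The sketch below runs one L-tap reuse pipe for a single symbol clock cycle (L reuse clock cycles): the value latched into R0 after the first reuse cycle is the old R1 contents plus the first tap's product, and every delay-line register absorbs exactly one delta, matching the text's description. Register naming follows the document; the tap values, data, and starting register contents are illustrative:

```python
def reuse_pipe_symbol_cycle(regs, filter_input, coeffs, data):
    # One symbol clock cycle = L reuse clock cycles for an L-tap pipe.
    # regs = [R1, R2, ..., R(L-1)]: delay-line contents, R1 feeding adder B.
    L = len(coeffs)
    r0 = None
    for t in range(L):
        product = coeffs[t] * data[t]      # multiplier A (time-multiplexed)
        total = regs[0] + product          # adder B: R1 output + product
        if t == 0:
            r0 = total                     # final register R0 latches here only
            mux_out = filter_input         # mux C passes the preceding pipe's output
        else:
            mux_out = total                # mux C recirculates the adder output
        regs = regs[1:] + [mux_out]        # shift: mux -> R(L-1) -> ... -> R1
    return r0, regs

# Illustrative 3-tap pipe (L = 3) with arbitrary starting register contents.
coeffs, data = [0.5, -0.25, 0.125], [1.0, 1.0, 1.0]
r0, regs = reuse_pipe_symbol_cycle([2.0, 3.0], filter_input=7.0,
                                   coeffs=coeffs, data=data)
print(r0, regs)
```

Tracing by hand: R0 latches 2.0 + 0.5 = 2.5 in the first reuse cycle, and after all three cycles the delay line holds [3.0 − 0.25, 7.0 + 0.125], i.e. each register updated once with its own delta, with the pipe's output ready after only the first reuse clock cycle.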
Because the reuse pipe performs L operations reusing the same multiplier A and adder B, as would otherwise be performed by the L multipliers and adders in the pipes of the DFE shown in
In the preferred embodiment, the multiplier A and adder B have no reuse clock delay, so the data register R0 is updated in the 1st reuse clock cycle in a symbol clock period. During this reuse clock cycle, and only this reuse clock cycle, the multiplexer C control signal (shown as U in
It will be appreciated that when each symbol clock cycle (or L reuse clock cycles) is over, all data registers hold the updated data values of the L taps in each pipe, as required.
In certain alternative embodiments, the multipliers A and adders B have some reuse clock cycle delays. This causes the data register R0 to be updated in a later reuse clock cycle. In this way, a clock with a frequency greater than L times the symbol clock may be used, and the computation performed by the reuse pipes can be slowed down to reach the desired frequency.
Due to its unique structure, the transposed filter structure 400 completes its first calculation in only a single reuse clock cycle, making the feedback of each filter ready at the same time for the required subsequent computations outside the DFE. Meanwhile, the transposed DFE is able to use the whole symbol clock cycle, or L reuse clock cycles, to finish all computations required to update all the internal data registers.
Furthermore, the transposed filter structure 400 permits a reduction in the cost of hardware, because of the reduction by a factor of L in the number of adders and multipliers in the filter. Of course, this cost advantage is offset somewhat by the requirement to use higher frequency components, but even with L as low as 4, substantial savings are possible. It will be appreciated that L can advantageously be as high as 12, or even 16.
While the invention has been illustrated and described in detail in the drawings and foregoing description, the description is to be considered as illustrative and not restrictive in character. Only the preferred embodiments, and such alternative embodiments deemed helpful in further illuminating the preferred embodiment, have been shown and described. It will be appreciated that changes and modifications to the foregoing can be made without departing from the scope of the following claims.
This application claims priority from U.S. Provisional Patent Applications Nos. 60/370,380 filed Apr. 5, 2002 and 60/370,413 filed Apr. 5, 2002.
Published as U.S. Patent Application Publication No. 2004/0013191 A1, January 2004.