Claims
- 1. A calibration method for optimizing the transfer function of an analog-to-digital converter (ADC) employing a cascade of n stages to form a composite n-bit ADC transfer function, the ADC having an analog input and an n-bit digital output, each stage having an analog input, the method comprising: applying a signal having a symmetric or uniform probability density property to the ADC analog input; determining at least one error value for each stage resulting from application of said signal; and using the at least one error value for each stage to compensate each of said n stages during ADC operation.
- 2. The method of claim 1, wherein the n stages include a first stage which determines the most significant or coarse bit(s), a last stage which determines the least significant or finest resolution bit(s), and wherein any intermediate stages between the first stage and the last stage determine the intermediate bit(s).
- 3. The method of claim 2, wherein the step of determining at least one error value for each stage includes: examining statistics of bit transitions at each stage individually as a result of application of said signal.
- 4. The method of claim 3, wherein said examining statistics of bit transitions includes: computing bit transition probability density functions for each stage output.
- 5. The method of claim 4, wherein said examining statistics of bit transitions includes: computing the bit transition probability density functions for individual bits of each stage output and for logical combinations of said individual bits of each stage output to determine deviation from a desired ideal transfer function related to both gain and offset errors within and between the stages.
- 6. The method of claim 5, wherein said determining at least one error value for each stage includes using said deviation to determine said at least one error value.
- 7. The method of claim 1, wherein said step of using said at least one error value includes, for each stage: summing one of said at least one error values with said analog input to provide an error-compensated analog input to the stage.
- 8. The method of claim 1, wherein at least one of said n stages further has a reference analog input, and wherein said step of using said at least one error value for said at least one of said n stages includes: summing one of said at least one error values with said reference analog input to provide an error-compensated reference analog input.
- 9. An analog-to-digital converter (ADC), comprising: a cascade of N-stages, wherein a first stage determines the most significant or coarse bit(s) for the ADC, and a last stage determines the least significant or finest resolution bit(s) for the ADC, the cascade of N-stages forming a composite n-bit ADC transfer function; an ADC analog input port; an ADC digital output port; the first stage having a stage analog input connected to the ADC analog input port, and producing a first stage analog output and a first stage digital output; a calibration circuit for optimizing the ADC transfer function in response to application of a calibration signal having a symmetric or uniform probability density property to the ADC analog input, the calibration circuit for determining at least one error value for each stage resulting from application of said signal; and an error compensation circuit coupled to the calibration circuit for compensating each stage in response to said at least one error value for each stage.
- 10. The ADC of claim 9, wherein each stage includes a summer circuit for summing the stage analog input with an analog error value for the stage.
- 11. The ADC of claim 10, wherein at least one of the N-stages has a stage analog reference signal, and each stage further includes a reference summer circuit for summing the stage analog reference signal with an analog reference error signal.
- 12. The ADC of claim 9, wherein the calibration circuit is adapted to determine a bit transition probability density function for each of said stage digital outputs and for logical combinations of said digital outputs to determine deviation from a desired ideal transfer function related to both gain and offset errors within and between the stages.
- 13. A calibration method for optimizing the transfer function of an analog-to-digital converter (ADC) employing a cascade of n stages to form a composite n-bit ADC transfer function, the n stages including a first stage which determines the most significant or coarse bit(s), and a last stage which determines the least significant or finest resolution bit(s), the ADC having an analog input and an n-bit digital output, each stage having an analog input, the method comprising: applying a signal having a symmetric or uniform probability density property to the ADC analog input; determining at least one error value for each stage resulting from application of said signal, by examining statistics of bit transitions at each stage to compute bit transition probability density functions for both individual stage outputs and for logical combinations of the stage outputs to determine deviation from a desired transfer function related to both gain and offset errors within and between the stages; and using the at least one error value for each stage to compensate each of said n stages during ADC operation.
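The claims above describe a code-density (histogram-based) calibration: when the input has a uniform probability density, an ideal converter produces every output code with equal probability, so measured deviations in the bit transition statistics reveal gain and offset errors within and between the stages. The following Python sketch illustrates the idea for a hypothetical two-stage pipeline; the stage model, parameter names, and the straight-line estimator are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Hypothetical two-stage pipeline model. COARSE_BITS, FINE_BITS, the injected
# error parameters, and pipeline_adc() are illustrative assumptions only.
COARSE_BITS, FINE_BITS = 3, 5
N_CODES = 2 ** (COARSE_BITS + FINE_BITS)

def pipeline_adc(x, stage1_gain=1.0, stage1_offset=0.0):
    """Convert x in [0, 1) to a composite code, with inter-stage errors."""
    coarse = np.floor(x * 2**COARSE_BITS)
    residue = x * 2**COARSE_BITS - coarse            # ideal residue in [0, 1)
    residue = residue * stage1_gain + stage1_offset  # injected gain/offset error
    fine = np.clip(np.floor(residue * 2**FINE_BITS), 0, 2**FINE_BITS - 1)
    return (coarse.astype(int) << FINE_BITS) | fine.astype(int)

# Claim 1: apply a signal with a uniform probability density property.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2_000_000)
codes = pipeline_adc(x, stage1_gain=0.97, stage1_offset=0.01)

# Claims 3-4: examine statistics of bit transitions via a code-density
# histogram; an ideal converter hits every code with equal probability.
hist = np.bincount(codes, minlength=N_CODES).astype(float)

# Claim 5: group the fine-code statistics by coarse code (a logical
# combination of the two stage outputs).  Within a coarse segment the
# cumulative fine-code probability is linear in the transition index f:
#   C_f = (f / 2**FINE_BITS - offset) / gain,
# so a straight-line fit recovers both error values for the stage.
seg = hist.reshape(2**COARSE_BITS, 2**FINE_BITS)
seg_p = seg / seg.sum(axis=1, keepdims=True)
cum = np.cumsum(seg_p, axis=1)[:, :-1]   # C_f for f = 1 .. 2**FINE_BITS - 1
f = np.arange(1, 2**FINE_BITS)
gains, offsets = [], []
for c in range(2**COARSE_BITS):
    slope, intercept = np.polyfit(f, cum[c], 1)
    g = 1.0 / (slope * 2**FINE_BITS)
    gains.append(g)
    offsets.append(-intercept * g)

print(f"estimated stage-1 gain   ~= {np.mean(gains):.3f} (injected 0.97)")
print(f"estimated stage-1 offset ~= {np.mean(offsets):.3f} (injected 0.01)")

# Claims 7-8: during normal operation the (negated) error values would be
# summed into each stage's analog input or reference input, e.g.
#   residue_corrected = (residue - offset_est) / gain_est
```

In this sketch the per-segment cumulative fit separates the stage-1 residue gain and offset errors, mirroring claim 5's use of logical combinations of stage outputs; per the apparatus claims 10 and 11, a hardware realization would apply the resulting error values through analog summer circuits at the stage and reference inputs rather than in software.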
Government Interests
This invention was made with Government support under Contract No. N00019-98-C-0003 awarded by the Department of the Navy. The Government has certain rights in this invention.