Microprocessors and microcontrollers often need to compute various math functions, for example trigonometric functions such as sine and cosine. One common method is to use a Taylor series, in which a math function is approximated by a polynomial. In general, when a polynomial Pₙ(x) is used to approximate a function f(x), there is some inherent error when the polynomial is truncated at xⁿ, even for computation with infinite precision. Using processors with finite precision increases the error, as low-order bits are lost during computation. The error can be reduced by increasing the number of bits used to represent each quantity. However, in general, increasing the number of bits increases gate count, increases power consumption, and may increase computation time. There is an ongoing need for increased precision in computation of math functions, particularly for low-gate-count, ultra-low-power applications.
The following equations are Taylor series approximations of cos(x) and sin(x), where x is in radians:

cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + …
sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …
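For illustration, the truncated series may be implemented as follows. This is a minimal C sketch; the function names taylor_cos and taylor_sin are illustrative rather than part of the method described here, and Horner-form evaluation is an assumption of the sketch, not a requirement.

/* Truncated Taylor series about x = 0, evaluated in Horner form.
   The truncation error grows as |x| moves away from zero. */
static float taylor_cos(float x)
{
    float x2 = x * x;
    /* 1 - x^2/2! + x^4/4! - x^6/6! */
    return 1.0f + x2 * (-0.5f + x2 * (1.0f / 24.0f + x2 * (-1.0f / 720.0f)));
}

static float taylor_sin(float x)
{
    float x2 = x * x;
    /* x - x^3/3! + x^5/5! - x^7/7! */
    return x * (1.0f + x2 * (-1.0f / 6.0f + x2 * (1.0f / 120.0f + x2 * (-1.0f / 5040.0f))));
}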
Since the accuracy of a Taylor series is best when x = 0, the accuracy of a Taylor series for cos(x) can be improved by shifting the origin of the series, depending on the value of x, and using the appropriate trigonometric function for the shifted origin. For example, cos(x) = sin(π/2 − x), and for some ranges of x, the accuracy of a Taylor series for sin(π/2 − x) may be better than the accuracy of a Taylor series for cos(x). In particular, the accuracy of a Taylor series for cos(x) can be improved by using the Taylor series for sin(π/2 − x) when x is between π/4 and π/2. Likewise, the Taylor series accuracy for cos(x) can be improved by using the Taylor series for −sin(x − π/2) when x is between π/2 and 3π/4, and so forth. The table below details which trigonometric function (as implemented by a Taylor series approximation) has the least error for each range of x from x = 0 to x = 2π.

Index   Range of x            Function with least error
0       0 ≤ x < π/4           cos(x)
1       π/4 ≤ x < π/2         sin(π/2 − x)
2       π/2 ≤ x < 3π/4        −sin(x − π/2)
3       3π/4 ≤ x < π          −cos(π − x)
4       π ≤ x < 5π/4          −cos(x − π)
5       5π/4 ≤ x < 3π/2       −sin(3π/2 − x)
6       3π/2 ≤ x < 7π/4       sin(x − 3π/2)
7       7π/4 ≤ x < 2π         cos(2π − x)
Note that it is not necessary to switch functions at exact multiples of π/4. The Taylor series for sin(x) may be sufficiently accurate beyond x = π/4, and the boundaries and ranges in the table are merely examples of convenient boundaries.
In general, the value of x may be greater than 2π. If x is greater than 2π, then the operand may be brought within a range of zero to 2π by using (x modulo 2π) as the operand instead of x. Given an operand within the range of zero to 2π, the proper choice of which trigonometric function to use may be determined by using the integer value of operand/(π/4) as an index for the above table.
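The selection may be sketched in C as follows, reusing the illustrative taylor_cos and taylor_sin from above. The standard library's fmodf stands in for the range reduction here; the improved modulo function described below may be substituted for it.

#include <math.h>

/* cos(x) for arbitrary x: reduce the operand into [0, 2π), then use
   int(operand/(π/4)) as an index into the table above to select the
   series whose shifted argument falls within [0, π/4). */
static float cos_by_table(float x)
{
    const float PI_4   = 0.78539816f;   /* π/4 */
    const float PI_2   = 1.57079633f;   /* π/2 */
    const float PI_F   = 3.14159265f;   /* π   */
    const float TWO_PI = 6.28318531f;   /* 2π  */

    float op = fmodf(x, TWO_PI);        /* range reduction */
    if (op < 0.0f)
        op += TWO_PI;                   /* keep operand in [0, 2π) */

    switch ((int)(op / PI_4))           /* index into the table */
    {
    case 0:  return  taylor_cos(op);
    case 1:  return  taylor_sin(PI_2 - op);
    case 2:  return -taylor_sin(op - PI_2);
    case 3:  return -taylor_cos(PI_F - op);
    case 4:  return -taylor_cos(op - PI_F);
    case 5:  return -taylor_sin(3.0f * PI_2 - op);
    case 6:  return  taylor_sin(op - 3.0f * PI_2);
    default: return  taylor_cos(TWO_PI - op);
    }
}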
In conventional algorithms for the modulo function, (a mod n) is computed as, for example, (a − n*int(a/n)). Conventional algorithms for a modulo function may be inaccurate when the dividend is large because low-order bits are lost due to shifting and rounding. A more accurate modulo algorithm is provided below.
Assume that a processor needs to calculate (x mod 2π):
1. Let quotient = int(x/(2π))
2. Divide the digits of π into multiple parts as follows:
3. Let op1 = 3.141592
4. Let op2 = 6.535897*10^−7
5. Let op3 = 9.323846*10^−14
6. Compute (x mod 2π) = 2*(((x/2 − quotient*op1) − quotient*op2) − quotient*op3)
That is, the improved modulo function performs the subtraction in multiple steps that retain the low-order bits of the remainder. (The factor of two in step 6 reflects that x/2 is reduced against the parts of π rather than x against the parts of 2π; multiplying by two at the end is exact in binary floating point, so no accuracy is lost.) In particular, each of op1, op2, and op3 may be a single-precision number, and the improved modulo function enables a single-precision calculation that is more accurate than a conventional algorithm using double precision.
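A minimal C sketch of the improved modulo function, assuming the split of π from steps 3 through 5; the name mod_two_pi is illustrative, and floorf stands in for int() for non-negative x:

#include <math.h>

/* (x mod 2π) with π split into three single-precision parts so that
   low-order bits of the remainder survive each subtraction. */
static float mod_two_pi(float x)
{
    const float op1 = 3.141592f;      /* leading digits of π  */
    const float op2 = 6.535897e-7f;   /* middle digits of π   */
    const float op3 = 9.323846e-14f;  /* trailing digits of π */

    float quotient = floorf(x / 6.28318531f);  /* step 1: int(x/(2π)) */
    float r = x / 2.0f;          /* exact halving; reduce against π    */
    r -= quotient * op1;         /* cancel the large leading part      */
    r -= quotient * op2;         /* then subtract the smaller parts... */
    r -= quotient * op3;         /* ...so low-order bits are retained  */
    return 2.0f * r;             /* step 6: exact doubling             */
}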
The above discussion uses computation of cos(x) as an example, but the principle of using an alternate trigonometric function that is more accurate for a range of x is equally applicable to sin(x). For example, when x is between π/4 and π/2, a processor may compute cos(π/2−x) instead of sin(x).
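A corresponding sketch for this case, reusing the illustrative taylor_cos from above:

/* sin(x) for π/4 <= x < π/2, computed as cos(π/2 − x) so that the
   series argument stays within [0, π/4). */
static float sin_by_shift(float x)
{
    const float PI_2 = 1.57079633f;
    return taylor_cos(PI_2 - x);
}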
In summary, by using alternative trigonometric functions depending on the value of the operand, and by using the improved modulo function, the methods described above can achieve better accuracy with single-precision computation than conventional methods achieve with double-precision computation. As a result, a processor can have lower complexity and lower power consumption, and computation of trigonometric functions may be faster.