The invention relates generally to the field of computer systems, and more particularly to computational functions for graphics processor chips.
Graphics processor chips traditionally employ various mathematical functions implemented in hardware for fast drawing and rendering. Examples of these mathematical functions include the reciprocal function (“RCP”), the reciprocal square root function (“SQRT”), the exponential function (“EXP”), and the logarithmic function (“LOG”). In the prior art, these mathematical functions are implemented as separate circuitry blocks with different algorithms.
For example, in a three-cycle RCP implementation in the prior art, a floating point number x may be represented as a concatenation of a most significant bits (“MSB”) portion x0 and a least significant bits (“LSB”) portion x1, where x1=x−x0. The main work in computing the reciprocal of x is the calculation of the mantissa. In the prior art, the mantissa is typically calculated with a two-term function f(x)=a+b(x−x0), where a and b are data look-up tables. In a typical example, where more than 21-bit precision is required for a graphics processor, each of the data look-up tables a and b needs over 16,000 entries to achieve the required precision. This is based on a 14-bit x0 and data look-up tables with 2^14 entries each. The hardware implementation of such large data look-up tables results in large gate counts proportional to the size of the data look-up tables.

Graphics processor chips may include hardware implementations of several mathematical functions. In prior art examples, each of these mathematical functions requires a large gate count and is typically combined with other methods. It is a common technique in the prior art to implement each of these mathematical functions with separate logic circuitry and separate large data look-up tables. As high-speed and mobile applications demand higher integration and lower power consumption, there is a need for an efficient algorithm to implement these various mathematical functions.
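For illustration, the following is a minimal software sketch of the prior-art two-term scheme f(x)=a+b(x−x0), scaled down to 2^8 table entries instead of 2^14 so that it runs quickly. The table construction (a first-order Taylor expansion per segment) and the entry count are assumptions made for this sketch, not details of any particular prior-art device.

```python
# Scaled-down sketch of the prior-art two-term lookup f(x) = a + b*(x - x0).
# The text above describes 2^14-entry tables; 2^8 entries are used here so the
# example runs quickly. Table contents are illustrative assumptions.

N_BITS = 8                  # bits of mantissa used to index the tables
N = 1 << N_BITS             # number of table entries
SEG = 1.0 / N               # segment width over the mantissa range [1, 2)

# Build tables a[i] = 1/x0 and b[i] = d(1/x)/dx at x0 (first-order Taylor).
a_tbl = [1.0 / (1.0 + i * SEG) for i in range(N)]
b_tbl = [-1.0 / (1.0 + i * SEG) ** 2 for i in range(N)]

def rcp_mantissa(m):
    """Approximate 1/m for m in [1, 2) with the two-term table scheme."""
    i = min(int((m - 1.0) * N), N - 1)   # index derived from the MSBs of m
    x0 = 1.0 + i * SEG                   # start of the segment
    return a_tbl[i] + b_tbl[i] * (m - x0)

if __name__ == "__main__":
    for m in (1.0, 1.25, 1.5, 1.999):
        approx = rcp_mantissa(m)
        print(m, approx, abs(approx - 1.0 / m))
```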
In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the invention. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
The invention has been developed in response to the present state of the art and, in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available apparatus and methods.
Embodiments in accordance with the present invention may be embodied as an apparatus, method, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. In selected embodiments, a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The present invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Computing device 100 includes one or more processor(s) 102, one or more memory device(s) 104, one or more interface(s) 106, one or more mass storage device(s) 108, one or more Input/Output (I/O) device(s) 110, and a display device 130 all of which are coupled to a bus 112. Processor(s) 102 include one or more processors or controllers that execute instructions stored in memory device(s) 104 and/or mass storage device(s) 108. Processor(s) 102 may also include various types of computer-readable media, such as cache memory.
Memory device(s) 104 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 114) and/or nonvolatile memory (e.g., read-only memory (ROM) 116). Memory device(s) 104 may also include rewritable ROM, such as Flash memory.
Mass storage device(s) 108 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in
I/O device(s) 110 include various devices that allow data and/or other information to be input to or retrieved from computing device 100. Example I/O device(s) 110 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Display device 130 includes any type of device capable of displaying information to one or more users of computing device 100. Examples of display device 130 include a monitor, display terminal, video projection device, and the like.
Interface(s) 106 include various interfaces that allow computing device 100 to interact with other systems, devices, or computing environments. Example interface(s) 106 include any number of different network interfaces 120, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 118 and peripheral device interface 122. The interface(s) 106 may also include one or more user interface elements 118. The interface(s) 106 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like.
Bus 112 allows processor(s) 102, memory device(s) 104, interface(s) 106, mass storage device(s) 108, and I/O device(s) 110 to communicate with one another, as well as other devices or components coupled to bus 112. Bus 112 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 100, and are executed by processor(s) 102. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
The above-described approach provides a unified method to compute the above-identified transcendental functions with one unified hardware pipeline operating on floating point values, such as for a vertex shader and pixel shader in a mobile graphics chip. This technique may be based on computing the following: F(x)=1/x; F(x)=1/x^(1/2); F(x)=2^x; and F(x)=LOG 2(x).
These functions are implemented with a unified hardware pipe that performs the following function: F(x)=a+b(x−x0)+c(x−x0)(x−x1) (hereinafter “the interpolation function”). The approximation may be done in 64, 128, or some other number of segments, where x0 is the starting value of a segment and x1 is the ending value of a segment. X0 is the MSB (most significant bits) portion of x, and (x−x0) is the LSB (least significant bits) portion of x. The value of x lies between x0 and x1 (x0<=x<x1). The values a, b, and c come from three separate tables, such as tables embedded in hardware.
For EXP, a floating point to fixed point number conversion stage is positioned before the unified hardware pipe. For LOG, there is a fixed point to floating point number conversion after the unified hardware pipe. The hardware flow and function are the same for each of the four functions, except that the table chosen for each function is different. An input opcode selects the function. A low-latency, efficient RCP (reciprocal) implementation based on this approach can be reduced to three cycles.
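The following is a minimal software model of the unified interpolation pipeline under stated assumptions: it builds 64-segment (a, b, c) tables, one per opcode, by fitting a quadratic through each segment's endpoints and midpoint, which is one plausible way to populate the tables; the actual table contents, precision, and fixed point details of a hardware implementation may differ.

```python
# Illustrative software model of the unified pipeline F(x) = a + b*(x - x0)
# + c*(x - x0)*(x - x1) with 64 segments and one (a, b, c) table per opcode.
# The table construction below (quadratic through x0, the midpoint, and x1)
# is an assumption of this sketch; the text only says a, b, c come from tables.
import math

N_SEG = 64

def build_tables(f, lo, hi):
    """Return per-segment (x0, x1, a, b, c) rows approximating f on [lo, hi)."""
    rows = []
    w = (hi - lo) / N_SEG
    for i in range(N_SEG):
        x0, x1 = lo + i * w, lo + (i + 1) * w
        xm = 0.5 * (x0 + x1)
        a = f(x0)
        b = (f(x1) - f(x0)) / (x1 - x0)                     # f[x0, x1]
        c = ((f(xm) - f(x1)) / (xm - x1) - b) / (xm - x0)   # f[x0, x1, xm]
        rows.append((x0, x1, a, b, c))
    return rows

# One table per opcode; EXP uses the fractional input in [0, 1),
# RCP/SQRT/LOG use the mantissa range [1, 2).
TABLES = {
    "RCP":  build_tables(lambda x: 1.0 / x, 1.0, 2.0),
    "SQRT": build_tables(lambda x: 1.0 / math.sqrt(x), 1.0, 2.0),
    "EXP":  build_tables(lambda x: 2.0 ** x, 0.0, 1.0),
    "LOG":  build_tables(lambda x: math.log2(x), 1.0, 2.0),
}

def unified_pipe(opcode, x):
    """Same interpolation datapath for every opcode; only the table differs."""
    rows = TABLES[opcode]
    lo, hi = rows[0][0], rows[-1][1]
    i = min(int((x - lo) / (hi - lo) * N_SEG), N_SEG - 1)
    x0, x1, a, b, c = rows[i]
    return a + b * (x - x0) + c * (x - x0) * (x - x1)

if __name__ == "__main__":
    print(unified_pipe("RCP", 1.7), 1.0 / 1.7)
    print(unified_pipe("LOG", 1.3), math.log2(1.3))
```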
Referring to
When the input argument x is close to 1.0, Log 2(x) is very small. Instead of approximating LOG 2(x) directly, the pipeline may approximate F(x)=LOG 2(x)/(x−1). Accordingly, for LOG 2, output2 may be set equal to x−1, so that LOG 2(x)=F(x)*output2, where output2 is equal to (x−1) and F(x) is an approximation of LOG 2(x)/(x−1) computed using tables and interpolation within the hardware pipeline as described herein. The values of x for which this modification is performed may be selected based on the floating point representation used. For example, in some embodiments, when x is in the range [0.75, 1.5), F(x)=LOG 2(x)/(x−1) and output2=(x−1). Otherwise, F(x)=LOG 2(x) and output2=1.0f.
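A short sketch of this input conditioning follows, with math.log2 standing in for the table-based hardware approximation of F(x); the range test and the output2 factor mirror the description above, while the stand-in function itself is an assumption of this sketch.

```python
# Sketch of the LOG2 input conditioning: for x in [0.75, 1.5) the pipeline
# approximates F(x) = log2(x)/(x - 1) and multiplies by output2 = (x - 1);
# otherwise it approximates log2(x) directly. math.log2 stands in for the
# table-based hardware approximation.
import math

def log2_conditioned(x):
    if 0.75 <= x < 1.5:
        if x == 1.0:
            return 0.0                   # exact; avoids 0/0 in the stand-in
        f = math.log2(x) / (x - 1.0)     # well-behaved near x = 1
        output2 = x - 1.0
    else:
        f = math.log2(x)
        output2 = 1.0
    return f * output2

if __name__ == "__main__":
    for x in (1.0000001, 0.9, 1.4, 8.0):
        print(x, log2_conditioned(x), math.log2(x))
```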
For DIV (e.g. y/x), using the relationship y/x=y*(1/x)=y*Rcp(x), there may be a 1/x underflow issue: when |x|>2^126, 1/x=0 in a 32-bit floating point representation. Underflow at |x|>2
X^Y=2^(Y*Log 2(X)) (1.1)
This is particularly useful since 2^T and Y*Log 2(X) are much simpler to implement. However, where high precision is needed (e.g., a relative error of 16 ULP (units in the last place) in OpenCL), this approach is problematic.
If Log 2(x) has a relative error ε, which is typically on the order of ±2^−24, then when calculating t=y*Log 2(x) one obtains t′=y*Log 2(x)*(1+ε)=t+t*ε. The final calculation will be 2^t′=2^t*2^(t*ε)≈2^t*(1+t*ε*ln 2.0). The relative error therefore becomes 0.69314*t*ε. For a single precision calculation, t may be in the range (−126, 127) in order to keep 2^t in the single precision range. Accordingly, 0.69314*t*ε may be as large as about 88.029*ε. This means the relative error may be increased by a factor of up to about 88.
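The amplification factor above can be checked with a few lines of arithmetic:

```python
# Quick check of the arithmetic above: a relative error eps in log2(x) is
# amplified to roughly |t| * ln(2) * eps in 2^t, and |t| can reach ~127
# before 2^t leaves the single-precision range.
import math

eps = 2.0 ** -24
t_max = 127.0
print(t_max * math.log(2.0))        # ~88.03: amplification factor
print(t_max * math.log(2.0) * eps)  # worst-case relative error, ~88 * 2^-24
```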
In prior approaches to implementing Log 2( ), the logic outputs values {M1, M2}, where M1 and M2 have single precision. The final value of Log 2(x) is then calculated by computing M1*M2 with single precision, which results in a relative error of ±2^−24. Even if no additional error is introduced in subsequent calculations, the final relative error will still be on the order of ±88.0*2^−24, which is much larger than the 16-ULP requirement.
Accordingly, as shown in
As shown in
The values of KH and KL may be input to a Dp2 stage 604 along with Y′. As outlined below, Y′ is a version of the input argument Y that may be modified to deal with problematic corner cases. In an alternative embodiment, Y′ is simply the same as the input argument Y. The Dp2 function calculates Y′*KH+Y′*KL={TH, TL}, where TH and TL are two floating point values (high 24 bit precision+low 24 bit precision).
Note that the Dp2 stages 602 and 604 may be the same hardware component that is used at both stages of the illustrated process. In some embodiments, the Dp2 function is implemented by a portion of four input dot product logic (Dp4).
In prior hardware implementations, Dp2 (x1*y1+x2*y2) shares much of the same logic as Fma (fused multiply-add, i.e., a*b+c). Since Fma needs to keep all multiply bits in order to handle bit-cancellation cases, the intermediate result has at least 48 bits of precision. However, conventional Dp2 rounds to a 24-bit mantissa (single float precision) and outputs one float value. Accordingly, the Dp2 stages 602, 604 may include added logic such that they output two floating point values, i.e., the full 48 bits of precision internal to the Dp2 logic prior to rounding.
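As a software stand-in for such a two-output Dp2, the classic error-free transformations (Dekker's two-product and Knuth's two-sum) can recover the information that ordinary rounding would discard. This emulation is an assumption made for illustration; the hardware described above can simply keep its wide intermediate result instead.

```python
# Software model of a Dp2 stage that returns two floats {hi, lo} instead of a
# single rounded result. Real hardware can keep the wide intermediate before
# rounding; here Dekker/Knuth error-free transformations recover the same
# information with ordinary float arithmetic. Illustrative only.

SPLIT = 134217729.0  # 2^27 + 1, Dekker splitting constant for 53-bit doubles

def _split(a):
    """Dekker split: a == hi + lo, with hi holding the upper half of the bits."""
    t = SPLIT * a
    hi = t - (t - a)
    return hi, a - hi

def two_prod(a, b):
    """Return (p, e) with p = round(a*b) and a*b = p + e exactly."""
    p = a * b
    ah, al = _split(a)
    bh, bl = _split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def two_sum(a, b):
    """Return (s, e) with s = round(a+b) and a + b = s + e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dp2_two_float(x1, y1, x2, y2):
    """x1*y1 + x2*y2 returned as an unevaluated (hi, lo) pair."""
    p1, e1 = two_prod(x1, y1)
    p2, e2 = two_prod(x2, y2)
    s, e = two_sum(p1, p2)
    lo = e + e1 + e2      # small terms; their own rounding error is negligible here
    return s, lo

if __name__ == "__main__":
    hi, lo = dp2_two_float(1.0 / 3.0, 3.0, -1.0, 1.0)  # heavy cancellation case
    print(hi, lo, hi + lo)
```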
The values TH and TL may be input as the input argument T to a transcendental stage 606 implementing 2^T that takes an extended precision input, i.e., the 48 bits of precision of {TH, TL}.
In the illustrated embodiment, TH and TL are modified prior to input to the transcendental stage 606. For example, float-to-fix logic 608 may be programmed to separate TH into an integer part (TH_Int) and a fractional part (TH_Frac). TL will only have a fractional part. Accordingly, the process may include summing TH_Frac and TL to obtain T_All_Frac. TH_Int and T_All_Frac may then be input to the transcendental stage 606, which calculates 2^(TH_Int+T_All_Frac). The transcendental stage 606 may calculate this value using any approach for calculating 2^T known in the art.
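A sketch of this float-to-fix split and extended-precision exponentiation follows; the built-in power operator stands in for the hardware transcendental stage 606, and the example {TH, TL} values are made up for illustration.

```python
# Sketch of the modification described above: TH is split into an integer part
# and a fractional part, TL is folded into the fraction, and
# 2^(TH_Int + T_All_Frac) is evaluated. The built-in power operator stands in
# for the hardware transcendental stage 606.
import math

def exp2_extended(th, tl):
    th_int = math.floor(th)        # float-to-fix: integer part of TH
    th_frac = th - th_int          # fractional part of TH
    t_all_frac = th_frac + tl      # fold TL into the fraction
    # 2^integer is an exact exponent adjustment; only the fractional part needs
    # the table-based transcendental evaluation in hardware.
    return math.ldexp(2.0 ** t_all_frac, int(th_int))

if __name__ == "__main__":
    th, tl = 10.25, 3e-9           # an illustrative {TH, TL} pair from the Dp2 stage
    print(exp2_extended(th, tl), 2.0 ** (th + tl))
```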
Referring to
TranscendentalForLog(X)={n, M1H, M1L, M2} (1.2)
The values of n, M1H, M1L, and M2 may be calculated by the transcendental function 700 as described below.
Positive X may be written in the float format of (1.3).
x=2^k*(1+s), (1.3)
where k is an integer and 0≤s<1.0.
If one defines p according to (1.4),
p = s≥0.5 ? (1+s)/2 : (1+s), (1.4)
one can write (1.5).
x=2^n*p, with 0.75≤p<1.5, (1.5)
where n is determined according to (1.6).
n = s≥0.5 ? k+1 : k (1.6)
In this case one may calculate M2=p−1 and {M1H, M1L}=Log 2(p)/(p−1), using the same formulation as the LOG 2 handling described above. Log 2( ) in this embodiment may be implemented using the table for 0.75≤x<1.5.
The value of Log 2(x) may then be calculated according to (1.7),
Log 2(x)=n*1.0+M1H*M2+M1L*M2, (1.7)
which can be processed with a three-input dot product function 702 (Dp3, x1*y1+x2*y2+x3*y3). In some embodiments, both Dp2 and Dp3 are implemented using the same logical circuit design, and possibly the same physical circuit, as Dp4.
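The decomposition (1.3) through (1.7) can be modeled in software as follows; math.log2 stands in for the table lookup, and the hi/lo split via single-precision rounding merely mimics the two-float {M1H, M1L} output rather than reproducing the hardware's exact precision, so this is a sketch under those assumptions.

```python
# Software sketch of the TranscendentalForLog decomposition (1.3)-(1.7).
# math.log2 stands in for the extended-precision table lookup; the hi/lo split
# via single-precision rounding only mimics the two-float output.
import math
import struct

def split_hi_lo(v):
    """Round v to single precision for the high half; the remainder is the low half."""
    hi = struct.unpack('f', struct.pack('f', v))[0]
    return hi, v - hi

def transcendental_for_log(x):
    """For positive x, return (n, M1H, M1L, M2) with log2(x) = n*1.0 + M1H*M2 + M1L*M2."""
    m, e = math.frexp(x)                 # x = m * 2^e with 0.5 <= m < 1
    k, s = e - 1, 2.0 * m - 1.0          # x = 2^k * (1 + s), 0 <= s < 1       (1.3)
    p = (1.0 + s) / 2.0 if s >= 0.5 else (1.0 + s)   # 0.75 <= p < 1.5         (1.4)
    n = k + 1 if s >= 0.5 else k                     #                         (1.6)
    if p == 1.0:                         # x is an exact power of two
        return float(n), 0.0, 0.0, 0.0
    m2 = p - 1.0
    m1h, m1l = split_hi_lo(math.log2(p) / (p - 1.0))  # table range [0.75, 1.5)
    return float(n), m1h, m1l, m2

def dp3(x1, y1, x2, y2, x3, y3):
    return x1 * y1 + x2 * y2 + x3 * y3

if __name__ == "__main__":
    x = 12.7
    n, m1h, m1l, m2 = transcendental_for_log(x)
    print(dp3(n, 1.0, m1h, m2, m1l, m2), math.log2(x))   # reconstruct via (1.7)
```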
Since (1.2) uses only one log table with the approach described above, it saves area on a chip. This benefit is achieved at the expense of the additional floating point normalization expressed in (1.3) to (1.5). However, (1.4) and (1.6) are very easy to calculate. Note that (1.2) has a four-value floating point output. Accordingly, it is readily implemented using a vector-4 GPU.
In OpenCL (the Open Computing Language standard), there are some corner cases, such as those in (1.8).
(−3)^3=−27, (−3)^2=9, (−3)^0.33333333=NaN (1.8)
If one exactly follows (1.1), there will be errors, since Log 2(−3)=NaN. Accordingly, there may be a need to pre-process the input arguments X and Y to handle the corner cases. In particular, as shown in
The values of X and Y may be processed according to stages 600-608 as described above. As shown in
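The sketch below illustrates the kind of argument pre-processing involved, following the usual pow() sign rules for a negative base (odd integer exponent gives a negative result, non-integer exponent gives NaN). The exact pre-processing stages are shown in the referenced figures and are not spelled out in this text, so the helper names and the simplified zero handling here are assumptions of this sketch.

```python
# Hedged sketch of pow(x, y) = x^y corner-case pre-processing for negative x.
# Negative base with an odd integer exponent yields a negative result; a
# non-integer exponent yields NaN. Zero/infinity special cases are simplified.
import math

def preprocess_pow_args(x, y):
    """Return (x_prime, sign) so that x^y = sign * x_prime^y with x_prime >= 0."""
    if x >= 0.0:
        return x, 1.0
    if y != math.floor(y):
        return math.nan, 1.0            # negative base, non-integer exponent
    sign = -1.0 if math.fmod(y, 2.0) != 0.0 else 1.0
    return -x, sign                     # compute with |x|, restore sign afterwards

def pow_via_log2(x, y):
    if y == 0.0:
        return 1.0                                   # x^0 == 1 for pow()
    x_p, sign = preprocess_pow_args(x, y)
    if math.isnan(x_p):
        return math.nan
    if x_p == 0.0:
        return sign * 0.0 if y > 0.0 else math.inf   # simplified zero handling
    return sign * 2.0 ** (y * math.log2(x_p))        # (1.1) on the preprocessed base

if __name__ == "__main__":
    print(pow_via_log2(-3.0, 3.0), pow_via_log2(-3.0, 2.0), pow_via_log2(-3.0, 0.33333333))
```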
For the OpenCL calculation of pow(x, y)=x^y, the process, from start to finish, may include:
The foregoing described embodiments of the invention are provided as illustrations and descriptions. They are not intended to limit the invention to the precise form described. In particular, it is contemplated that the functional implementation of the invention described herein may be implemented equivalently in hardware, software, firmware, and/or other available functional components or building blocks, and that networks may be wired, wireless, or a combination of wired and wireless. Other variations and embodiments are possible in light of the above teachings, and it is thus intended that the scope of the invention not be limited by this Detailed Description, but rather by the claims that follow.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. The scope of the invention is, therefore, indicated by the appended claims, rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application is related to U.S. application Ser. No. 14/486,891, filed Sep. 15, 2014, and entitled “Systems and Methods for Computing Mathematical Functions”, which is hereby incorporated herein by reference.