The present disclosure relates generally to methods and apparatus for data representation to improve the computational efficiency of artificial intelligence (AI) accelerators.
Deep neural networks, such as convolutional neural networks (CNNs), can be used for image and video recognition and classification and for other artificial intelligence (AI) applications such as recommender engines, natural language processing and medical image analysis. The neural networks used for these applications have grown in computational complexity, requiring increased power consumption for training and inference. In particular, running neural networks on mobile or embedded platforms is challenging due to their hardware and power constraints. Edge devices (devices that allow local devices or networks, which interface with consumer or commercial products (e.g., robots, drones, surveillance equipment, Augmented Reality (AR) products, Virtual Reality (VR) products, self-driving vehicles, smartphones, wearable devices and the like), to connect to the edge of the Internet) have limitations imposed by their size and available power. As such, there is a need for solutions that enable more efficient operation of neural networks on such edge devices. Some of these efforts are directed to hardware design efficiency. Other efforts are directed to increasing the efficiency of machine learning models. However, given the increasing computational complexity of neural networks, improved efficiency in hardware design and modelling alone may not provide sufficient solutions.
As such, there is an increasing interest in the use of methods of data representation to improve the computational efficiency in the operations run by neural networks. For example, where it is practical to accept some loss in precision in exchange for a gain in efficiency, low-precision arithmetic and/or compression can be used. However, some low-precision computational methods yield insignificant improvements in computational efficiency and/or poor or even invalid results in training and inference. There is a need for methods and apparatus of data representation that can be used for improving computational efficiency, including for example the inner product computations used in convolutional neural networks, while still achieving acceptable outputs.
In general, the present specification describes methods and apparatus incorporating the use of data representation based on multi-dimensional logarithmic number systems for hardware acceleration of inner product computations in neural networks such as convolutional neural networks (CNNs).
One aspect of the invention provides a method for implementing training and inference of deep neural networks. The method includes: receiving a set of training data; representing the set of training data in a multidimensional logarithmic number system (MDLNS), the MDLNS representation using a first exponent associated with a first base and a second exponent associated with a second base; conducting deep neural network training on the set of training data, using a predetermined first base and a predetermined second base, to determine a set of neural network weight coefficients; based on the determined set of neural network weight coefficients and for the predetermined first base, optimizing the second base for multi-dimensional logarithmic data representation; and conducting deep neural network inference on a set of network inputs to obtain a set of network outputs, using the optimized multi-dimensional logarithmic data representation.
In some embodiments, optimizing the second base for multi-dimensional logarithmic data representation comprises determining an optimal second base for which the mean square error (MSE) is minimized. A mixed-integer global optimization procedure may be implemented to optimize the second base and the possible range of the second exponents associated therewith.
In some embodiments, the predetermined first base is 2. In some embodiments, the predetermined second base is 2^ω, where ω=(1+√5)/2. The MDLNS may, optionally, use one or more additional exponents (e.g., third exponent, fourth exponent, etc.), each of which is associated with a corresponding one or more additional bases (e.g., third base, fourth base, etc.). In some embodiments, conducting deep neural network training on the set of training data may involve using a predetermined third base, with the predetermined second base as 2^α and the predetermined third base as 2^(α²), where α=2cos(2π/7).
In some embodiments, at least one of the one or more additional bases are optimized for the multi-dimensional logarithmic data representation.
In some embodiments, the exponents of the bases are integer values. In some embodiments, the first exponent and the second exponent are opposite in polarity. In some embodiments, the first exponent and the second exponent are fractional values. In some embodiments, the predetermined second base is selected from the group consisting of: √2, …
Another aspect of the invention provides hardware accelerators that may be employed on edge devices to perform the methods described herein. Each such hardware accelerator includes: a multidimensional logarithmic number system (MDLNS) converter connected to a memory of the computing device and a cache of the hardware accelerator; processing units arranged in an array of a first number of rows and a second number of columns to collectively form a processing core; and a microcontroller connected to the processing core and the MDLNS converter. The MDLNS converter may be configured to create an MDLNS representation of a set of data received from the memory of the computing device and to store the MDLNS representation in the cache of the hardware accelerator. The MDLNS representation may use a first exponent associated with a binary base and a second exponent associated with a non-binary base.
In some embodiments, each processing unit of the hardware accelerator comprises a first adder operating in the binary base and a second adder operating in the non-binary base. The processing unit may optionally comprise an aggregate adder connected to the first adder and the second adder, the aggregate adder having aggregation channels that correspond to unique combinations of exponent pairs (N, M) defined by the number of bits of the first exponent and the number of bits of the second exponent. The aggregate adder may optionally include 2^(N+M) up-counters operating in parallel for aggregating a unique (N, M) combination of exponents.
In some embodiments, the processing units of the processing core are configured as a systolic array of matrix-vector multiply units. In some embodiments, the second base is 2^ω, where ω=(1+√5)/2. In some embodiments, the hardware accelerator comprises multiple processing tiles that are connected to other processing tiles by way of a network on chip. Each of the processing tiles may include a number of the processing cores described above.
Hardware accelerators described herein may be used in computing devices, such as edge computing devices, to conduct deep neural network inference, incorporating the use of logarithmic data representation for increased computational efficiency and reduced power consumption.
Additional aspects of the present invention will be apparent in view of the description which follows.
Features and advantages of the embodiments of the present invention will become apparent from the following detailed description, taken with reference to the appended drawings in which:
The description, which follows, and the embodiments described therein, are provided by way of illustration of examples of particular embodiments of the principles of the present invention. These examples are provided for the purposes of explanation, and not limitation, of those principles and of the invention.
Described herein are methods and apparatus incorporating the use of data representation based on multi-dimensional logarithmic number systems for hardware acceleration of inner product computations in neural networks such as convolutional neural networks (CNNs). Applications for these methods and apparatus include neural network training and inference calculations. However, any device that requires low-power, low-area and fast inner product computational units can benefit from the methods and apparatus described herein. Embodiments of the invention can be incorporated into accelerators that can be used for computer vision, artificial intelligence (AI) applications, image compression, voice recognition, machine learning, or other applications in edge devices (e.g., robots, drones, surveillance equipment, Augmented Reality (AR) products, Virtual Reality (VR) products, self-driving vehicles, smartphones, wearable devices and the like).
The classical, one-dimensional logarithmic number system (LNS) has various uses in low-power, low-precision digital signal and image processing applications. It is used in the area of digital filtering (such as finite impulse response (FIR), infinite impulse response (IIR) and adaptive filters) as well as for implementing signal transforms. The mechanical version of the LNS is the well-known slide-rule.
LNS can be summarized as follows: it converts multiplications and divisions into additions and subtractions. Additions and subtractions are implemented through look-up tables (LUT) and extra additions. One critical drawback is the size of the LUTs, which tend to grow exponentially with the dynamic range of the computations. As a result, in general it is practical to use LNS only in applications requiring low-precision (e.g. 8-16 bits dynamic range). The removal of multipliers from the overall inner product architecture typically results in low-power implementation, which is a desirable feature for mobile applications.
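By way of illustration only, the following minimal Python sketch (not any hardware described herein) shows how LNS reduces multiplication to exponent addition, and why LNS addition needs a LUT-style auxiliary function:

```python
import math

# Minimal LNS sketch (illustrative only): a positive number x is stored as
# its base-2 logarithm, so multiplication/division become addition/subtraction.
def lns_encode(x: float) -> float:
    return math.log2(x)

def lns_decode(e: float) -> float:
    return 2.0 ** e

def lns_mul(a: float, b: float) -> float:
    return a + b            # log2(x*y) = log2(x) + log2(y)

def lns_add(a: float, b: float) -> float:
    # Addition needs the "Gaussian logarithm" phi(d) = log2(1 + 2^-d),
    # which hardware would read from a LUT; here it is computed directly.
    hi, lo = max(a, b), min(a, b)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

x, y = lns_encode(6.0), lns_encode(7.0)
print(lns_decode(lns_mul(x, y)))    # ~42.0
print(lns_decode(lns_add(x, y)))    # ~13.0
```

In hardware, the function inside lns_add is exactly what the LUTs store, which is why their size grows with the dynamic range.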
Low-precision computation is advantageous to speed up calculations associated with applications such as machine learning (e.g., deep learning, continual learning with or without weights updated during training time and after deployment, reinforcement learning, etc.), artificial intelligence, 3D imaging, AR/VR simulations and the like. Investigations of logarithmic representations as an alternative to standard floating-point representation have yielded promising results for these applications, including a massive reduction of power consumption. By contrast, the use of floating-point representation provides an unnecessarily large dynamic range for the computations in such applications, and therefore leads to much slower calculations, and higher power consumption.
The Multidimensional Logarithmic Number System (MDLNS) can be viewed as a two-dimensional extension of LNS. In MDLNS, a real number, x, can be encoded as: x = s*D1^a*D2^b, where (D1, D2) is a set of two multiplicatively independent bases (which could be real or even complex) and (a, b) is a set of two integers, with s=+1 if x is positive and s=−1 if x is negative. A simple geometric interpretation for this representation is the two-dimensional slide rule 10, as seen in
Some of the main differences between LNS and MDLNS are outlined in Table 1 below:
It should be noted that LNS is a special case of MDLNS in which the second base, D2, is selected as the number one. In applications where LNS provides attractive practical performance, MDLNS can be adapted to provide the same advantages as LNS.
As seen in Table 1, MDLNS includes features that have no analogue in LNS. As such, MDLNS can be exploited by making more efficient use of such features to provide computational advantages over LNS. For example, MDLNS offers exponentially faster conversion from logarithmic to binary format. In LNS, the conversion from logarithmic to binary format is accomplished either by the use of large LUTs or by a special circuit that implements the function f(x)=2^x. In MDLNS, the conversion can be performed considerably faster if the powers of the second base, D2, are stored in a floating-point manner (e.g., D2^exp = 1.ddddd*2^eeeee) for every possible value of the exponent exp.
One difference between MDLNS and classical LNS (or floating-point arithmetic) is the existence of non-trivial approximations of unity (e.g., numbers of the form 2^exp1*D2^exp2 that are very close to 1). The examples below illustrate how these approximations of unity can be used advantageously to guard against computational overflow.
In one example, the bases of a particular MDLNS are D1=2 and D2=3. In this example, good approximations of unity include numbers such as: 2^8*3^−5, 2^19*3^−12, 2^84*3^−53, etc. Illustratively, 2^a*3^b (with a²+b²>0, a and b integers) can be made arbitrarily close to one if no restrictions are imposed on the bit-size of the pair of exponents (a, b), since 2 and 3 are multiplicatively independent (i.e., the number log2 3 is irrational).
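The following illustrative Python sketch finds such near-unity pairs by scanning second-base exponents (the search bound of 60 is an arbitrary assumption):

```python
import math

# Brute-force search for non-trivial approximations of unity of the form
# 2^a * 3^b; a ~ n*log2(3) paired with b = -n makes the product close to 1.
log2_3 = math.log2(3)
best = float("inf")
for n in range(1, 61):
    a, b = round(n * log2_3), -n
    err = abs(a + b * log2_3)          # |log2(2^a * 3^b)|
    if err < best:
        best = err
        print(f"2^{a} * 3^{b} = {2.0 ** (a + b * log2_3):.9f}")
# Prints successively better pairs, including (8, -5), (19, -12), (84, -53).
```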
As another example, consider computing x², where x=(180, −115), using 9-bit signed fixed-point binary arithmetic for the exponents. The real value of x is approximately 0.207231. If x² is computed directly, the result, (360, −230), produces an overflow error in 9-bit signed fixed-point arithmetic. However, the good approximations of unity provided by MDLNS offer an optimization option that mitigates the overflow problem. Notably, this optimization option has no analogue in the one-dimensional logarithmic number system (1DLNS) or in floating-point binary arithmetic. If x is multiplied by the number encoded in the two-dimensional logarithmic number system (2DLNS) as (−84, 53) (i.e., a number that is very close to unity), then the error associated with this scaling is very small and the exponents can be reduced to (96, −62). This allows the squaring operation to be safely performed within the 9-bit fixed-point dynamic range, yielding the final answer of (192, −124). Illustratively, reducing the exponents to a smaller, overflow-free range offers significant computational flexibility.
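This rescaling can be checked numerically; the sketch below uses floating-point arithmetic for verification only:

```python
# Numerical check of the overflow-mitigation example (bases 2 and 3,
# 9-bit signed exponents in [-256, 255]).
def value(a: int, b: int) -> float:
    return (2.0 ** a) * (3.0 ** b)

x = (180, -115)                          # represents ~0.207231
u = (-84, 53)                            # near-unity scaling factor, ~1.00211
x_scaled = (x[0] + u[0], x[1] + u[1])    # (96, -62): reduced exponents
sq = (2 * x_scaled[0], 2 * x_scaled[1])  # (192, -124): fits in 9 signed bits
print(value(*x), value(*x_scaled))       # ~0.20723 vs ~0.20767 (small error)
print(value(*sq), value(*x) ** 2)        # ~0.04313 vs ~0.04294
```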
Standard computational procedures like multiplication can also be performed within 2DLNS. As an example, 41 can be multiplied by 109 using 2DLNS with bases (D1, D2) = (2, 2.0228). With this selection of bases, the number 41 is encoded as (−17, 22) and the number 109 is encoded as (21, −14). Adding the exponents component-wise produces the pair (4, 8). To obtain the number encoded by this pair, a small LUT containing the powers of D2 (encoded as 1.ddddd*2^eeeee) can be used. In the illustrated example, D2^8 = 1.0001100001…*2^8 is multiplied by 2^4 to obtain 1.0001100001…*2^12, corresponding to approximately 4485 in base 10. The exact product of 41 and 109 is 4469 in base 10.
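The sketch below illustrates this procedure; the LUT range of ±32 entries is an assumption for illustration:

```python
import math

# Sketch of 2DLNS multiplication with bases (2, 2.0228), following the
# 41 * 109 example: add exponents, then reconstruct via a small LUT that
# stores powers of the second base in floating-point form m * 2^e.
D2 = 2.0228
lut = {}
for k in range(-32, 33):
    e = math.floor(k * math.log2(D2))
    lut[k] = (D2 ** k / 2.0 ** e, e)    # D2^k = m * 2^e with m in [1, 2)

def mul_2dlns(x, y):
    a, b = x[0] + y[0], x[1] + y[1]     # component-wise exponent addition
    m, e = lut[b]
    return m * 2.0 ** (a + e)           # fast log-to-binary reconstruction

x = (-17, 22)   # encodes ~41
y = (21, -14)   # encodes ~109
print(mul_2dlns(x, y))  # ~4485 (cf. the exact product 4469)
```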
Aspects of the invention relate to systems and methods that use MDLNS, such as 2DLNS described above, to provide multi-dimensional logarithmic data representations for performing computations. Illustratively, using multi-dimensional logarithmic representations of data can increase computational efficiency for applications such as image compression, image or voice recognition, machine learning, etc., performed by edge computing devices.
As seen in
Edge computing device 12 is typically located relatively proximate to local devices 2 to reduce latency associated with data transmission between local devices 2 and edge computing device 12. For example, edge computing device 12 may be installed on top of a wind turbine to receive and process data collected from a local sensor 2 of the wind turbine. As another example, edge computing device 12 may be installed on top of a traffic light to receive and process data transmitted from an autonomous vehicle 2. In some embodiments, edge computing device 12 is physically located at and/or forms a part of local device 2 (i.e., local device 2 may comprise an edge computing device 12).
Edge computing device 12 may, in some cases, be required to run deep neural networks. For example, edge computing device 12 may employ deep neural networks to execute AI applications such as image or video recognition, voice recognition, recommender engines, natural language processing, medical image analysis, etc. In such cases, edge computing device 12 may be configured to assign computational tasks associated with running the neural network to one or more of its hardware accelerators 20. In some embodiments, edge computing device 12 comprises hardware accelerators 20 that are programmable or otherwise custom designed to perform matrix-vector multiplications and/or inner product computations. Although not necessary, hardware accelerator 20 typically incorporates a tile-based architecture. Illustratively, hardware accelerators 20 may perform such computations in a manner that is more computationally efficient compared to using traditional central processing units (CPUs) or graphics processing units (GPUs).
For the purposes of facilitating the description, an assembly of processing units 22 arranged in an array configuration (e.g., as described above) may be referred to herein as a processing core 40. In the
For the purposes of facilitating the description, an assembly of processing cores 40 may be referred to herein as a processing tile. Each processing tile comprises a suitable number of processing cores 40, including, for example, any number in the range of two to sixteen (e.g., 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, or 16). For example, in one example embodiment, a single processing tile of hardware accelerator 20 may comprise eight (8) processing cores 40 and each processing core 40 may comprise sixty-four (64) processing units 22 arranged in a square array of eight (8) rows and eight (8) columns. Hardware accelerator 20 may comprise any suitable number of processing tiles, depending on the processing power required by edge computing device 12. For many applications, edge computing devices 12 comprise hardware accelerators 20 having between sixty-four (64) and five hundred and twelve (512) processing tiles.
Processing cores 40 and the processing units 22 contained therein are controlled by one or more microcontrollers 24 of hardware accelerator 20. Microcontroller 24 may be implemented using one or more of: specifically designed hardware, configurable hardware, programmable data processors configured by the provision of software or firmware capable of executing on the data processors, and special purpose data processors that are specifically programmed, configured, or constructed to control processing units 22 in accordance with methods described herein.
In some embodiments, microcontroller 24 is a reduced instruction set computer (RISC) microcontroller. In such embodiments, microcontroller 24 may comprise one or more of: data memory, instruction memory, program counter, registers, control circuits, and input/output devices.
In some embodiments, each processing core 40 is controlled by its own microcontroller 24. In other embodiments, a single microcontroller 24 of hardware accelerator 20 controls two or more processing cores 40. For example, all of the processing cores 40 that form a processing tile of hardware accelerator 20 may be controlled by a single microcontroller 24.
Microcontroller 24 communicates with processing units 22 and a data memory 30 of hardware accelerator 20 to perform computational tasks that have been assigned to hardware accelerator 20 (e.g., tasks assigned by central processor 14 of edge computing device 12). For example, microcontroller 24 may be configured to provide a load instruction that loads data stored in memory 30 to processing unit 22. The load instruction may be performed in a clock cycle defined by a local clock 26 of hardware accelerator 20. Once the data is loaded to processing unit 22, an arithmetic instruction (e.g., addition, subtraction, multiplication, division) provided by microcontroller 24 in the next clock cycle can be performed on the data loaded in the processing unit 22.
Unlike traditional computer architectures (e.g., Von Neumann based architectures), which require the output data of a processing unit to be stored in memory immediately after an arithmetic operation has been performed, the architecture of processing core 40 and the processing units 22 contained therein allows a series of arithmetic operations to be performed before the final data is output to memory.
In the example illustrated in
In some embodiments, memory 30 is implemented using Static Random Access Memory (SRAM) or other suitable storage technologies to facilitate concurrent load operations and store operations. That is, memory 30 may be implemented using storage technologies that enable one or more sets of data to be loaded to one or more processing units 22 in the same clock cycle as one or more other sets of data (i.e., from one or more other processing units) are stored to memory 30. For example, memory 30 may be implemented using 8T SRAM. Optionally, memory 30 may be pitch-matched to the execution speed of processing units 22.
Illustratively, the architecture of hardware accelerator 20 is globally asynchronous but locally synchronous, allowing each processing core 40 to be operated independently of the others. In embodiments where each processing core 40 comprises its own local clock 26, each processing core 40 can be sped up or slowed down as needed by, for example, microcontroller 24. In other embodiments, a processing tile may comprise a single clock 26 that synchronizes the processing of the processing cores 40 contained therein. The architecture of hardware accelerator 20 avoids the need for a global clock tree, which can consume large amounts of dynamic energy and occupy a large area.
In some embodiments, different processing tiles of hardware accelerator 20 are connected to each other by way of a Network on Chip (NoC) 50. NoC 50 may be dataflow-reconfigurable to enable more flexibility while keeping the power consumption of hardware accelerator 20 relatively low.
In some embodiments, processing units 22 are multiply units that are designed or otherwise configured to perform multiplication operations on input data. In the example illustrated in
In the
As illustrated in
In some cases, it is desirable to multiply a large set of numbers together to find an aggregate product. For example, a large number of multiplication operations may be required when performing inner product computations, matrix multiplication and/or calculations of the type that are commonly found in machine learning and AI applications. In such cases, processing unit 22 may comprise an aggregate adder 70 configured to add a large number of MDLNS numbers to find an aggregate sum thereof.
Aggregate adder 70 comprises a separate and dedicated aggregation channel for each of the 2^(N+M) different combinations of exponent 2-tuples. In some embodiments, aggregate adder 70 comprises 2^(N+M) parallel up-counters, each configured to aggregate a unique (N, M) combination of values. Each up-counter may be a simple digital counter comprising a plurality of D-flops. The D-flops may be connected such that, for each up-counter, the clock input of the D-flop at location n (i.e., "F(n)") is connected to the output of the D-flop at location (n−1), for n=1, …, U−1, where U is the number of bits in the counter. In some embodiments, the first D-flop at n=0 is clocked by a master clock (e.g., clock 26 of processing core 40) running at a desirable clock speed "f" defined by the architecture of hardware accelerator 20.
The outputs of the up-counters are channelized partial sums which must be converted (e.g., by the MDLNS converter) to a number system (e.g., fixed-point) that is recognized by edge computing device 12. In situations where a number P of MDLNS numbers must be aggregated, the final output of processing unit 22 is only computed after P clock cycles of clock 26. After every P cycles, the up-counter values are scaled by (D1^M)*(D2^N). The up-counter values may, in some cases, be summed thereafter by a fixed-point adder of hardware accelerator 20. The fixed-point adder may, in some cases, be embodied as a part of the MDLNS converter.
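The aggregation and reconstruction dataflow may be sketched behaviorally as follows (this is not RTL; unsigned 3-bit exponents and a golden-ratio second base are illustrative assumptions):

```python
from collections import Counter

# Behavioral sketch of aggregate adder 70: one up-counter per exponent
# combination, followed by the scale-and-sum reconstruction of the result.
D1, D2 = 2.0, 2.0 ** 1.6180339887      # bases (2, 2^omega)
N, M = 3, 3                            # exponent bit-widths -> 2^(N+M) channels

def aggregate(products):
    counters = Counter()               # models the 2^(N+M) parallel up-counters
    for a, b in products:
        assert 0 <= a < 2 ** N and 0 <= b < 2 ** M
        counters[(a, b)] += 1          # one count per incoming MDLNS product
    # Reconstruction: scale each channel's count by D1^a * D2^b and sum
    # in fixed point.
    return sum(c * (D1 ** a) * (D2 ** b) for (a, b), c in counters.items())

print(aggregate([(1, 2), (1, 2), (0, 3), (2, 0)]))  # P = 4 products
```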
Illustratively, the high-precision fixed-point addition and final reconstruction step (FRS) implemented by the fixed-point adder to map up-counter values to fixed-point can be performed at a reduced rate (e.g., a rate of f/P Hz). For typical machine learning structures such as convolutional neural networks (CNNs), P can be a number in the range of 10,000 to 100,000 or more. For such applications, the FRS may be implemented in software using, for example, a suitable embedded processor core of edge computing device 12. In some embodiments, hardware accelerator 20 comprises a series of progressive down-sampling integrators configured to convert the MDLNS up-counter values via FRS to fixed-point. The series of progressive down-sampling integrators can be operated in association with processing units 22 to achieve a suitable trade-off between speed, power and chip area of hardware accelerator 20.
In some embodiments, aggregate adder 70 comprises fixed-point adders with barrel shifters in addition to, or in place of, some or all of the up-counters. Such fixed-point adders may, for example, be implemented to compute the 2^N terms, thereby reducing the number of aggregation channels from 2^(N+M) to 2^M.
In some embodiments, processing cores 40 and the processing units 22 contained therein are configured as systolic array matrix-vector multiply units that are connected to a single accumulator. In such embodiments, each processing unit 22 is or otherwise functions as an MDLNS multiplier unit that may be operated to compute a partial result of dot product computations and/or matrix multiplications. Illustratively, configuring the processing units 22 as a systolic array can provide an ordered dataflow and/or allow processing core 40 to exploit properties such as weight-stationary and/or output-stationary dataflow to increase the efficiency and/or throughput of hardware accelerator 20.
In some embodiments, hardware accelerator 20 comprises a nonlinearity and reduction unit for handling activation functions and/or pooling. Activation functions are nonlinear functions in a neural network, such as a convolutional neural network (CNN), that define how the result of a matrix multiplication (i.e., a weighted sum) is transformed into an output. Examples of activation functions include, but are not limited to, the ReLU activation function, the leaky ReLU activation function, the Sigmoid activation function, the Softplus activation function, and any other differentiable nonlinear function. The nonlinearity and reduction unit may be designed or otherwise configured to apply a suitable activation function to the result of a matrix multiplication (e.g., "AX+b" (matrix-matrix) or "Ax+b" (matrix-vector)) performed by processing core 40.
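For example, a minimal sketch of applying a ReLU activation to a matrix-vector result Ax+b (NumPy is used here for brevity):

```python
import numpy as np

# Apply an activation function to the matrix-multiply result Ax + b,
# as the nonlinearity and reduction unit would after the processing core.
def relu(z):
    return np.maximum(z, 0.0)          # ReLU: elementwise max(0, z)

A = np.array([[1.0, -2.0], [3.0, 0.5]])
x = np.array([0.5, 1.0])
b = np.array([-1.0, 0.25])
print(relu(A @ x + b))                 # activation applied to the weighted sum
```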
Methods that may be implemented by hardware accelerators 20 to increase the computational efficiency of deep neural network operations are described below with reference to
Method 200 begins at block 202 by accepting a set of data inputs 201 (training data) and representing the set of data inputs 201 in an MDLNS comprising a first base and a second base. After representing the set of data in the MDLNS, block 202 proceeds by running training of the deep neural network on the inputs 201. As described above, a real number, x, can be encoded in MDLNS as: x = s*D1^a*D2^b, where (D1, D2) are two multiplicatively independent bases and (a, b) is a set of two integers. Block 202 may comprise encoding or otherwise representing a real number, x, using any one of several different possible MDLNSs, for example: 2DLNS with bases (2, 2^ω), where ω is the golden ratio; 3DLNS with bases (2, D2, D3), where D3=D2^2; 3DLNS with non-negative binary exponents and non-positive second-base exponents; MDLNS with non-positive binary exponents and non-negative second-base exponents; MDLNS with fractional exponents for the binary and non-binary bases; MDLNS with bases (2, √2, D) and integer exponents; MDLNS with bases (2, …, D) and integer exponents; MDLNS with specific optimized bases for different dynamic ranges; 3DLNS with optimized non-binary bases for different dynamic ranges; and MDLNS with optimized second bases.
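A hypothetical one-digit 2DLNS encoder along these lines is sketched below; the exhaustive search and the default exponent bit-widths are illustrative assumptions, not the optimized hardware procedure:

```python
import math

def encode_2dlns(x: float, D: float, a_bits: int = 8, b_bits: int = 4):
    """Find sign s and exponents (a, b) so that s * 2^a * D^b ~ x."""
    if x == 0.0:
        return 1, (0, 0)                # zero has no exact MDLNS form; a convention is assumed
    s = 1 if x > 0 else -1
    target = math.log2(abs(x))
    log2_D = math.log2(D)
    best, best_err = (0, 0), float("inf")
    for b in range(-(2 ** (b_bits - 1)), 2 ** (b_bits - 1)):
        a = round(target - b * log2_D)  # best binary exponent for this b
        if abs(a) >= 2 ** (a_bits - 1):
            continue                    # outside the signed a_bits range
        err = abs(target - (a + b * log2_D))
        if err < best_err:
            best, best_err = (a, b), err
    return s, best

omega = (1 + math.sqrt(5)) / 2
print(encode_2dlns(0.207231, 2.0 ** omega))   # sign and (a, b) approximation
```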
Training at block 202 may be performed by inputting numbers represented in any one of the 2DLNS or MDLNS variants described above to hardware accelerator 20. Illustratively, golden-ratio-base 2DLNS may be used in block 202 to provide an initial encoding of weight coefficients and perform dot-product calculations (i.e., using hardware accelerator 20 by taking advantage of the tile-based architecture of processing cores 40). This provides the increased computational efficiencies noted above (e.g., exponentially faster conversion from logarithmic to binary format and reduced exponent size). Training at block 202 results in a determination of an initial set of weight coefficients 203 of the deep neural network.
After determining the initial set of weight coefficients 203 of the deep neural network, method 200 proceeds to block 204. At block 204, an optimization procedure is applied to determine the optimal second base 205 in 2DLNS or MDLNS using the initial set of coefficients 203. The optimization procedure at block 204 may, in some cases, involve determining the second base 205 which results in the minimal mean square error for a fixed first base (e.g., a first base of 2 in particular embodiments). In one embodiment, a mixed-integer global optimization procedure is used to find the optimal bases and the values of the exponents, given the dynamic range of the exponents. The mixed-integer global optimization procedure may be implemented using mixed-integer optimization algorithms for MDLNS with integer exponents and non-binary bases that are real numbers.
Finally, method 200 proceeds to block 206. At block 206, inference is carried out on a set of inputs 207 using the newly determined optimal second base 205 for 2DLNS. For example, weight coefficients may be represented in 2DLNS using the optimal second base. The inference calculations result in a set of outputs 208. The use of optimal-second-base 2DLNS for inference computations by hardware accelerators 20 can result in increased computational efficiency (e.g., exponentially faster conversion from logarithmic to binary format and reduced exponent size), which enables inference to be carried out on edge devices (which are limited by their size or power) or other devices that require low-power, low-area and/or fast inner product computational units. Conversion from binary to logarithmic representation can be obtained via precomputed look-up tables (LUTs). For example, a single LUT containing 256 words or 2K words can be used for 8-bit or 12-bit dynamic ranges, respectively.
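For example, a binary-to-2DLNS LUT for an 8-bit dynamic range might be precomputed as sketched below (the fixed-point scaling i/128 is an assumed example; encode_2dlns refers to the encoder sketched earlier):

```python
# Precomputed binary-to-MDLNS conversion table for an 8-bit dynamic range:
# one 256-word LUT mapping each fixed-point code to its best (a, b) pair.
lut = {i: encode_2dlns(i / 128.0, 2.0 ** omega) for i in range(1, 256)}

sign, (a, b) = lut[27]   # conversion at inference time is a single lookup
```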
In addition to the exemplary aspects described above, the present invention is described in the following examples, which are set forth to aid in the understanding of the invention, and should not be construed to limit in any way the scope of the invention as defined in the claims which follow thereafter.
The following example in the field of digital hearing aids is aimed at demonstrating: a) the importance of the selection of the second base in MDLNS; and b) the importance of the number of digits, as realized by an exponential reduction of the exponent size through the use of two-digit MDLNS approximations.
Consider the 53-tap FIR filter with the following coefficients (coefficients 28-53 are a mirror of 1-26 to guarantee linear phase). The filter is used in digital hearing aids and its objective is to ensure a stop-band attenuation of over 80 dB.
Table 3 above reveals that MDLNS enables a massive reduction of the size of the exponent. Indeed, with ideal (infinite precision) coefficients, a stopband attenuation of −85.362 dB is achieved. With one-digit MDLNS, 9 bits for the exponents and an optimal base of x=0.7278946656, a stopband attenuation of −80.315 dB is achieved. In the case of two-digit MDLNS, with only three bits of the exponents used and an optimal base of x=0.735254518, a stopband attenuation of −81.562 dB is achieved. Thus, the combination of the optimization of the second base and the use of a two-digit representation allows one to avoid the main drawback of classical LNS, namely, the need for large LUTs to execute the (for LNS) difficult arithmetic operations. The role of accurate selection of the second base is underscored by a comparison with the use of a random (non-optimized) second base, which results in a stopband attenuation of approximately −40 dB.
Embodiments of the invention incorporate a two-dimensional logarithmic number system (2DLNS) wherein the selection of the second base D2 is made by considering an optimal-on-average base selection. First, consider some particularly poor choices for D2 (assuming that the first base is D1=2). For example, suppose D2=√2. In this case, the even powers of D2 are perfect powers of 2, and therefore many numbers will have very bad approximations. If one considers other bases (both non-binary), such as, for example, (19, 83), then one may notice a specific phenomenon that ought to be avoided. Indeed, 19^−3*83^2=1.00437…, so this particular pair would also be considered bad. If one considers the numbers of the form 19^a*83^b (where a, b are integers), they will appear in clusters. Numbers within a cluster will have very good approximations, whereas numbers outside the clusters will have poor approximations unless one uses extremely large exponents (a and b), which is impractical. Therefore, the theoretical restriction that the bases be multiplicatively independent is a necessary but not a sufficient condition in the selection of the bases for optimal computational performance.
Thus, a 'good' second base, D, would be one such that log2(D) (the logarithm of D in base 2) is a badly approximable irrational number. Since the number that is known to be the 'worst' in terms of its rational approximation is the golden ratio (ω=(1+√5)/2=1.618…), a very good (universal) second base is D=2^ω=2^1.618…=3.069… . For convenience, the base D/2 may be used instead. Indeed, this base works well: if an adaptive digital filter (for example) is implemented with 2DLNS, then using bases (2, 2^ω) appears computationally optimal and can be selected if one wants good performance in the average sense. This can be referred to as the optimal-on-average base selection. If the first base is not 2, then the general rule for selecting the optimal-on-average second base is: D2=D1^ω.
Experiments were conducted by the inventors to find the optimal second base for the matrix-multiplication task to be tested with MDLNS (specifically, 2DLNS in this example). For image understanding applications, the main computational operation is Wx+b, where W is a (non-square) matrix and x and b are vectors. The data involved in computing Wx+b obey a Gaussian distribution located between −2 and +2.
Based on the foregoing, the interval [−2, 2] is divided into 256 equally spaced intervals, and each of the 256 numbers is approximated in the form 2^a*D^b. The mean square error (MSE) with respect to D is minimized, assuming that every number in this interval has a weight provided by the Gaussian distribution. During supervised training, inputs are provided to the deep neural network and the network output is compared to the target output. Error is measured as the difference between the target output and the network output, and it is desirable to minimize the average of the sum of the squares of these errors (the mean square error). For every particular exponent size, and a fixed first base (which is fixed at 2 in the example embodiment), the optimal second base is derived for which the MSE is minimized. The following Table 4 illustrates the outcome:
The optimal second base has to be located in the interval [1/√2, √2], which explains the numbers seen for the optimal second base in Table 4 above, calculated with 5 decimal digits of precision. As seen in Table 4, the mean square error decreases as a function of the number of bits of the non-binary exponents.
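A coarse sketch of this experiment is given below; a simple grid scan stands in for the mixed-integer optimization, Gaussian weights with sigma = 1 are assumed, and encode_2dlns is reused from the earlier sketch:

```python
import math

# 256 equally spaced points on [-2, 2], Gaussian-weighted squared error of
# the best 2^a * D^b approximation, minimized over the second base D.
def mse_for_base(D: float, b_bits: int) -> float:
    total, wsum = 0.0, 0.0
    for i in range(256):
        x = -2.0 + 4.0 * (i + 0.5) / 256.0
        w = math.exp(-x * x / 2.0)               # Gaussian weight (sigma = 1 assumed)
        s, (a, b) = encode_2dlns(x, D, b_bits=b_bits)
        total += w * (abs(x) - 2.0 ** a * D ** b) ** 2
        wsum += w
    return total / wsum

# Scan candidate bases over [1/sqrt(2), sqrt(2)] in steps of 0.005:
candidates = [1.0 / math.sqrt(2) + 0.005 * k for k in range(142)]
best_D = min(candidates, key=lambda D: mse_for_base(D, b_bits=4))
print(best_D, mse_for_base(best_D, b_bits=4))
```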
Table 5 below shows a numerically obtained comparison, with respect to the mean square error of the input data approximations, between (a) specifically optimized second bases (different for every given bit-size) with a first base of 2; (b) bases (2, 3) 2DLNS; and (c) optimal-on-average bases (2, 2^ω) 2DLNS. The data are assumed to obey a Gaussian distribution located between [−2, 2].
Several important conclusions can be drawn from Table 5. If one uses a specifically optimized second base, the mean square error improves by a factor of more than two with each additional bit added to the exponents. With the selection of the optimal-on-average bases (2, 2^ω), one obtains a very robust reduction of the error, by a factor slightly larger than two; this selection of bases is almost always better than the bases (2, 3) 2DLNS, with one exception (6-bit exponents).
In digital signal processing, there is a fundamental difference between the use of MDLNS in FIR/IIR filtering and its use in adaptive filtering. The foregoing discussion underscores the importance of carefully selecting the second base to secure a highly efficient MDLNS inner product architecture, including very small exponents, very small LUTs and very small adders. If, on the other hand, one selects the second base randomly, the performance of the FIR/IIR architecture will be substantially sub-optimal.
In the case of adaptive filtering, selection of an optimal second base is not an option, since the coefficients of the filter change iteratively, depending on the adaptive filtering algorithm. Thus, one can use the optimal-on-average technique for selection of the second base, as discussed above.
The foregoing concepts can also be applied to inner product computations with data representation using three-dimensional logarithmic number systems (3DLNS). For optimal-on-average bases selection, one has to look for a 'badly approximable pair of real numbers'. While there have been fewer investigations in this area, some explicit estimations for a pair of irrational numbers that cannot be approximated well by rational numbers are discussed in T. W. Cusick, The two-dimensional Diophantine approximation constant—II, Pacific Journal of Mathematics, vol. 105, pp. 53-67, 1983. Cusick's results have been extended by Keith Briggs in Some explicitly badly approximable pairs <arxiv.org/pdf/math/0211143.pdf>, Oct. 25, 2018; the pair that he found to be particularly difficult to approximate by pairs of rational numbers is (α, α²), where α=2cos(2π/7). Therefore, the triple of bases (2, 2^α, 2^(α²)), which is equivalent (up to factors of two absorbed by the binary exponent) to (2, 1.18671…, 1.469117…), is a good practical choice for an optimal-on-average bases selection in the case of 3DLNS applications.
The numbers in the table below were obtained using mixed-integer optimization techniques (wherein the exponents are integers and the non-binary bases are real numbers).
The examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein.
Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the invention. The scope of the claims should not be limited by the illustrative embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole. For example, various features are described herein as being present in “some embodiments”. Such features are not mandatory and may not be present in all embodiments. Embodiments of the invention may include zero, any one or any combination of two or more of such features. This is limited only to the extent that certain ones of such features are incompatible with other ones of such features in the sense that it would be impossible for a person of ordinary skill in the art to construct a practical embodiment that combines such incompatible features. Consequently, the description that “some embodiments” possess feature A and “some embodiments” possess feature B should be interpreted as an express indication that the inventors also contemplate embodiments which combine features A and B (unless the description states otherwise or features A and B are fundamentally incompatible).
This application claims priority from, and the benefit under 35 U.S.C. § 119 of, U.S. Patent Application No. 63/109,136 filed on Nov. 3, 2020 and entitled "MULTI-DIMENSIONAL LOGARITHMIC NUMBER SYSTEM PROCESSOR FOR INNER PRODUCT COMPUTATIONS", which is incorporated herein by reference in its entirety for all purposes.