Digital correlators incorporating analog neural network structures operated on a bit-sliced basis

Information

  • Patent Grant
  • Patent Number
    5,115,492
  • Date Filed
    Friday, December 14, 1990
  • Date Issued
    Tuesday, May 19, 1992
Abstract
Plural-bit digital input signals to be subjected to weighted summation are bit-sliced; and a number N of respective first through N.sup.th weighted summations of the bits of the digital input signals in each bit slice are performed, resulting in a respective set of first through N.sup.th partial weighted summation results. Each weighted summation of a bit slice of the digital input signals is performed using a capacitive network that generates partial weighted summation results in the analog regime; and analog-to-digital conversion circuitry digitizes the partial weighted summation results. Weighted summations of the digitized partial weighted summation results of similar ordinal number are then performed to generate first through N.sup.th final weighted summation results in digital form, which results are respective correlations of the pattern of the digital input signals with the patterns of weights established by the capacitive networks. A neural net layer can be formed by combining such weighted summation circuitry with digital circuits processing each final weighted summation result non-linearly, with a system function that is sigmoidal.
Description

The invention relates to analog computer structures that perform correlations of sampled-data functions and can be incorporated into neural networks which emulate portions of a brain in operation, and more particularly, to adapting such analog computer structures for use with digital electronic circuits.
BACKGROUND OF THE INVENTION
Computers of the von Neumann type architecture have limited computational speed owing to the communication limitations of the single processor. These limitations can be overcome if a plurality of processors are utilized in the calculation and are operated at least partly in parallel. This alternative architecture, however, generally leads to difficulties associated with programming complexity. Therefore, it is often not a good solution. Recently, an entirely different alternative that does not require programming has shown promise. The networking ability of the neurons in the brain has served as a model for the formation of a highly interconnected set of analog processors, called a "neural network" or "neural net" that can provide computational and reasoning functions without the need of formal programming. The neural nets can learn the correct procedure by experience rather than being preprogrammed for performing the correct procedure. The reader is referred to R. P. Lippmann's article "An Introduction to Computing With Neural Nets" appearing on pages 4-21 of the April 1987 IEEE ASSP MAGAZINE (0740-7467/87/0400-0004/$10.00 April 1987 IEEE), incorporated herein by reference, for background concerning neural nets.
Neural nets are composed of a plurality of neuron models, processors each exhibiting "axon" output signal response to a plurality of "synapse" input signals. In a type of neural net called a "perceptron", each of these processors calculates the weighted sum of its "synapse" input signals, which are weighted by respective weighting values that may be positive- or negative-valued, and responds non-linearly to the weighted sum to generate the "axon" output response. This relationship may be described in mathematical symbols as follows.

y.sub.j =f(.SIGMA..sub.i=1.sup.M W.sub.i,j x.sub.i) for j=1, 2, . . . N (1)

Here, i indexes the input signals of the perceptron, of which there are an integral number M, and j indexes its output signals, of which there are an integral number N. W.sub.i,j is the weighting of the i.sup.th input signal in making up the j.sup.th output signal, valid at such low input signal levels that the non-linear function f is approximately linear. At higher absolute values of its argument, the function f no longer exhibits linearity but rather exhibits a reduced response to the weighted sum .SIGMA..sub.i=1.sup.M W.sub.i,j x.sub.i. This type of non-linear response is termed "sigmoidal". The weighted summation of a large number of sampled-data terms can be viewed as the process of correlating the sampled-data function described by those sampled-data terms with the sampled-data function described by the pattern of weights; and the analog processor used as a neuron model in a "perceptron" can be viewed as a correlator with non-linear circuitry exhibiting a sigmoidal response connected thereafter.
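By way of illustration only, the short Python sketch below models equation (1) for one such layer: each output is the correlation of the input pattern with one column of the weight pattern, passed through a sigmoidal non-linearity. The hyperbolic-tangent squashing function and all numerical values are arbitrary choices made for the example, not particulars of the invention.

    import math

    def perceptron_layer(x, W, squash=math.tanh):
        """Equation (1): y_j = f(sum over i of W[i][j] * x[i])."""
        M, N = len(W), len(W[0])
        outputs = []
        for j in range(N):
            # weighted summation = correlation of the input pattern with weight column j
            s = sum(W[i][j] * x[i] for i in range(M))
            outputs.append(squash(s))      # sigmoidal "axon" response
        return outputs

    x = [0.5, -1.0, 0.25]                  # M = 3 synapse input samples
    W = [[0.2, -0.7],                      # M-by-N weight pattern, N = 2
         [0.4, 0.1],
         [-0.3, 0.6]]
    print(perceptron_layer(x, W))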
A more complex artificial neural network arranges a plurality of perceptrons in hierarchic layers, the output signals of each earlier layer providing input signals for the next succeeding layer. Those layers preceding the output layer providing the ultimate output signal(s) are called "hidden" layers.
In the present-day development of the integrated electronic circuitry art, the weighted summation of a large number of terms, each of which has resolution that would require plural-bit digital sampling, can be done appreciably faster and at less cost in integrated circuit die area by processing in the analog regime rather than in the digital regime. Using capacitors to perform weighted summation in accordance with Coulomb's Law provides neural nets of given size operating at given speed that consume less power than neural nets whose analog processors use resistors to implement weighted summation in accordance with Ohm's Law. Y. P. Tsividis and D. Anastassiou in a letter "Switched-Capacitor Neural Networks" appearing in ELECTRONICS LETTERS, Aug. 27th 1987, Vol. 23, No. 18, pages 958-959 (IEE) describe one method of implementing weighted summation in accordance with Coulomb's Law. Their method, a switched capacitor method, is useful in analog sampled-data neural net systems. Methods of implementing weighted summation in accordance with Coulomb's Law that do not rely on capacitances being switched, and that avoid the complexity of the capacitor switching elements and associated control lines, are also known.
U.S. patent application Ser. No. 366,838 entitled "NEURAL NET USING CAPACITIVE STRUCTURES CONNECTING INPUT LINES AND DIFFERENTIALLY SENSED OUTPUT LINE PAIRS" describes a type of neural net in which each analog synapse input signal voltage drives a respective input line from a low source impedance. Each input line connects via a respective weighting capacitor to each of a plurality of output lines. The output lines are paired, with the capacitances of each pair of respective weighting capacitors connecting a pair of output lines to one of the input lines summing to a prescribed value. A respective pair of output lines is associated with each axonal output response to be supplied from the neural net, and the differential charge condition on each pair of output lines is sensed to generate a voltage that describes a weighted summation of the synapse input signals supplied to the neural net. A respective operational amplifier connected as a Miller integrator can be used for sensing the differential charge condition on each pair of output lines. Each weighted summation of the synapse input signals is then non-linearly processed in a circuit with sigmoidal transfer function to generate a respective axonal output response. This type of neural net is particularly well-suited for use where all input synapse signals are always of one polarity, since the single-polarity synapse input signals may range over the entire operating supply.
U.S. patent application Ser. No. 366,839 entitled "NEURAL NET USING CAPACITIVE STRUCTURES CONNECTING OUTPUT LINES AND DIFFERENTIALLY DRIVEN INPUT LINE PAIRS" describes a type of neural net in which each analog synapse input signal voltage is applied in push-pull from low source impedances to a respective pair of input lines. Each pair of input lines connect via respective ones of a respective pair of weighting capacitors to each of a plurality of output lines. The capacitances of each pair of respective weighting capacitors connecting a pair of input lines to one of the output lines sum to a prescribed value. Each output line is associated with a respective axonal output response to be supplied from the neural net, and the charge condition on each output line is sensed to generate a voltage that describes a weighted summation of the synapse input signals supplied to the neural net. A respective operational amplifier connected as a Miller integrator can be used for sensing the charge condition on each output line. Each weighted summation of the synapse input signals is then non-linearly processed in a circuit with sigmoidal transfer function to generate a respective axonal output response. This type of neural net is better suited for use where input synapse signals are sometimes positive in polarity and sometimes negative in polarity.
U.S. Pat. No. 5,039,871, issued Aug. 13, 1991 to W. E. Engeler, entitled "CAPACITIVE STRUCTURES FOR WEIGHTED SUMMATION, AS USED IN NEURAL NETS" and assigned to General Electric Company describes preferred constructions of pairs of weighting capacitors for neural net layers, wherein each pair of weighting capacitors has a prescribed differential capacitance value and is formed by selecting each of a set of component capacitive elements to one or the other of the pair of weighting capacitors. U.S. Pat. No. 5,039,870, issued Aug. 13, 1991 to W. E. Engeler, entitled "WEIGHTED SUMMATION CIRCUITS HAVING DIFFERENT-WEIGHT RANKS OF CAPACITIVE STRUCTURES" and assigned to General Electric Company describes how weighting capacitors can be constructed on a bit-sliced or binary-digit-sliced basis. These weighting capacitor construction techniques are applicable to neural nets that utilize digital input signals, as will be presently described, as well as being applicable to neural nets that utilize analog input signals.
The neural nets as thus far described normally utilize analog input signals that may be sampled-data in nature. A paper by J. J. Bloomer, P. A. Frank and W. E. Engeler entitled "A Preprogrammed Artificial Neural Network Architecture in Signal Processing" published in December 1989 by the GE Research & Development Center describes the application of push-pull ternary samples as synapse input signals to neural network layers, which push-pull ternary samples can be generated responsive to single-bit digital samples.
U.S. patent application Ser. No. 546,970 filed Jul. 2, 1990 by W. E. Engeler, entitled "NEURAL NETS SUPPLIED SYNAPSE SIGNALS OBTAINED BY DIGITAL-TO-ANALOG CONVERSION OF PLURAL-BIT SAMPLES" and assigned to General Electric Company describes how to process plural-bit digital samples on a digit-slice basis through a neural net layer. Partial weighted summation results, obtained by processing each bit slice through a neural net layer, are combined in final weighted summation processes to generate final weighted summation results. The final weighted summation results are non-linearly amplified to generate respective axonal output responses. After the weighted summation and non-linear amplification procedures have been carried out in the analog regime, the axonal output responses are digitized, if digital signals are desired in subsequent circuitry.
U.S. patent application Ser. No. 561,404 filed Aug. 1, 1990 by W. E. Engeler, entitled "NEURAL NETS SUPPLIED DIGITAL SYNAPSE SIGNALS ON A BIT-SLICE BASIS" and assigned to General Electric Company describes how to process plural-bit digital samples on a bit-slice basis through a neural net layer. Partial weighted summation results, obtained by processing each successive bit slice through the same capacitive weighting network, are combined in final weighted summation processes to generate final weighted summation results. The final weighted summation results are non-linearly amplified to generate respective axonal output responses. After the weighted summation and non-linear amplification procedures have been carried out in the analog regime, the axonal output responses are digitized, if digital signals are desired in subsequent circuitry. Processing the bit slices of the plural-bit digital samples through the same capacitive weighting network provides good assurance that the partial weighted summation results track one another and are scaled in exact powers of two respective to each other.
The invention herein described differs from that described in U.S. patent application Ser. No. 561,404 in that the final weighted summation processes used to combine partial weighted summation results, obtained by processing each bit slice through a neural net layer, are carried out in the digital, rather than the analog, regime to generate final weighted summation results. That is, the sampled-data function described by the digital input signals is correlated with the sampled-data function described by each pattern of weights established with the weighting capacitors to generate a respective digital correlation signal. Performing the final weighted summation processes in the digital regime avoids the need for a further array of capacitive structures for performing the final weighted summation; a digital accumulator circuit is used instead, which tends to be more economical of area on a monolithic integrated-circuit die. Also, performing the final weighted summation process in the digital regime has high accuracy since it avoids undesirable non-monotonic non-linearities that can be introduced by inaccuracies in the scaling of the capacitances of weighting capacitors when performing the final weighted summation in the analog regime as described in U.S. patent application Ser. No. 561,404. Performing the final weighted summation processes in the digital regime is particularly advantageous in large neural net layers formed using a plurality of monolithic integrated circuits for processing a set of synapse input signals, since corruption of desired signals by stray pick-up of electrical signals in the interconnections between the monolithic integrated circuits can be remedied if the desired signals are digital in nature.
In neural networks using digital correlators of this new type, the non-linear circuitry exhibiting a sigmoidal response used after each digital correlator has to be digital in nature, rather than being analog in nature as in previous neural networks. Neural nets employing capacitors in accordance with the invention lend themselves to being used in performing parts of the computations needed to implement a back-propagation training algorithm. The determination of the slope of the non-linear transfer function, which determination is necessary when training a neural net layer using a back-propagation training algorithm, can be simply accomplished in certain digital non-linear circuitry as will be described further on in this specification. This contrasts with the greater difficulty of determining the slope of the non-linear transfer function in analog non-linear circuitry.
The back-propagation training algorithm is an iterative gradient algorithm designed to minimize the mean square error between the actual output of a multi-layer feed-forward neural net and the desired output. It requires continuous, differentiable non-linearities in the transfer function of the non-linear circuitry used after the weighted summation circuits in the neural net layer. A recursive algorithm starting at the output nodes and working back to the first hidden layer is used iteratively to adjust weights in accordance with the following formula.
W.sub.i,j (t+1)=W.sub.i,j (t)-.eta..delta..sub.j x.sub.i (2)
In this equation W.sub.i,j (t) is the weight from hidden node i (or, in the case of the first hidden layer, from an input node) to node j at time t; x.sub.i is the output of node i (or, in the case of the first hidden layer, an input signal); .eta. is a gain term introduced to maintain stability in the feedback procedure used to minimize the mean square errors between the actual output(s) of the perceptron and its desired output(s); and .delta..sub.j is a derivative of error. The general definition of .delta..sub.j is the change in error energy from output node j of a neural net layer with a change in the weighted summation of the input signals used to supply that output node j.
Lippmann presumes that a particular sigmoid logistic non-linearity is used. Presuming the non-linearity of the processor response is not defined as restrictively as Lippmann defines it, .delta..sub.j can be more particularly defined as in equation (3), following, if node j is an output node, or as in equation (4), following, if node j is an internal hidden node.
.delta..sub.j =z.sub.j ' (d.sub.j -z.sub.j) (3)

.delta..sub.j =z.sub.j '.SIGMA..sub.k .delta..sub.k W.sub.j,k (4)

In equation (3) d.sub.j and z.sub.j are the desired and actual values of output response from the output layer and z.sub.j ' is the differential response of z.sub.j to the non-linearity in the output layer--i.e., the slope of the transfer function of that non-linearity. In equation (4) the summation index k runs over all nodes in the neural net layer succeeding the hidden node j under consideration and W.sub.j,k is the weight between node j and each such node k. The term z.sub.j ' is defined in the same way as in equation (3).
Using the general definition of the z.sub.j ' term appearing in equations (3) and (4), rather than replacing that general term by the specific value of z.sub.j ' associated with a sigmoid logistic non-linearity, is the primary difference between the training algorithm as described here and as described by Lippmann. Also, Lippmann defines .delta..sub.j in opposite polarity from equations (2), (3) and (4) above.
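Restated as code, equations (2) through (4) take the form of the Python fragment below, an illustrative sketch only. The slope z.sub.j ' of the non-linearity and the gain .eta. are supplied by the caller, and the sign conventions follow the text above (the error derivative is subtracted in the weight update).

    def delta_output(z, z_prime, d):
        """Equation (3): delta_j = z_j' * (d_j - z_j), for an output node j."""
        return z_prime * (d - z)

    def delta_hidden(z_prime, deltas_next, W_row):
        """Equation (4): delta_j = z_j' * sum over k of delta_k * W_j,k,
        where k runs over the nodes of the succeeding layer."""
        return z_prime * sum(d_k * w_jk for d_k, w_jk in zip(deltas_next, W_row))

    def update_weight(w, eta, delta_j, x_i):
        """Equation (2): W_i,j(t+1) = W_i,j(t) - eta * delta_j * x_i."""
        return w - eta * delta_j * x_i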
During training of the neural net, prescribed patterns of input signals are applied in sequence, repetitively, for which patterns of input signals there are corresponding prescribed patterns of output signals known. The pattern of output signals generated by the neural net, responsive to each prescribed pattern of input signals, is compared to the prescribed pattern of output signals to develop error signals, which are used to adjust the weights per equation (2) as the pattern of input signals is repeated several times, or until the error signals are detected as being negligibly valued. Then training is done with the next set of patterns in the sequence. During extensive training the sequence of patterns may be recycled.
SUMMARY OF THE INVENTION
Plural-bit digital input signals to be subjected to weighted summation in a digital correlator are bit-sliced; and a number N of respective first through N.sup.th weighted summations of the bits of the digital input signals in each bit slice are performed, resulting in a respective set of analog first through N.sup.th partial weighted summation results. Each weighted summation of a bit slice of the digital input signals is performed using a capacitive network that generates partial weighted summation results in the analog regime, the capacitive network comprising a set of weighting capacitors defining the function with which the plural-bit digital input signals are to be correlated. This capacitive network can be one of the types taught for use with analog input signals in U.S. Pat. Nos. 5,039,870 and 5,039,871, for example. In accordance with an aspect of the invention, the analog first through N.sup.th partial weighted summation results are digitized by analog-to-digital conversion circuitry, then weighted summations of the digitized partial weighted summation results of similar ordinal number are performed in digital circuitry to generate digital first through N.sup.th final weighted summation results.
In accordance with a further aspect of the invention, in a processor for inclusion in a neural net, the final weighted summation result is non-linearly processed in digital circuitry with sigmoidal response to generate a digital axonal output response. A further aspect of the invention is digital circuitry with sigmoidal response.
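To illustrate the overall arithmetic, the Python sketch below models the signal flow for one output of such a correlator: each bit slice of the two's-complement input words is weighted and summed (the step the capacitive network performs in the analog regime), each partial result is digitized, and the digitized partial results are then combined digitally with successive powers-of-two weighting, the sign-bit slice entering with negative weight. The word length, the weights and the assumption of ideal analog-to-digital conversion are arbitrary choices for the example; the circuitry described further on realizes the sign handling with a selective complementor and a carry, which is arithmetically equivalent to the subtraction shown here.

    def bit_sliced_correlation(words, weights, bits=8):
        """Final weighted summation of two's-complement input words, formed by
        combining digitized per-bit-slice partial weighted summation results."""
        total = 0
        for b in range(bits):                       # least significant slice first, sign slice last
            bit_slice = [(w >> b) & 1 for w in words]
            # partial weighted summation of one bit slice (done by the capacitive
            # network in the analog regime, then digitized); ideal conversion assumed
            partial = sum(wt * bit for wt, bit in zip(weights, bit_slice))
            if b == bits - 1:                       # sign-bit slice of two's-complement words
                total -= partial << b
            else:
                total += partial << b
        return total

    words = [13, -7, 100]                           # plural-bit digital input signals
    weights = [3, -5, 2]                            # pattern established by the weighting capacitors
    assert bit_sliced_correlation(words, weights) == sum(wt * w for wt, w in zip(weights, words))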

BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 is a schematic diagram of a prior-art single-ended charge sensing amplifier comprising a Miller integrator with resetting circuitry that compensates for input offset error in the differential-input operational amplifier to which the Miller feedback capacitor provides degenerative feedback; this type of single-ended charge sensing amplifier and a balanced version thereof are preferred over simpler Miller integrators for the charge sensing amplifiers used in the neural nets of the invention.
FIGS. 2, 3 and 4 are schematic diagrams of three different apparatuses, each for performing a plurality of weighted summation procedures in parallel on a bit slice of input signals to generate analog partial weighted summation results sequentially, as originally described by the inventor in U.S. patent application Ser. No. 561,404.
FIG. 5 is a schematic diagram of apparatus for digitizing analog partial weighted summation results as sequentially supplied from one of the FIG. 2, FIG. 3 and FIG. 4 apparatuses and then digitally performing a plurality of weighted summation procedures each of which weighted summation procedures weights and sums a respective succession of digitized sequentially supplied data in accordance with an aspect of the invention.
FIGS. 6A and 6B, arranged respectively on the left and on the right of the other, together form a schematic diagram (generally referred to as FIG. 6) of a neural net layer including any of the FIG. 2, FIG. 3 and FIG. 4 apparatuses for performing a plurality of partial weighted summation procedures; an array of analog-to-digital converters for digitizing the partial weighted summation results; possibly a FIG. 5 apparatus for digitally performing a plurality of digital final weighted summation procedures; and an array of digital non-linear processors generating digital axonal responses to the digital final weighted summation results, each of which alternative neural net layers embodies the invention in a respective one of its aspects.
FIG. 7 is a schematic diagram of a digital non-linear processor that can be used in neural net layers embodying the invention.
FIG. 8 is a graph showing the non-linear transfer function of the FIG. 7 digital non-linear processor.
FIGS. 9 and 10 are schematic diagrams of two further apparatuses, each for performing a plurality of weighted summation procedures in parallel on a bit slice of input signals, as originally described by the inventor in U.S. patent application Ser. No. 561,404.
FIG. 11 is a schematic diagram of apparatus for digitizing analog partial weighted summation results as sequentially supplied from one of the FIG. 9 and FIG. 10 apparatuses and then digitally performing a plurality of weighted summation procedures, each of which weighted summation procedures weights and sums a respective succession of digitized sequentially supplied data in accordance with an aspect of the invention.
FIGS. 12A and 12B, arranged respectively on the left and on the right of the other, together form a schematic diagram (generally referred to as FIG. 12) of a neural net layer including two ranks of either of the FIG. 9 and FIG. 10 apparatuses for performing a plurality of partial weighted summation procedures; two ranks of analog-to-digital converters for digitizing the partial weighted summation results; possibly two ranks of FIG. 11 apparatus for digitally performing a plurality of digital final weighted summation procedures; and an array of digital non-linear processors generating digital axonal responses to the digital final weighted summation results, each of which alternative neural net layers embodies the invention in a respective one of its aspects.
FIGS. 13A and 13B, arranged respectively on the left and on the right of the other, together form a FIG. 13 which is a schematic diagram of a modification that can be made manifold times to the neural net shown in FIGS. 2, 5 and 6; FIGS. 3, 5 and 6; FIGS. 9, 11 and 12; or FIGS. 10, 11 and 12 for the programmable weighting of the capacitances used in performing weighted summations of synapse signals.
FIGS. 14A and 14B, arranged respectively on the left and on the right of the other, together form a FIG. 14 which is a schematic diagram of a modification that can be made manifold times to the neural net shown in FIGS. 4, 5 and 6, for the programmable weighting of the capacitances used in performing weighted summations of synapse signals.
FIGS. 15A and 15B, arranged respectively on the left and on the right of the other, together form a FIG. 15 which is a schematic diagram of a modification that can be made manifold times to the neural net shown in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12, for the programmable weighting of the capacitances used in performing weighted summations of synapse signals.
FIGS. 16A and 16B, arranged respectively on the left and on the right of the other, together form a FIG. 16 that is a schematic diagram of a neural net layer in which both forward propagation and back propagation through a capacitive network are carried out with balanced signals, in which digital capacitors have their capacitance values programmed from respective word storage locations within a digital memory, and in which the signs of the digital signals being weighted and summed are taken into account in the final weighted summation procedure performed on a digital basis.
FIGS. 17A and 17B, arranged respectively on the left and on the right of the other, together form a FIG. 17 that is a schematic diagram of an alternative neural net layer in which both forward propagation and back propagation through a capacitive network are carried out with balanced signals, and in which digital capacitors have their capacitance values programmed from respective word storage locations within a digital memory. In the FIG. 17 neural net layer the signs of the digital signals being weighted and summed are taken into account in the partial weighted summation procedure performed on an analog basis.
FIG. 18 is a schematic diagram of a back-propagation processor used for training a neural net having layers as shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6 modified manifoldly per FIG. 13; or a neural net having layers as shown in FIGS. 4, 5 and 6 modified manifoldly per FIG. 14; or a neural net having layers as shown in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12 modified manifoldly per FIG. 15; or a neural net having layers as shown in FIG. 16; or a neural net having layers as shown in FIG. 17.
FIG. 19 is a schematic diagram of a system having a plurality of neural net layers as shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6 modified manifoldly per FIG. 13; or in FIGS. 4, 5 and 6 modified manifoldly per FIG. 14; or in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12 modified manifoldly per FIG. 15; or in FIG. 16; or in FIG. 17, each of which neural net layers has a respective back-propagation processor per FIG. 18 associated therewith.
FIGS. 20A and 20B, arranged respectively on the left and on the right of the other, together form a FIG. 20 that is a schematic diagram of an alternative modification that can be made manifold times to the neural net shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6, for the programmable weighting of the capacitances used in performing weighted summations of synapse signals.
FIGS. 21A and 21B, arranged respectively on the left and on the right of the other, together form a FIG. 21 that is a schematic diagram of an alternative modification that can be made manifold times to the neural net shown in FIGS. 4, 5 and 6, for the programmable weighting of the capacitances used in performing weighted summations of synapse signals.
FIGS. 22A and 22B, arranged respectively on the left and on the right of the other, together form a FIG. 22 that is a schematic diagram of an alternative modification that can be made manifold times to the neural net shown in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12, for the programmable weighting of the capacitances used in performing weighted summations of synapse signals.
FIGS. 23A and 23B, arranged respectively on the left and on the right of the other, together form a FIG. 23 that is a schematic diagram of a neural net layer in which both forward propagation and back propagation through a capacitive network are carried out with balanced signals, in which digital capacitors have their capacitance values programmed from counters, and in which the signs of the digital signals being weighted and summed are taken into account in the final weighted summation procedure performed on a digital basis.
FIGS. 24A and 24B, arranged respectively on the left and on the right of the other, together form a FIG. 24 that is a schematic diagram of an alternative neural net layer in which both forward propagation and back propagation through a capacitive network are carried out with balanced signals, and in which digital capacitors have their capacitance values programmed from counters. In the FIG. 24 neural net layer the signs of the digital signals being weighted and summed are taken into account in the partial weighted summation procedure performed on an analog basis.
FIG. 25 is a schematic diagram of the arrangement of stages in the counters shown in FIGS. 20-24.
FIG. 26 is a schematic diagram of the logic elements included in each counter stage in the counters shown in FIGS. 20-25.

DETAILED DESCRIPTION
FIG. 1 shows a single-ended charge sensing amplifier QS.sub.K of a preferable type for implementing the single-ended charge sensing amplifiers used in modifications of the FIGS. 9 and 10 circuitry. The charge sensing amplifier QS.sub.K is essentially a Miller integrator that includes a differential-input operational amplifier OA having a Miller feedback capacitor MC connecting from its output connection OUT back to its inverting (-) input connection during normal operation. The non-inverting (+) input connection of the operational amplifier OA is connected to a fixed potential in the single-ended charge sensing amplifier QS.sub.K, which fixed potential is shown as having a value (V.sub.SS +V.sub.DD)/2 midway between a relatively high potential (V.sub.DD) and a relatively low potential (V.sub.SS). During normal charge-sensing operation a relatively low potential (V.sub.SS) is applied via the RESET terminal to a logic inverter INV that responds to apply a relatively high potential (V.sub.DD) to a transmission gate TG1. The transmission gate TG1 is rendered conductive to connect the output connection OUT of operational amplifier OA to capacitor MC to complete its connection as the Miller feedback capacitor. The relatively low potential applied via the RESET terminal conditions a transmission gate TG2 and a transmission gate TG3 both to be non-conductive. QS.sub.K is a preferred charge sensing amplifier because differential input offset error in the operational amplifier OA is compensated against, owing to the way the Miller integrator is reset.
During periodic reset intervals for the integrator a relatively high potential (V.sub.DD) is applied via RESET terminal to condition transmission gates TG2 and TG3 each to be conductive and to condition the logic inverter INV output potential to go low, which renders transmission gate TG1 non-conductive. The conduction of transmission gate TG2 connects the output connection OUT of operational amplifier OA directly to its inverting (-) input connection, completing a feedback connection that forces the inverting (-) input connection to the differential input offset error voltage, which voltage by reason of transmission gate TG3 being conductive is stored on the Miller capacitor MC. When normal charge-sensing operation is restored by RESET terminal going low, the differential input offset error bias remains stored on the Miller capacitor MC, compensating against its effect upon charge sensing.
Supposing the operational amplifier OA to be a differential output type having balanced output connections OUT and OUTBAR, a balanced version of the charge sensing amplifier QS.sub.K can be formed by disconnecting the non-inverting (+) input connection of the operational amplifier OA from a point of fixed potential having a value (V.sub.SS +V.sub.DD)/2. Instead, the non-inverting (+) input connection of the operational amplifier OA is arranged to have a feedback connection from the OUTBAR output connection of the operational amplifier OA similar to the feedback connection from the OUT output connection of the operational amplifier OA to its inverting (-) input connection. This balanced version of the charge sensing amplifier QS.sub.K is shown in FIGS. 2-5, 9-11, 13-17 and 20-24.
M is a positive plural integer indicating the number of input signals to the FIG. 2, 3, 4, 9 or 10 weighted summation apparatus, and N is a positive plural integer indicating the number of output signals the FIG. 2, 3, 4, 9 or 10 apparatus can generate. To reduce the written material required to describe operation of the weighted summation apparatuses in FIGS. 2, 3, 4, 9 and 10 of the drawing, operations using replicated elements will be described in general terms; using a subscript i ranging over all values one through M for describing operations and circuit elements as they relate to the (column) input signals x.sub.1, x.sub.2, . . . x.sub.M ; and using a subscript j ranging over all values one through N for describing operations and apparatus as they relate to the (row) output signals y.sub.1, y.sub.2, . . . y.sub.N. That is, i and j are the column and row numbers used to describe particular portions of the FIGS. 2, 3, 4, 9 and 10 weighted summation apparatuses and modifications of those apparatuses.
The FIG. 2 apparatus performs a plurality of weighted summation procedures in parallel on each successive bit slice of input signals, which input signals comprise a plurality M in number of parallel bit streams x.sub.1, x.sub.2, . . . x.sub.M. This apparatus is assumed to receive a first operating voltage V.sub.DD, a second operating voltage V.sub.SS, and a third operating voltage (V.sub.SS +V.sub.DD)/2 midway between V.sub.SS and V.sub.DD. V.sub.DD and V.sub.SS are presumed to be relatively positive and relatively negative respective to each other.
Each input voltage signal x.sub.i is applied as control signal to a respective multiplexer MX.sub.i and to a respective multiplexer MX.sub.(M+i). Multiplexer MX.sub.i responds to x.sub.i being a ONE to apply the V.sub.DD first operating voltage to an input (column) line IL.sub.i and responds to x.sub.i being a ZERO to apply the V.sub.SS second operating voltage to the input line IL.sub.i. Multiplexer MX.sub.(M+i) responds to x.sub.i being a ONE to apply the V.sub.SS second operating voltage to an input (column) line IL.sub.(M+i) and responds to x.sub.i being a ZERO to apply the V.sub.DD first operating voltage to the input line IL.sub.(M+i).
A capacitor C.sub.i,j has a first plate connected to the input line IL.sub.i and has a second plate connected to an output (row) line OL.sub.j. A capacitor C.sub.(M+i),j has a first plate connected to the input line IL.sub.(M+i) and has a second plate connected to the output line OL.sub.j. Capacitor C.sub.i,j and capacitor C.sub.(M+i),j together are considered as a pair, for providing in effect a single weighting capacitor, the capacitance of which is the difference in the capacitances of capacitor C.sub.i,j and capacitor C.sub.(M+i),j between their respective first and second plates. These weighting capacitors may be considered to be arrayed by row and by column. The charge placed on the output line OL.sub.j by all the weighting capacitors connecting thereto is sensed on a single-ended basis by a respective charge-sensing amplifier RQS.sub.j. Each of the charge-sensing amplifiers RQS.sub.j is shown as a respective Miller integrator comprising an operational amplifier and Miller feedback capacitors.
During reset or zeroing of all the charge-sensing amplifiers RQS.sub.j, each of the x.sub.i input voltages is a logic ZERO. This applies V.sub.SS to the plates of capacitors C.sub.i,j connected from the multiplexers MX.sub.i and applies V.sub.DD to the plates of capacitors C.sub.(M+i),j connected from the multiplexers MX.sub.(M+i). The total capacitance on each output line OL.sub.j is caused to be the same as on each of the other output lines by a respective shunt capacitor C.sub.0,j to signal ground, which capacitor either has capacitance that is so large as to overwhelm the other capacitances on the output line OL.sub.j or preferably that complements the other capacitances on the output line OL.sub.j. Causing the total capacitance on each output line OL.sub.j to be the same as on each of the other output lines makes the sensitivities of the charge-sensing amplifiers RQS.sub.j to their respective inputs uniform, presuming them to be Miller integrators of identical design. If the capacitances of capacitor C.sub.i,j and capacitor C.sub.(M+i),j between their respective first and second plates sum to a prescribed standard value, for the complete selection range of i and j, the sensitivities of the charge-sensing amplifiers RQS.sub.j to their respective inputs are uniform without need for a respective shunt capacitor C.sub.0,j to signal ground for each output line OL.sub.j, presuming the charge-sensing amplifiers RQS.sub.j to be Miller integrators of identical design.
After reset or zeroing, when x.sub.i bits for different i may each be ZERO or ONE, each x.sub.i bit that is a ZERO creates no change in charge condition on any of the output lines OL.sub.j. A bit x.sub.i that is a ONE creates an incremental change in charge on an output line OL.sub.j that, in accordance with Coulomb's Law, is equal to (V.sub.SS -V.sub.DD)/2 times the difference in the capacitances of capacitors C.sub.i,j and C.sub.(M+i),j between their respective first and second plates. The sum of these incremental charges accumulates on the output line OL.sub.j and is sensed by the charge-sensing amplifier RQS.sub.j.
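The charge balance just described can be put in numbers with the following Python sketch, a behavioral model only; it simply applies the expression above for the incremental charge, and the supply values and capacitances are arbitrary.

    V_DD, V_SS = 5.0, 0.0                       # example first and second operating voltages

    def column_charge(x_bits, C_pos, C_neg):
        """Charge accumulated on one output line OL_j for one bit slice.
        Each ONE bit contributes (V_SS - V_DD)/2 times (C_i,j - C_(M+i),j);
        ZERO bits leave the charge condition unchanged."""
        dq = 0.0
        for x, c_p, c_n in zip(x_bits, C_pos, C_neg):
            if x:
                dq += 0.5 * (V_SS - V_DD) * (c_p - c_n)   # Coulomb's Law: Q = C * V
        return dq

    C_pos = [1.0, 3.0, 2.0]                     # capacitances C_(i,j), arbitrary units
    C_neg = [3.0, 1.0, 2.0]                     # capacitances C_(M+i),j; each pair sums to 4.0
    print(column_charge([1, 0, 1], C_pos, C_neg))   # charge sensed by RQS_j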
FIG. 3 shows a modification of the FIG. 2 apparatus in which the multiplexer MX.sub.i and the multiplexer MX.sub.(M+i) respond to x.sub.i being a ZERO to apply the third operating voltage (V.sub.SS +V.sub.DD)/2 to the input line IL.sub.i and to the input line IL.sub.(M+i), respectively. During reset, the weighting capacitors C.sub.i,j and C.sub.(M+i),j will be charged to relatively small bias voltages between their plates, rather than to bias voltages of amplitudes close to (V.sub.SS +V.sub.DD)/2. The FIG. 2 apparatus is advantageous over the FIG. 3 apparatus in that, in the FIG. 2 apparatus, accuracy of the third operating voltage (V.sub.SS +V.sub.DD)/2 being exactly midway between the first operating voltage V.sub.DD and the second operating voltage V.sub.SS is not necessary for accuracy of the partial weighted summation results.
The FIG. 4 apparatus also performs a plurality of weighted summation procedures in parallel on each successive bit slice of input signals, which input signals comprise a plurality M in number of parallel bit streams x.sub.1, x.sub.2, x.sub.3, . . . x.sub.M. Logic inverters INV.sub.1, INV.sub.2, INV.sub.3, . . . INV.sub.M respond to the current bits x.sub.1, x.sub.2, x.sub.3, . . . x.sub.M respectively with their respective logic complements. (The current bits x.sub.1, x.sub.2, x.sub.3, . . . x.sub.M are assumed to be supplied in accordance with the positive logic convention.) The FIG. 4 apparatus also is assumed to receive a relatively positive first operating voltage V.sub.DD, a relatively negative second operating voltage V.sub.SS, and a third operating voltage (V.sub.SS +V.sub.DD)/2 midway between V.sub.SS and V.sub.DD.
The logic inverter INV.sub.i responds to x.sub.i being a ZERO to apply V.sub.DD to an input line IL.sub.i and to x.sub.i being a ONE to apply V.sub.SS to the input line IL.sub.i. As in FIGS. 2 and 3, the charge-sensing amplifier RQS.sub.j is one of a plurality, N in number, of identical charge-sensing amplifiers for sensing the difference in charges accumulated on a respective pair of output lines. In FIG. 4 the charge-sensing amplifier RQS.sub.j is arranged for differentially sensing charge and is connected to sense the difference in charges accumulated on output lines OL.sub.j and OL.sub.(N+j). The output lines OL.sub.j and OL.sub.(N+j) are charged from each input line IL.sub.i via a capacitor C.sub.i,j and via a capacitor C.sub.i,(N+j), respectively. Capacitor C.sub.i,j and capacitor C.sub.i,(N+j) together are considered as a pair, for providing in effect a single weighting capacitor, the capacitance of which is the difference in the capacitances of capacitor C.sub.i,j and capacitor C.sub.i,(N+j) between their respective first and second plates. The total capacitance on each output line OL.sub.j is maintained the same as on each of the other output lines by a respective shunt capacitor C.sub.0,j to signal ground; and the total capacitance on each output line OL.sub.(N+j) is maintained the same as on each of the other output lines by a respective shunt capacitor C.sub.0,(N+j) to signal ground.
Where the capacitances of capacitor C.sub.i,j and capacitor C.sub.i,(N+j) between their respective first and second plates are to be alterable responsive to digital programming signals, it is preferred that the capacitances of capacitor C.sub.i,j and capacitor C.sub.i,(N+j) between their respective first and second plates sum to a prescribed standard value for the complete selection range of i and j. It is further preferred that each pair of capacitors C.sub.i,j and C.sub.i,(N+j) have a corresponding further pair of capacitors C.sub.(M+i),j and C.sub.(M+i),(N+j) associated therewith, capacitor C.sub.(M+i),j having a capacitance equal to that of capacitor C.sub.i,(N+j) and connecting output line OL.sub.j to a point of connection P.sub.i,j, and capacitor C.sub.(M+i),(N+j) having a capacitance equal to that of capacitor C.sub.i,j and connecting output line OL.sub.(N+j) to the same point of connection P.sub.i,j. If all the points of connection P.sub.i,j connect to signal ground, the capacitors C.sub.(M+i),j for all values of i together provide for each value of j the respective shunt capacitor C.sub.0,j to signal ground, and the capacitors C.sub.(M+i),(N+j) for all values of i together provide for each value of j the respective shunt capacitor C.sub.0,(N+j) to signal ground. This is taught in greater detail in U.S. Pat. No. 5,039,871.
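The bookkeeping behind this preferred arrangement can be checked with a few lines of Python (an illustrative sketch; the standard value and the capacitances are arbitrary): each companion pair mirrors its weighting pair, and with the points of connection P.sub.i,j grounded, every output line carries the same total capacitance, M times the prescribed standard value.

    C_STD = 4.0                                        # prescribed standard value per capacitor pair

    def companion_pair(c_ij, c_iNj):
        """Return the companion pair (C_(M+i),j, C_(M+i),(N+j)) = (C_i,(N+j), C_i,j)
        that is tied to signal ground through the point of connection P_i,j."""
        assert abs((c_ij + c_iNj) - C_STD) < 1e-9      # weighting pair sums to the standard value
        return c_iNj, c_ij

    pairs = [(1.0, 3.0), (2.5, 1.5), (4.0, 0.0)]       # (C_i,j, C_i,(N+j)) for one value of j, M = 3
    shunt_OLj = sum(companion_pair(a, b)[0] for a, b in pairs)     # acts as C_0,j
    shunt_OLNj = sum(companion_pair(a, b)[1] for a, b in pairs)    # acts as C_0,(N+j)
    total_OLj = sum(a for a, b in pairs) + shunt_OLj
    total_OLNj = sum(b for a, b in pairs) + shunt_OLNj
    assert total_OLj == total_OLNj == len(pairs) * C_STD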
The FIG. 4 apparatus may be modified to replace logic inverters INV.sub.1, INV.sub.2, INV.sub.3, . . . INV.sub.M with non-inverting driver amplifiers. In such case the other output connections of the differential-output operational amplifiers in the charge sensing amplifiers RQS.sub.1, RQS.sub.2, RQS.sub.3, . . . RQS.sub.N are used to supply the y.sub.1, y.sub.2, y.sub.3, . . . y.sub.N partial weighted summations.
The FIG. 4 apparatus may alternatively be modified to augment logic inverters INV.sub.1, INV.sub.2, INV.sub.3, . . . INV.sub.M with non-inverting driver amplifiers DA.sub.1, DA.sub.2, DA.sub.3, . . . DA.sub.M respectively and to use each non-inverting driver amplifier DA.sub.i to drive the points of connection P.sub.i,j for all values of j. This provides for full-capacitor bridge drives to each charge-sensing amplifier RQS.sub.j, rather than half-capacitor bridge drives. The advantage of doing this is that the common-mode voltage on the output (row) lines OL.sub.j and OL.sub.(N+j) is zero, so one does not have to rely as much on the common-mode suppression of the charge-sensing amplifier RQS.sub.j to keep the integration of charge within the operating supply range of that amplifier.
FIGS. 6A and 6B when arranged in FIG. 6 provide a schematic diagram of a neural net layer wherein the weighting capacitors are constructed on a binary-digit-sliced basis as described in U.S. patent application Ser. No. 525,931, filed May 21, 1990 by W. E. Engeler, entitled "WEIGHTED SUMMATION CIRCUITS HAVING DIFFERENT-WEIGHT RANKS OF CAPACITIVE STRUCTURES" and assigned to General Electric Company. In FIG. 6A the bit-sliced (column) input signals x.sub.1, x.sub.2, . . . x.sub.M are applied to a weighted summation network WSN.sub.1 that comprises a rank of capacitive weighting structures of relatively high significance; and in FIG. 6B the bit-sliced (column) input signals x.sub.1, x.sub.2, . . . x.sub.M are applied to a weighted summation network WSN.sub.3 that comprises a rank of capacitive weighting structures of relatively low significance. In FIG. 6A the analog weighted sums of the bit-sliced input signals x.sub.1, x.sub.2, . . . x.sub.M supplied from the weighted summation network WSN.sub.1, which are of relatively higher significances, are digitized in respective analog-to-digital (or A-to-D) converters RADC.sub.1, RADC.sub.2, RADC.sub.3, . . . RADC.sub.N to generate bit-sliced digital signals y.sub.1, y.sub.2, y.sub.3, . . . y.sub.N, respectively. Analogously, in FIG. 6B the analog weighted sums of the bit-sliced input signals x.sub.1, x.sub.2, . . . x.sub.M supplied from the weighted summation network WSN.sub.3, which are of relatively lower significances, are digitized in respective analog-to-digital (or A-to-D) converters RADC.sub.(N+1), RADC.sub.(N+2), RADC.sub.(N+3), . . . RADC.sub.2N to generate bit-sliced digital signals y.sub.(N+1), y.sub.(N+2), y.sub.(N+3), . . . y.sub.2N, respectively.
In FIG. 6A the bit-sliced digital signals y.sub.1, y.sub.2, y.sub.3, . . . y.sub.N, which are partial weighted summations, are applied to a weighted summation network WSN.sub.2 (as will be described in greater detail further on with reference to FIG. 5). The weighted summation network WSN.sub.2 combines each set of partial weighted summations y.sub.j to generate a respective relatively more significant component .SIGMA.y.sub.j of the final weighted summation supplied as the sum signal output of a digital adder ADD.sub.j, which respective relatively more significant component .SIGMA.y.sub.j is applied as augend input signal to the adder ADD.sub.j. Analogously, in FIG. 6B the bit-sliced digital signals y.sub.(N+1), y.sub.(N+2), y.sub.(N+3), . . . y.sub.2N, which are partial weighted summations, are applied to a weighted summation network WSN.sub.4 (as will be described in greater detail further on with reference to FIG. 5). The weighted summation network WSN.sub.4 combines each set of partial weighted summations y.sub.(N+j) to generate a respective relatively less significant component .SIGMA.y.sub.(N+j) of the final weighted summation supplied as the sum signal output of a digital adder ADD.sub.j, which respective relatively less significant component .SIGMA.y.sub.(N+j) is applied after appropriate wired shift right WSR.sub.j as addend input signal to the adder ADD.sub. j. (Right shift of a digital signal by P binary places is reckoned as being a division of the digital signal by 2.sup.P.)
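Numerically, the combination performed by each adder ADD.sub.j amounts to the following Python sketch, in which the shift count P and the operand values are arbitrary assumptions (here an 8-bit weight is imagined as split into a more significant and a less significant 4-bit rank).

    def combine_ranks(sigma_more, sigma_less, P):
        """Sum output of adder ADD_j: the relatively more significant component plus
        the relatively less significant component attenuated by a wired shift right
        of P binary places, i.e. divided by 2**P."""
        return sigma_more + (sigma_less >> P)

    print(combine_ranks(sigma_more=352, sigma_less=96, P=4))   # 352 + 96/16 = 358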
The sum signals from the adders ADD.sub.1, ADD.sub.2, ADD.sub.3, . . . ADD.sub.N are supplied as input signals to respective digital non-linear processors NLP.sub.1, NLP.sub.2, NLP.sub.3, . . . NLP.sub.N, which respond to their respective input signals to generate a plurality, N in number, of digital axonal output responses. As shown in FIG. 6A, these digital axonal output responses can be applied to succeeding circuitry NEXT. For example, this succeeding circuitry NEXT may be an output neural net layer where the neural net layer thus far described is a hidden neural net layer.
Where the input signals to the FIG. 2, FIG. 3 or FIG. 4 weighted summation apparatus comprise single-bit words, the N signals y.sub.1, y.sub.2, y.sub.3, . . . y.sub.N can per FIG. 6A, be digitized in respective analog-to-digital (or A-to-D) converters RADC.sub.1, RADC.sub.2, RADC.sub.3, . . . RADC.sub.N, for application to adders ADD.sub.1, ADD.sub.2, ADD.sub.3, . . . ADD.sub.N, which supply their sum signals to respective digital non-linear processors NLP.sub.1, NLP.sub.2, NLP.sub.3, . . . NLP.sub.N to generate a plurality, N in number, of digital axonal output responses. There is no need for the further, digital weighted summation apparatus WSN.sub.2. Analogously, the N signals y.sub.(N+1), y.sub.(N+2), y.sub.(N+3), . . . y.sub.2N can, per FIG. 6B, be digitized in respective analog-to-digital (or A-to-D) converters RADC.sub.(N+1), RADC.sub.(N+2), RADC.sub.(N+3), . . . RADC.sub.2N, for application with appropriate wired shifts right WSR.sub.1, WSR.sub.2, WSR.sub.3, . . . WSR.sub.N to adders ADD.sub.1, ADD.sub.2, ADD.sub.3, . . . ADD.sub.N. There is no need for the further, digital weighted summation apparatus WSN.sub.4.
Where only a single rank of capacitive weighting structures--that rank in the weighted summation network WSN.sub.1 --is used, all the elements shown in FIG. 6B are dispensed with. The adders ADD.sub.1, ADD.sub.2, ADD.sub.3, . . . ADD.sub.N are supplied zero-valued addend signals rather than being supplied .SIGMA.y.sub.(N+1), .SIGMA.y.sub.(N+2), .SIGMA.y.sub.(N+3), . . . .SIGMA.y.sub.2N as respective addend signals. Alternatively, the adders ADD.sub.1, ADD.sub.2, ADD.sub.3, . . . ADD.sub.N may be omitted, with the digital non-linear processors NLP.sub.1, NLP.sub.2, NLP.sub.3, . . . NLP.sub.N receiving their respective input signals directly from the weighted summation network WSN.sub.2, or--where the input signals x.sub.1, x.sub.2, . . . x.sub.M to the weighted summation network WSN.sub.1 comprise single-bit words, so that the weighted summation network WSN.sub.2 is not used--directly from the A-to-D converters RADC.sub.1, RADC.sub.2, RADC.sub.3, . . . RADC.sub.N.
FIG. 5 shows further, digital weighted summation apparatus that can be used with the analog weighted summation apparatus of FIG. 2, 3 or 4 to provide for the weighted summation of digital input signals x.sub.1, x.sub.2, . . . x.sub.M having plural-bit words. The digital signals are placed in bit-serial form and in word alignment for application to the weighted summation apparatus of FIG. 2, 3 or 4, which processes the bit-serial digital signals on a bit-slice basis to generate B partial weighted summation results, B being the number of bits per bit-serial arithmetic word. The successive bit slices of the plural-bit words of the digital input signals x.sub.1, x.sub.2, . . . x.sub.M are processed in order of increasing significance, with the sign bit slice being processed last in each word. A SIGN BIT FLAG that is normally ZERO is provided, which SIGN BIT FLAG goes to ONE to signal the parallel occurrence of the sign bits in each successive group of M input signal words at the time the partial weighted summation of greatest significance is being generated, supposing the digital signals to be flows of two's complement arithmetic words.
FIG. 5 shows the plurality N in number of A-to-D converters RADC.sub.1, RADC.sub.2. . . RADC.sub.N used to convert the analog y.sub.1, y.sub.2, . . . y.sub.N partial weighted summations from the charge sensing amplifiers RQS.sub.1, RQS.sub.2, RQS.sub.3, . . . RQS.sub.N to digital format and a weighted summation network for combining the sequences of digitized y.sub.1, y.sub.2, . . . y.sub.N partial weighted summations to generate final weighted summations .SIGMA.y.sub.1, .SIGMA.y.sub.2, . . . .SIGMA.y.sub.N. More particularly, the digitized y.sub.1, y.sub.2, . . . y.sub.N partial weighted summations from the FIG. 5 A-to-D converters RADC.sub.1, RADC.sub.2 . . . RADC.sub.N are assumed to be respective two's complement numbers. The bit resolution afforded by the A-to-D converters RADC.sub.1, RADC.sub.2 . . . RADC.sub.N preferably increases by one bit place each successive bit slice of the digital word; in any case, each successive partial weighted summation within the same digital word is weighted by an additional factor of two respective to the previous partial weighted summation. A plurality, N in number, of accumulators RACC.sub.1, RACC.sub.2, . . . RACC.sub.N are used for generating respective ones of N respective final weighted summation results .SIGMA.y.sub.j. Each of the successive partial weighted summation results y.sub.j sequentially supplied by the A-to-D converter RADC.sub.i is routed through a respective selective complementor RCMP.sub.i. The selective complementor RCMP.sub.i complements each bit of the partial weighted summation result the A-to-D converter RADC.sub.i generates when the SIGN BIT FLAG is ONE, then augments the result of this one's complementing procedure by adding to it a number that has a ONE as its least significant bit and has more significant bits all of which are ZEROs. This augmentation is simply carried out by applying the SIGN BIT FLAG as a carry to the adder in the accumulator RACC.sub.i. When the SIGN BIT FLAG is ZERO, the selective complementor RCMP.sub.i transmits to the accumulator RACC.sub.i, without complementation, each bit of the partial weighted summation result the A-to-D converter RADC.sub.i generates. Each of these accumulators RACC.sub.1, RACC.sub.2, . . . RACC.sub.N accumulates for each word the successive partial weighted summation results y.sub.j sequentially supplied as one of the output signals of the weighted summation apparatus of FIG. 2, 3 or 4, to generate a final weighted summation result .SIGMA.y.sub.j. (In alternative embodiments of the invention where the bit-serial digital input signals supplying the parallel bit streams x.sub.1, x.sub.2, x.sub.3, . . . x.sub.M are invariably of one polarity and are processed as unsigned numbers, of course, the selective complementors RCMP.sub.1, RCMP.sub.2, . . . RCMP.sub.N need not be included in the respective connections of the A-to-D converters RADC.sub.1, RADC.sub.2 . . . RADC.sub.N to the accumulators RACC.sub.1, RACC.sub.2, . . . RACC.sub.N.)
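The selective complementing and accumulation just described can be modeled by the Python sketch below for one channel (A-to-D converter through selective complementor to accumulator); it is illustrative only and assumes ideal analog-to-digital conversion. Because Python integers are unbounded, the one's complement plus the SIGN BIT FLAG carry is written simply as (~p)+1, which is arithmetically the negation the hardware achieves within its finite word width.

    def accumulate_word(partial_sums):
        """Combine the digitized partial weighted summations of one word into the
        final weighted summation SIGMA y_j.  Partial results arrive least
        significant bit slice first; the sign bit slice arrives last and is
        marked by the SIGN BIT FLAG."""
        acc = 0
        for b, p in enumerate(partial_sums):
            sign_bit_flag = (b == len(partial_sums) - 1)
            if sign_bit_flag:
                term = (~p) + 1        # selective complementor plus carry: adds the negative
            else:
                term = p
            acc += term << b           # each slice carries twice the weight of the one before
        return acc

    def partial_sums_for(words, weights, bits=4):
        """Digitized per-bit-slice partial weighted summations (ideal conversion assumed)."""
        return [sum(wt * ((w >> b) & 1) for wt, w in zip(weights, words))
                for b in range(bits)]

    # 4-bit two's-complement inputs {5, -3, 7} against the weight pattern {3, -5, 2}
    assert accumulate_word(partial_sums_for([5, -3, 7], [3, -5, 2])) == 3*5 + (-5)*(-3) + 2*7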
The respective .SIGMA.y.sub.j final weighted summation result generated by each accumulator RACC.sub.i is latched by a respective clocked latch RLCH.sub.i to be temporarily stored until such time as the next final weighted summation result .SIGMA.y.sub.j is generated by the accumulator RACC.sub.i. The respective .SIGMA.y.sub.j final weighted summation result from latch RLCH.sub.i is supplied as augend input signal to a respective digital adder ADD.sub.j, which may receive as an addend a final weighted summation result .SIGMA.y.sub.j ' of lesser significance from a further, less significant rank of capacitors for weighting the plurality M in number of parallel bit streams x.sub.1, x.sub.2, . . . x.sub.M, which final weighted summation result .SIGMA.y.sub.j ' of lesser significance is appropriately attenuated respective to .SIGMA.y.sub.j by a wired shift right WSR.sub.i. The sum output signal z.sub.i from the adder ADD.sub.i is supplied to the ensuing digital non-linear processor NLP.sub.i ; and, as shown in FIG. 6A, the non-linear response from the digital non-linear processor NLP.sub.i is applied to the succeeding circuitry NEXT.
Where the input signals to the FIG. 2, FIG. 3 or FIG. 4 weighted summation apparatus comprise plural-bit words, the N signals z.sub.1, z.sub.2, . . . z.sub.N from a succeeding FIG. 5 weighted summation apparatus can then, per FIG. 6, be applied to an array of respective digital non-linear processors NLP.sub.1, NLP.sub.2, . . . NLP.sub.N to generate a plurality, N in number, of digital axonal output responses for application to the succeeding circuitry NEXT. For example, this succeeding circuitry may be an output neural net layer where the neural net layer thus far described is a hidden neural net layer. FIG. 7 shows circuitry that can be used to implement a digital non-linear processor NLP.sub.j as may be used for each of the digital non-linear processors NLP.sub.1, NLP.sub.2, NLP.sub.3, . . . NLP.sub.N of FIG. 6. The FIG. 7 digital non-linear processor is used to obtain a transfer function as diagrammed in FIG. 8. The FIG. 8 transfer function approximates a sigmoidal symmetrically non-linear response by five straight line segments 11, 13, 15, 17 and 19. One skilled in the art of digital circuit design will, after being acquainted with the following description of the FIG. 7 digital non-linear processor, be able to design more complex digital non-linear processors approximating a sigmoidal transfer characteristic using an odd number greater than five of straight line segments.
In the FIG. 8 graph of the transfer function of the FIG. 7 digital non-linear processor, the straight line segment 15 passes through zero input signal, extending between inflection points 14 and 16 which occur at input signals of values -k.sub.1 and k.sub.1, respectively. The input signal has relatively small absolute magnitudes in the range defined by the straight line segment 15, which segment of the transfer characteristic has the steepest slope and passes through the origin of the transfer characteristic graph.
The straight line segment 11 extending left and down from inflection point 12 and the straight line segment 19 extending right and up from inflection point 18 have the shallowest slope. The input signal has relatively large absolute magnitudes in the ranges defined by the straight line segments 11 and 19. If the straight line segment 11 were extended without change in its direction right and up from inflection point 12 it would cross the input axis at an intercept value of -k.sub.3 ; and if the straight line segment 19 were extended without change in its direction left and down from inflection point 18 it would cross the input axis at an intercept value of +k.sub.3.
The input signal has intermediate absolute magnitudes in the ranges defined by the straight line segments 13 and 17. The straight line segment 13 extends between the inflection points 12 and 14 and, if it were extended without change in its direction beyond inflection point 14, would cross the output signal axis at an intercept value of -k.sub.4. The straight line segment 17 extends between the inflection points 16 and 18 and, if it were extended without change in its direction beyond inflection point 16, would cross the output signal axis at an intercept value of +k.sub.4.
In the FIG. 7 digital non-linear processor the digital final weighted summation result .SIGMA.y.sub.j is supplied to a respective absolute value circuit AVC.sub.j, where the most significant, sign bit conditions a selective complementor RCMP.sub.(N+j) to complement the less significant bits of .SIGMA.y.sub.j when .SIGMA.y.sub.j is negative. The output signal from the selective complementor RCMP.sub.(N+j) is supplied as augend signal to a digital adder ADD.sub.(N+j), which is supplied the most significant bit of .SIGMA.y.sub.j as an addend signal. The sum output of the adder ADD.sub.(N+j) is the .vertline..SIGMA.y.sub.j .vertline. output signal of the absolute value circuit AVC.sub.j.
A digital subtractor SUB.sub.j is used for subtracting a positive constant value k.sub.1 from .vertline..SIGMA.y.sub.j .vertline. to determine whether .vertline..SIGMA.y.sub.j .vertline. being <k.sub.1 should be on a portion of the non-linear transfer characteristic having steepest slope or whether .vertline..SIGMA.y.sub.j .vertline. being .gtoreq.k.sub.1 should be on a portion of the non-linear transfer characteristic having a shallower slope. If the most significant, sign bit of the difference output signal of the subtractor SUB.sub.j is a ONE, indicating .vertline..SIGMA.y.sub.j .vertline.<k.sub.1, a slope multiplexer SMX.sub.j selects the steepest transfer characteristic slope as the multiplier signal for a digital multiplier MULT.sub.j that is supplied .vertline..SIGMA.y.sub.j .vertline. as multiplicand signal. The digital product is then added in a digital adder ADD.sub.(2N+j) to a zero-value intercept, as selected by an intercept multiplexer .sub.I MX.sub.j responsive to the sign bit of the difference output signal of subtractor SUB.sub.j being a ONE, to generate the non-linear response .vertline.z.sub.j .vertline. to .vertline..SIGMA.y.sub.j .vertline..
A digital subtractor SUB.sub.(N+j) is used for subtracting a positive constant value k.sub.2 from .vertline..SIGMA.y.sub.j .vertline. to determine whether .vertline..SIGMA.y.sub.j .vertline. being .gtoreq.k.sub.2 should be on a portion of the non-linear transfer characteristic having shallowest slope or whether .vertline..SIGMA.y.sub.j .vertline. being <k.sub.2 should be on a portion of the non-linear characteristic having a steeper slope. If the most significant, sign bit of the difference output signal of the subtractor SUB.sub.(N+j) is a ZERO, indicating .vertline..SIGMA.y.sub.j .vertline..gtoreq.k.sub.2, the slope multiplexer SMX.sub.j selects the shallowest transfer characteristic slope as the multiplier signal for the digital multiplier MULT.sub.j ; and the intercept multiplexer .sub.I MX.sub.j selects an intercept of value k.sub.3 somewhat smaller than k.sub.2 to be added to the product from multiplier MULT.sub.j in adder ADD.sub.(2N+j) to generate .vertline.z.sub.j .vertline..
Responsive to the most significant bits of the difference signals from the subtractors SUB.sub.j and SUB.sub.(N+j) being ZERO and ONE respectively, indicative that .vertline..SIGMA.y.sub.j .vertline. should be on a portion of the non-linear transfer characteristic having a slope intermediate between the steepest and shallowest slopes, the slope multiplexer SMX.sub.j selects that intermediate slope as multiplier signal to the multiplier MULT.sub.j, and the intercept multiplexer .sub.I MX.sub.j selects an intercept of value k.sub.4 somewhat smaller than k.sub.1 as the addend signal supplied to the adder ADD.sub.(2N+j).
The non-linear response .vertline.z.sub.j .vertline. to .vertline..SIGMA.y.sub.j .vertline. is converted to the z.sub.j response to .SIGMA.y.sub.j by processing .vertline.z.sub.j .vertline. through a selective complementor CMP.sub.(2N+j) that complements each bit of .vertline.z.sub.j .vertline. when the most significant, sign bit of .SIGMA.y.sub.j is a ONE indicating its negativity; inserting the most significant bit of .SIGMA.y.sub.j as sign bit before the selective complementor CMP.sub.(2N+j) output signal; and adding the resultant in a digital adder ADD.sub.(3N+j) to the most significant bit of .SIGMA.y.sub.j. The sum signal from the digital adder ADD.sub.(3N+j) is then used to load a parallel-in/serial-out register RPISO.sub.j for the j.sup.th row, which supplies z.sub.j in bit-serial format to the succeeding circuitry NEXT (per FIG. 6).
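The slope and intercept selections just described can be summarized behaviorally. The following sketch is illustrative only and is not part of the patented circuitry; it assumes numeric values for the breakpoints k.sub.1 and k.sub.2, the intercepts k.sub.3 and k.sub.4 and the three slopes, chosen merely so that adjacent straight line segments of FIG. 8 meet at the inflection points.

  # Behavioral sketch of the FIG. 7 digital non-linear processor NLP_j.
  # All constants are assumed values; the patent leaves the actual choices
  # to the designer.
  K1, K2 = 64, 192                        # breakpoints on the |input| axis, k1 < k2
  STEEP, MID, SHALLOW = 1.0, 0.5, 0.125   # slopes of segments 15, 13/17, 11/19
  K4 = (STEEP - MID) * K1                 # intercept of segments 13 and 17, k4 < k1
  K3 = K4 + (MID - SHALLOW) * K2          # intercept of segments 11 and 19, k3 < k2

  def nlp(sigma_y):
      mag = abs(sigma_y)                  # absolute value circuit AVC_j
      if mag < K1:                        # steepest segment 15, through the origin
          slope, intercept = STEEP, 0
      elif mag >= K2:                     # shallowest segments 11 and 19
          slope, intercept = SHALLOW, K3
      else:                               # intermediate segments 13 and 17
          slope, intercept = MID, K4
      z_mag = slope * mag + intercept             # MULT_j followed by ADD_(2N+j)
      return z_mag if sigma_y >= 0 else -z_mag    # restore the sign of sigma_y_j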
During back-propagation training, the value of the slope selected by the slope multiplexer SMX.sub.j responsive to the .vertline..SIGMA.y.sub.j .vertline. generated by a set of prescribed synapse input signals can be temporarily stored in a latch SLCH.sub.j responsive to a LATCH SLOPE command. This operation will be considered as part of an overall description of back-propagation training further on in this specification. In FIGS. 13-16 and 20-24, the slope latch SLCH.sub.j is shown external to the digital non-linear processor NLP.sub.j with which it is associated.
FIG. 9 shows a modification of the FIG. 2 or FIG. 3 apparatus in which the multiplexers MX.sub.1 . . . MX.sub.M, MX.sub.(M+1) . . . MX.sub.2M that each select from between two operating voltages the voltage applied to a corresponding one of the input lines IL.sub.1 . . . IL.sub.M, IL.sub.(M+1) . . . IL.sub.2M are replaced by multiplexers MX.sub.1 ' . . . MX.sub.M ', MX.sub.(M+1) ' . . . MX.sub.2M ' that each select from among the V.sub.DD, V.sub.SS and (V.sub.SS +V.sub.DD)/2 operating voltages the voltage applied to a corresponding one of the input lines IL.sub.1 . . . IL.sub.M, IL.sub.(M+1) . . . IL.sub.2M. The current condition of the SIGN BIT FLAG is applied to each of the multiplexers MX.sub.i ' and MX.sub.(M+i) ' as its first control bit, and the current bit of a respective digital input signal x.sub.i is applied to each of the multiplexers MX.sub.i ' and MX.sub.(M+i) ' as its second control bit.
For all bits of x.sub.i except its sign bits, the SIGN BIT FLAG is a ZERO. The SIGN BIT FLAG being a ZERO conditions multiplexer MX.sub.i ' to respond to x.sub.i being a ONE to apply the V.sub.DD first operating voltage to an input line IL.sub.i and to respond to x.sub.i being a ZERO to apply the third operating voltage (V.sub.SS +V.sub.DD)/2 to the input line IL.sub.i. The SIGN BIT FLAG being a ZERO conditions multiplexer MX.sub.(M+i) ' to respond to x.sub.i being a ONE to apply the V.sub.SS second operating voltage to an input line IL.sub.(M+i) and to respond to x.sub.i being a ZERO to apply the third operating voltage (V.sub.SS +V.sub.DD)/2 to the input line IL.sub.(M+i).
When the sign bits of x.sub.i occur, the SIGN BIT FLAG is a ONE. The SIGN BIT FLAG being a ONE conditions multiplexer MX.sub.i ' to respond to x.sub.i being a ONE to apply the V.sub.SS second operating voltage to an input line IL.sub.i and to respond to x.sub.i being a ZERO to apply the third operating voltage (V.sub.SS +V.sub.DD)/2 to the input line IL.sub.i. The SIGN BIT FLAG being a ONE conditions multiplexer MX.sub.(M+i) ' to respond to x.sub.i being a ONE to apply the V.sub.DD first operating voltage to an input line IL.sub.(M+i) and to respond to x.sub.i being a ZERO to apply the third operating voltage (V.sub.SS +V.sub.DD)/2 to the input line IL.sub.(M+i). Accordingly, the reversal of sign in the weighting of the parallel sign bits of the bit-serial synapse input signals is done while performing the partial weighted summations.
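The selection rules of the two preceding paragraphs can be tabulated compactly. The following sketch is a behavioral illustration only, with hypothetical names for the three operating voltages; it is not a circuit-level description of the FIG. 9 multiplexers.

  # Behavioral sketch of the three-level drive applied to an input line pair
  # by MX_i' and MX_(M+i)' in FIG. 9.  VDD, VSS and VMID are illustrative
  # stand-ins for the first, second and third operating voltages.
  VDD, VSS = 1.0, 0.0
  VMID = (VSS + VDD) / 2                  # the (V_SS + V_DD)/2 operating voltage

  def drive_input_line_pair(bit, sign_bit_flag):
      """Return the voltages placed on IL_i and IL_(M+i) for one bit of x_i."""
      if bit == 0:
          return VMID, VMID               # a ZERO bit drives both lines to mid-rail
      if sign_bit_flag == 0:
          return VDD, VSS                 # magnitude bit that is a ONE
      return VSS, VDD                     # sign bit that is a ONE: weighting reversed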
FIG. 10 shows a modification of the FIG. 4 apparatus in which the multiplexers MX.sub.1, MX.sub.2, MX.sub.3, . . . MX.sub.M, that each select from between two operating voltages the voltage applied to a corresponding one of the input lines IL.sub.1, IL.sub.2, IL.sub.3, . . . IL.sub.M are replaced by multiplexers MX.sub.1 ', MX.sub.2 ', . . . MX.sub.M ' that each select from among the V.sub.DD, V.sub.SS and (V.sub.SS +V.sub.DD)/2 operating voltages the voltage applied to a corresponding one of the input lines IL.sub.1, IL.sub.2, IL.sub.3, . . . IL.sub.M. The multiplexers MX.sub.1 ', MX.sub.2 ', . . . MX.sub.M ' are controlled in the FIG. 10 apparatus similarly to the way they are in the FIG. 9 apparatus, again to provide the reversal of sign in the weighting of the parallel sign bits of the bit-serial synapse input signals while performing the partial weighted summations.
FIG. 11 shows the apparatus for performing the final weighted summation of partial weighted summation results from the FIG. 9 or 10 apparatus. The FIG. 11 apparatus differs from the FIG. 5 apparatus in that: there are no selective complementors RCMP.sub.1, RCMP.sub.2, . . . RCMP.sub.N included in the respective connections of the A-to-D converters RADC.sub.1, RADC.sub.2 . . . RADC.sub.N to the accumulators RACC.sub.1, RACC.sub.2, . . . RACC.sub.N ; and the accumulators invariably have ZERO carry inputs.
FIG. 12 is a schematic diagram of a neural net layer similar to the neural net layer of FIG. 6, but includes either of the FIG. 9 and FIG. 10 apparatuses as a weighted summation network WSN.sub.1 ' for performing a plurality of weighted summation procedures, so the reversal of sign in the weighting of the parallel sign bits of the bit-serial synapse input signals takes place in the weighted summation network WSN.sub.1 '. Where the weighted summation procedures performed in the weighted summation network WSN.sub.1 ' generate partial weighted summation results, a weighted summation network WSN.sub.2 ' comprising the FIG. 11 apparatus is used to generate final weighted summation results for application to an array of non-linear processors NLP.sub.1, NLP.sub.2, . . . NLP.sub.N. The non-linear processors NLP.sub.1, NLP.sub.2, . . . NLP.sub.N generate respective digital axonal output responses for application to the succeeding circuitry NEXT.
The nature of the analog-to-digital converters RADC.sub.i has thus far not been considered in detail. Indeed, a wide variety of A-to-D converters can be used in implementing the invention. Some A-to-D converter structures are capable of accepting input signals in charge form, and in such a structure the charge sensing means and the analog-to-digital converting means specified in certain of the claims appended to this application will both be provided. The bit-slicing of input signals already slows data word throughput rate, so oversampling A-to-D converters, such as those of the sigma-delta type, will not be favored over successive-approximation A-to-D converters unless quite slow data processing rates are acceptable. Where fairly fast data processing rates are required, flash A-to-D converters will be preferred over successive-approximation A-to-D converters. The resistive ladder used to scale reference voltage in a flash A-to-D converter can be shared with other flash A-to-D converters within the same monolithic integrated circuit, economizing the amount of hardware associated with this approach to analog-to-digital conversion.
FIG. 13, comprising component FIGS. 13A and 13B, shows a representative modification that can be manifoldly made to a neural net as shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6. Each neural net layer comprises a capacitive network viewed as having a plurality M in number of columns of capacitive weights and a plurality N in number of rows of capacitive weights. The description that follows focuses on the intersection of an i.sup.th of these columns with a j.sup.th of these rows, where i is a column index from 1 to M and j is a row index from 1 to N.
A respective modification is made near each set of intersections of an output line OL.sub.j with input lines IL.sub.i and IL.sub.(M+i) driven by opposite senses of a synapse input signal x.sub.i. Such modifications together make the neural net capable of being trained. Each capacitor pair C.sub.i,j and C.sub.(M+i),j of the FIG. 2 or 3 portion of the neural net is to be provided by a pair of digital capacitors DC.sub.i,j and DC.sub.(M+i),j. Capacitors DC.sub.i,j and DC.sub.(M+i),j are preferably of the type described by W. E. Engeler in his U.S. Pat. No. 5,039,871, issued Aug. 13, 1991, entitled "CAPACITIVE STRUCTURES FOR WEIGHTED SUMMATION, AS USED IN NEURAL NETS". The capacitances of DC.sub.i,j and DC.sub.(M+i),j are controlled in complementary ways by a digital word and its one's complement, as drawn from a respective word-storage element WSE.sub.i,j in an array of such elements located interstitially among the rows of digital capacitors and connected to form a memory. This memory may, for example, be a random access memory (RAM) with each word-storage element WSE.sub.i,j being selectively addressable by row and column address lines controlled by address decoders. Or, by way of further example, this memory can be a plurality of static shift registers, one for each column j. Each static shift register will then have a respective stage WSE.sub.i,j for storing the word that controls the capacitances of each pair of digital capacitors DC.sub.i,j and DC.sub.(M+i),j.
The word stored in word storage element WSE.sub.i,j may also control the capacitances of a further pair of digital capacitors DC.sub.i,(N+j) and DC.sub.(M+i),(N+j), respectively. Capacitors DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) are also preferably of the type described by W. E. Engeler in his U.S. Pat. No. 5,039,871, issued Aug. 13, 1991, entitled "CAPACITIVE STRUCTURES FOR WEIGHTED SUMMATION, AS USED IN NEURAL NETS". The capacitors DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) connect between "ac ground" and input lines IL.sub.i and IL.sub.(M+i), respectively, and form parts of the loading capacitors CL.sub.i and CL.sub.(M+i), respectively. The capacitances of DC.sub.(M+i),(N+j) and DC.sub.i,j are similar to each other and changes in their respective values track each other. The capacitances of DC.sub.i,(N+j) and DC.sub.(M+i),j are similar to each other and changes in their respective values track each other. The four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) are connected in a bridge configuration having input terminals to which the input lines IL.sub.i and IL.sub.(M+i) respectively connect and having output terminals connecting to output line OL.sub.j and to ac ground respectively. This bridge configuration facilitates making computations associated with back-propagation programming by helping make the capacitance network bilateral insofar as voltage gain is concerned. Alternatively, where the computations for back-propagation programming are done by computers that do not involve the neural net in the computation procedures, the neural net need not include the digital capacitors DC.sub.i,(N+j) and DC.sub.(M+i),(N+j). These digital capacitors DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) are not needed either if very large loading capacitors are placed on the output lines OL.sub.j, but this alternative undesirably reduces sensitivity of the row charge-sensing amplifier RQS.sub.j.
A respective column driver CD.sub.i applies complementary bit-sliced digital signals to each pair of input lines IL.sub.i and IL.sub.(M+i). The column driver CD.sub.i comprises the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 2 or per FIG. 3. When the neural net modified per FIG. 13 is being operated normally, following programming, the .phi..sub.P signal applied to a mode control line MCL is a logic ZERO. This ZERO conditions a respective input line multiplexer ILM.sub.i to connect the non-inverting output port of each column driver CD.sub.i to input line IL.sub.i. The .phi..sub.P signal on mode control line MCL being a ZERO also conditions a respective input line multiplexer ILM.sub.(M+i) to connect the inverting output port of each column driver CD.sub.i to input line IL.sub.(M+i).
A ZERO on mode control line MCL also conditions each output line multiplexer OLM.sub.j of an N-numbered plurality thereof to select the output line OL.sub.j to the input port of a respective row charge-sensing amplifier RQS.sub.j that performs a charge-sensing operation for output line OL.sub.j. When .phi..sub.P signal on mode control line MCL is a ZERO, the input signal x.sub.i induces a total change in charge on the capacitors DC.sub.i,j and DC.sub.(M+i),j proportional to the difference in their respective capacitances. The resulting displacement current flow from the input port of the row charge-sensing amplifier RQS.sub.j requires that there be a corresponding displacement current flow from the Miller integrating capacitor CI.sub.j in the row charge-sensing amplifier RQS.sub.j charging that capacitor to place thereon a voltage v.sub.j defined as follows. ##EQU6##
The voltage v.sub.j is supplied to an analog-to-digital converter RADC.sub.j for the j.sup.th row of the capacitive network, which digitizes voltage v.sub.j to generate a digital signal y.sub.j for the j.sup.th row that is assumed to be a two's complement digital signal. The digitized v.sub.j signal is supplied to a selective complementor RCMP.sub.j for the j.sup.th row, which passes the digitized v.sub.j signal without change for each of the bit slices of the input signals x.sub.1, x.sub.2, . . . x.sub.M except the sign bit slices. For the sign bit slices the selective complementor RCMP.sub.j one's complements the bits of the digitized v.sub.j signal and adds unity thereto. The selective complementor RCMP.sub.j supplies its output signal to an accumulator RACC.sub.j for the j.sup.th row of the capacitive network. The accumulator RACC.sub.j accumulates the bit slices of each set of samples of the input signals x.sub.1, x.sub.2, . . . x.sub.M as weighted for the output line OL.sub.j, sensed and digitized; the accumulator result .SIGMA.y.sub.j is latched by a respective row latch RLCH.sub.j for the j.sup.th row; and the accumulator RACC.sub.j is then reset to zero. At the same time the accumulator RACC.sub.j is reset to zero, to implement dc-restoration a reset pulse .phi..sub.R is supplied to the charge sensing amplifier RQS.sub.j for the j.sup.th row and to the charge sensing amplifiers for the other rows. During the dc-restoration all x.sub.i are "zero-valued".
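Arithmetically, the accumulation just described forms a two's complement weighted sum from the digitized bit-slice results. The sketch below illustrates that arithmetic only, with the bit-serial shifting of the accumulator abstracted away; it assumes the least significant bit slice is processed first and that the final slice is the sign-bit slice.

  # Arithmetic sketch of forming the final weighted summation result from
  # the digitized partial results for successive bit slices of two's
  # complement input words.  partials[b] stands for the digitized partial
  # weighted sum for bit slice b; slice B-1 is the sign-bit slice, whose
  # contribution the selective complementor RCMP_j enters with reversed sign.
  def final_weighted_sum(partials):
      B = len(partials)
      total = 0
      for b, y in enumerate(partials):
          if b == B - 1:
              y = -y                # negate the sign-bit slice contribution
          total += y * (1 << b)     # weight slice b by 2**b
      return total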
During training, the .phi..sub.P signal applied to mode control line MCL is a logic ONE, which causes the output line multiplexer OLM.sub.j to disconnect the output line OL.sub.j from the charge sensing amplifier RQS.sub.j and to connect the output line OL.sub.j to receive a bit-sliced .delta..sub.j error term. This .delta..sub.j error term is generated by a row driver RD.sub.j responsive to the bit-serial digital product output signal of a digital multiplier BSM.sub.j, which multiplier is responsive to a signal .DELTA..sub.j and to the slope stored in the slope latch SLCH.sub.j. The term .DELTA..sub.j for the output neural net layer is the difference between the actual value of z.sub.j and its desired value d.sub.j. The term .DELTA..sub.j for a hidden neural net layer is the .DELTA..sub.j output of the succeeding neural net layer during the back-propagation procedure.
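As a numerical illustration only, the error term for an output-layer row can be sketched as follows; the function and variable names are hypothetical, and the multiplication stands for the bit-serial multiplier BSM.sub.j operating on .DELTA..sub.j and the latched slope.

  # Sketch of output-layer error term formation during training.  DELTA_j is
  # the difference between the actual response z_j and the desired response
  # d_j; delta_j is that difference scaled by the slope latched in SLCH_j
  # during the preceding forward pass.
  def output_layer_delta(z_j, d_j, latched_slope):
      DELTA_j = z_j - d_j               # output error for the j-th row
      return DELTA_j * latched_slope    # delta_j, driven onto output line OL_j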
The row driver RD.sub.j is shown as a type having differential outputs which can be provided by a pair of multiplexers similar to input multiplexers MX.sub.i and MX.sub.(M+i) connected as in FIG. 2 or as in FIG. 3. Since output of only one sense is required to drive the single-ended output line OL.sub.j, as used in the neural net layers shown in FIGS. 13 and 20, however, it is preferable from the standpoint of simplifying hardware to use row drivers RD.sub.j that consist of but a single multiplexer. Indeed, the row driver function may be subsumed in the bit-serial multiplier BSM.sub.j. In the neural net layers shown in FIGS. 14, 16, 21 and 23 the row driver RD.sub.j must be of a type having differential outputs, as can be provided by a pair of multiplexers similar to input multiplexers MX.sub.i and MX.sub.(M+i) connected as in FIG. 2 or as in FIG. 3.
During training, the .phi..sub.P signal applied to the mode control line MCL being a ONE also causes the input line multiplexers ILM.sub.i and ILM.sub.(M+i) to disconnect the input lines IL.sub.i and IL.sub.(M+i) from the column driver CD.sub.i output ports and connect them instead to the non-inverting and inverting input terminals of a column charge-sensing amplifier CQS.sub.i. Each successive partial summation of the weighted .delta..sub.1, .delta..sub.2, . . . .delta..sub.N signals induces a differential change in charge between input lines IL.sub.i and IL.sub.(M+i) proportional to (C.sub.i,j -C.sub.(M+i),j), which differential change in charge is sensed using the column charge sensing amplifier CQS.sub.i, then digitized in a respective column analog-to-digital converter CADC.sub.i for the i.sup.th column. The digital output signal from the column A-to-D converter CADC.sub.i is supplied to a selective complementor CCMP.sub.i for the i.sup.th column, which responds to a SIGN BIT FLAG to selectively one's complement the bits of that digital signal as applied to an accumulator CACC.sub.i for the i.sup.th column. The selective complementor CCMP.sub.i otherwise supplies the accumulator CACC.sub.i an input signal the bits of which correspond to the bits of the digital output signal from the column A-to-D converter CADC.sub.i. The accumulator CACC.sub.i accumulates the successive partial summations of the weighted .delta..sub.1, .delta..sub.2, . . . .delta..sub.N signals to generate the final weighted summation result .DELTA..sub.i supplied to a parallel-in/serial-out register CPISO.sub.i for the i.sup.th column. The register CPISO.sub.i supplies the final weighted summation result .DELTA..sub.i in bit-serial format to the bit-serial digital multiplier of the preceding neural net layer, if there is one.
Resetting of the column charge sensing amplifier CQS.sub.i is normally done shortly after a ZERO to ONE transition appears in the .phi..sub.P signal applied to mode control line MCL and may also be done at other times. This procedure corrects for capacitive unbalances on the input lines IL.sub.i and IL.sub.(M+i) during back-propagation computations that follow the resetting procedure.
FIG. 14, comprising component FIGS. 14A and 14B, shows a representative modification that can be manifoldly made to a neural net as shown in FIGS. 4, 5 and 6. The four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) are connected in a bridge configuration having input terminals connecting from the input line IL.sub.i and from a-c ground respectively and having output terminals connecting to output lines OL.sub.j and OL.sub.(N+j) respectively.
When the neural net layer per FIGS. 4, 5 and 6 is being operated normally, following programming, the .phi..sub.P signal applied to a mode control line MCL is a logic ZERO. This ZERO on mode control line MCL conditions each output line multiplexer OLM.sub.j of an N-numbered plurality thereof to select the output line OL.sub.j to the inverting input terminal of a fully differential amplifier in an associated differential charge sensing amplifier RQS.sub.j for the j.sup.th row. This ZERO on mode control line MCL also conditions each output line multiplexer OLM.sub.(N+j) to select the output line OL.sub.(N+j) to the non-inverting input terminal of the fully differential amplifier in the associated differential charge sensing amplifier RQS.sub.j. When .phi..sub.P signal on mode control line MCL is a ZERO, the input signal x.sub.i induces a total differential change in charge on the capacitors DC.sub.i,j and DC.sub.i,(N+j) proportional to the difference in their respective capacitances. The resulting displacement current flows needed to keep the input terminals of differential amplifier DA.sub.j substantially equal in potential require that there be corresponding displacement current flows from the integrating capacitors CI.sub.j and CI.sub.(N+j), differentially charging those capacitors to place thereacross a differential voltage v.sub.j defined as follows. ##EQU7##
The A-to-D converter RADC.sub.j for the j.sup.th row is of a type capable of digitizing the balanced output signal from the differential charge sensing amplifier RQS.sub.j. The selective complementor RCMP.sub.j, the accumulator RACC.sub.j, the row latch RLCH.sub.j for each j.sup.th row, the digital adder ADD.sub.j, the digital non-linear processor NLP.sub.j, the slope latch SLCH.sub.j, and the digital multiplier BSM.sub.j correspond with those elements as described in connection with FIG. 13. The bit-serial product signal from the digital multiplier BSM.sub.j is supplied to the row driver RD.sub.j, which can take either of the forms the column driver CD.sub.i can take in FIG. 13. During training, the .phi..sub.P signal applied to the mode control line MCL being a ONE causes the output line multiplexers OLM.sub.j and OLM.sub.(N+j) to connect the output lines OL.sub.j and OL.sub.(N+j) for receiving from the row driver RD.sub.j complementary digital responses to the bit-serial product from the digital multiplier BSM.sub.j. The ONE on the mode control line MCL also causes each input line multiplexer ILM.sub.i to disconnect each input line IL.sub.i from the output port of the logic inverter INV.sub.i and connect it instead to supply charge on a single-ended basis to the input port of a column charge-sensing amplifier CQS.sub.i.
FIG. 15, comprising FIGS. 15A and 15B, shows a representative modification that can be made manifold times to the neural net shown in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12, for the programmable weighting of the capacitances used in performing weighted summations of synapse signals. In this type of neural net the signs of the digital signals being weighted and summed are taken into account in the partial weighted summation procedure performed on an analog basis, rather than being taken into account in the final weighted summation procedure performed on a digital basis as is the case in the type of neural net shown in FIG. 13. Accordingly, respective straight-through digital bussing replaces the selective complementor RCMP.sub.j for each j.sup.th row and the selective complementor CCMP.sub.i for each i.sup.th column. Accordingly, also, each column driver CD.sub.i comprises the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 9 or per FIG. 10, rather than the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 2 or per FIG. 3.
The row driver RD.sub.j ' is shown as a type having differential outputs which can be provided by a pair of multiplexers similar to input multiplexers MX.sub.i ' and MX.sub.(M+i) ' connected as in FIG. 9 or as in FIG. 10. Since output of only one sense is required to drive the single-ended output line OL.sub.j, as used in the neural net layers shown in FIGS. 15 and 22, however, it is preferable from the standpoint of simplifying hardware to use row drivers RD.sub.j ' that consist of but a single multiplexer. In the neural net layers shown in FIGS. 17 and 24 the row driver RD.sub.j ' must be of a type having differential outputs, as can be provided by a pair of multiplexers similar to input multiplexers MX.sub.i ' and MX.sub.(M+i) ' connected as in FIG. 9 or as in FIG. 10. The row driver RD.sub.j ' provides in the neural net layers shown in FIGS. 15, 17, 22 and 24 for taking care of the sign bit in the digital product .delta..sub.j in the partial weighted summation procedure, so it need not be taken care of in the final weighted summation procedure. Accordingly, the selective complementors CCMP.sub.i for each i.sup.th column are replaced by respective straight-through bussing.
In a variant of the FIG. 14 neural net layer not shown in the drawing, each row driver RD.sub.j is also replaced by a row driver RD.sub.j ' that takes either of the forms the column driver CD.sub.i can take in FIG. 15. In this variant, too, the row driver RD.sub.j ' provides for taking care of the sign bit in the digital product .delta..sub.j in the partial weighted summation procedure, so it need not be taken care of in the final weighted summation procedure. Accordingly, the selective complementors CCMP.sub.i for each i.sup.th column are replaced by respective straight-through bussing.
FIG. 16, comprising FIGS. 16A and 16B, shows a neural net layer in which both forward propagation and back propagation through a capacitive network are carried out with balanced signals, in which digital capacitors have their capacitance values programmed from respective word storage locations within a digital memory, and in which the signs of the digital signals being weighted and summed are taken into account in the final weighted summation procedure performed on a digital basis. The FIG. 16 neural net layer shares features with both the FIG. 13 and the FIG. 14 neural net layers. A respective set of four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) is used at each intersection of an i.sup.th column with a j.sup.th row of the array of sets of digital capacitors, for weighting of forward propagated input signals during periods of normal operation, and for weighting of back propagated error signals during training periods. Each set of four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) has its capacitance values programmed by a word stored in a word storage element WSE.sub.i,j of an interstitial memory array IMA. Each set of four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) is connected in a bridge configuration having input terminals respectively connecting from paired input lines IL.sub.i and IL.sub.(M+i) as in FIG. 13A and having output terminals respectively connecting to output lines OL.sub.j and OL.sub.(N+j) as in FIG. 14A. During periods of normal operation, a respective column driver CD.sub.i that comprises the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 2 or per FIG. 3 applies complementary bit-sliced digital input signals to each pair of input lines IL.sub.i and IL.sub.(M+i), as in FIG. 13A; and a charge sensing amplifier RQS.sub.j for each j.sup.th row of the array of sets of digital capacitors is connected to differentially sense the charges on paired output lines OL.sub.j and OL.sub.(N+j) as in FIG. 14B. During training periods a respective row driver RD.sub.j constructed similarly to each column driver CD.sub.i applies to each pair of output lines OL.sub.j and OL.sub.(N+j) complementary bit-serial responses to the digital product signal from a bit-serial digital multiplier BSM.sub.j, as in FIG. 14B; and a charge sensing amplifier CQS.sub.i for each i.sup.th column of the array of sets of digital capacitors is connected to differentially sense the charges on paired input lines IL.sub.i and IL.sub.(M+i), as in FIG. 13A.
FIG. 17, comprising FIGS. 17A and 17B, shows another neural net layer in which both forward propagation and back propagation through a capacitive network are carried out with balanced signals, and in which digital capacitors have their capacitance values programmed from respective word storage locations within a digital memory. In the FIG. 17 neural net layer the signs of the digital signals being weighted and summed are taken into account in the partial weighted summation procedure performed on an analog basis, rather than being taken into account in the final weighted summation procedure performed on a digital basis. The FIG. 17 neural net layer shares features with both the FIG. 15 and the FIG. 14 neural net layers. A respective set of four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) is used at each intersection of an i.sup.th column with a j.sup.th row of the array of sets of digital capacitors, for weighting of forward propagated input signals during periods of normal operation, and for weighting of back propagated error signals during training periods. Each set of four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) has its capacitance values programmed by a word stored in a word storage element WSE.sub.i,j of an interstitial memory array IMA. Each set of four digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j) is connected in a bridge configuration having input terminals respectively connecting from paired input lines IL.sub.i and IL.sub.(M+i) as in FIG. 15A and having output terminals respectively connecting to output lines OL.sub.j and OL.sub.(N+j) as in FIG. 14A. During periods of normal operation, a respective column driver CD.sub.i that comprises the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 9 or per FIG. 10 applies complementary bit-sliced digital input signals to each pair of input lines IL.sub.i and IL.sub.(M+i), as in FIG. 15A; and a charge sensing amplifier RQS.sub.j for each j.sup.th row of the array of sets of digital capacitors is connected to differentially sense the charges on paired output lines OL.sub.j and OL.sub.(N+j) as in FIG. 14B. During training periods a respective row driver RD.sub.j constructed similarly to each column driver CD.sub.i applies to each pair of output lines OL.sub.j and OL.sub.(N+j) complementary bit-serial responses to the digital product signal from a bit-serial digital multiplier BSM.sub.j, as in FIG. 14B; and a charge sensing amplifier CQS.sub.i for each i.sup.th column of the array of sets of digital capacitors is connected to differentially sense the charges on paired input lines IL.sub.i and IL.sub.(M+i), as in FIG. 15A.
FIG. 18 shows apparatuses for completing the back-propagation computations, as may be used with a neural net having layers as shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6 modified manifoldly per FIG. 13; or having layers as shown in FIGS. 4, 5 and 6 modified manifoldly per FIG. 14; or having layers as shown in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12 modified manifoldly per FIG. 15; or having layers as shown in FIG. 16; or having layers as shown in FIG. 17. The weights at each word storage element WSE.sub.i,j in the interstitial memory array IMA are to be adjusted as the i column addresses and j row addresses are scanned row by row, one column at a time. An address scanning generator ASG generates this scan of i and j addresses shown applied to interstitial memory array IMA, assuming it to be a random access memory. The row address j is applied to a row multiplexer RM that selects bit-serial .delta..sub.j to one input of a bit-serial multiplier BSM.sub.0, and the column address i is applied to a column multiplexer CM that selects bit-serial x.sub.i to another input of the multiplier BSM.sub.0.
Multiplier BSM.sub.0 generates in bit-serial form the product x.sub.i .delta..sub.j as reduced by a scaling factor .eta., which is the increment or decrement to the weight stored in the currently addressed word storage element WSE.sub.i,j in the memory array IMA. A serial-in/parallel-out register SIPO is used to convert the .eta.x.sub.i .delta..sub.j signal from bit-serial to the parallel-bit form of the signals stored in the memory array IMA. The former value of weight stored in word storage element WSE.sub.i,j is read from memory array IMA to a temporary storage element, or latch, TS. This former weight value is supplied as minuend to a digital subtractor SUB, which receives as subtrahend .eta.x.sub.i .delta..sub.j from the serial-in/parallel-out register SIPO. The resulting difference is the updated weight value which is written into word storage element WSE.sub.i,j in memory array IMA to replace the former weight value.
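The read-modify-write cycle just described amounts to a conventional gradient-descent weight update. The sketch below illustrates that arithmetic only; the learning-rate value and the container used for the memory array are assumptions made for the example.

  # Sketch of one read-modify-write weight update on word storage element
  # WSE_i,j, as performed by multiplier BSM_0, register SIPO, latch TS and
  # subtractor SUB.  ETA stands for the scaling factor eta; its value here
  # is assumed.
  ETA = 0.01

  def update_weight(weights, i, j, x_i, delta_j, eta=ETA):
      old = weights[i][j]                        # read former weight (latch TS)
      weights[i][j] = old - eta * x_i * delta_j  # subtract eta*x_i*delta_j, write back
      return weights[i][j]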
FIG. 19 shows how trained neural net layers L.sub.0, L.sub.1 and L.sub.2 are connected together in a system that can be trained. Each of the neural net layers is similar to neural net layers as shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6 modified manifoldly per FIG. 13; or in FIGS. 4, 5 and 6 modified manifoldly per FIG. 14; or in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12 modified manifoldly per FIG. 15; or in FIG. 16; or in FIG. 17. Each of the neural net layers has a respective back-propagation processor per FIG. 18 associated therewith.
L.sub.0 is the output neural net layer that generates z.sub.j output signals and is provided with a back-propagation processor BPP.sub.0 with elements similar to those shown in FIG. 18 for updating the weights stored in the interstitial memory array of L.sub.0. L.sub.1 is the first hidden neural net layer, which generates z.sub.i output signals supplied to the output neural net layer as its x.sub.i input signals. These z.sub.i output signals are generated by layer L.sub.1 as its non-linear response to the weighted sum of its x.sub.h input signals. This first hidden neural net layer L.sub.1 is provided with a back-propagation processor BPP.sub.1 similar to BPP.sub.0. L.sub.2 is the second hidden neural net layer, which generates z.sub.h output signals supplied to the first hidden neural net layer as its x.sub.h input signals. These z.sub.h output signals are generated by layer L.sub.2 as its non-linear response to a weighted summation of its x.sub.g input signals. This second hidden neural net layer L.sub.2 is provided with a back-propagation processor BPP.sub.2 similar to BPP.sub.0 and to BPP.sub.1.
FIG. 19 presumes that the respective interstitial memory array IMA of each neural net layer L.sub.0, L.sub.1, L.sub.2 has a combined read/write bus instead of separate read input and write output busses as shown in FIG. 18. FIG. 19 shows the .DELTA..sub.j, .DELTA..sub.i and .DELTA..sub.h signals being fed back over paths separate from the feed forward paths for z.sub.j, z.sub.i and z.sub.h signals, which separate paths are shown to simplify conceptualization of the neural net by the reader. In actuality, as previously described, a single path is preferably used to transmit z.sub.j in the forward direction and .DELTA..sub.j in the reverse direction, etc. Back-propagation processor BPP.sub.0 modifies the weights read from word storage elements in the neural net layer L.sub.0 interstitial memory array by .eta. x.sub.i .delta..sub.j amounts and writes them back to the word storage elements in a sequence of read-modify-write cycles during the training procedure. Back-propagation processor BPP.sub.1 modifies the weights read from word storage elements in the neural net layer L.sub.1 interstitial memory array by .eta. x.sub.h .delta..sub.i amounts and writes them back to the word storage elements in a sequence of read-modify-write cycles during the training procedure. Back-propagation processor BPP.sub.2 modifies the weights read from word storage elements in the neural net layer L.sub.2 interstitial memory array by .eta. x.sub.g .delta..sub.h amounts and writes them back to the word storage elements in a sequence of read-modify-write cycles during the training procedure.
FIG. 20, comprising component FIGS. 20A and 20B shows an alternative modification that can be manifoldly made to a neural net as shown in FIGS. 2, 5 and 6 or in FIGS. 3, 5 and 6 to give it training capability. This alternative modification seeks to avoid the need for a high-resolution bit-serial multiplier BSM.sub.0 and complex addressing during back-propagation calculations in order that training can be implemented. A respective up/down counter UDC.sub.i,j is used instead of each word storage element WSE.sub.i,j. Correction of the word stored in counter UDC.sub.i,j is done a count at a time; and the counter preferably has at least one higher resolution stage in addition to those used to control the capacitances of digital capacitors DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j). Each up/down counter UDC.sub.i,j has a respective counter control circuit CON.sub.i,j associated therewith. Each counter control circuit CON.sub.i,j may, as shown in FIG. 20A and described in detail further on in this specification, simply consist of an exclusive-OR gate XOR.sub.i,j.
Responsive to a SIGN BIT FLAG, a row sign bit latch RBL.sub.j latches the sign bit of .delta..sub.j, indicative of whether a row of weights should in general be decremented or incremented, to be applied via a respective row sign line RSL.sub.j to all counter control circuits (CON.sub.i,j for i=1, . . . M) in the j.sup.th row associated with that row sign bit latch RBL.sub.j. Before making a back-propagation calculation, a respective column sign bit latch CBL.sub.i latches the sign bit of x.sub.i for each i.sup.th columnar position along the row which is to be updated, to provide an indication of whether it is likely the associated weight should be decremented or incremented. Each column sign bit latch CBL.sub.i is connected to broadcast its estimate via a respective column sign line CSL.sub.i to all counter control circuits (CON.sub.i,j for j=1, . . . N) in the i.sup.th column associated with that column sign bit latch CBL.sub.i. Responsive to these indications from sign bit latches CBL.sub.i and RBL.sub.j, each respective counter control circuit CON.sub.i,j decides in which direction up/down counter UDC.sub.i,j will count to adjust the weight control signal D.sub.i,j and its one's complement stored therein.
The counter control circuitry CON.sub.i,j should respond to the sign of +.delta..sub.j being positive, indicating the response v.sub.j to be too positive, to decrease the capacitance to output line OL.sub.j that is associated with the signal x.sub.i or -x.sub.i that is positive and to increase the capacitance to output line OL.sub.j that is associated with the signal -x.sub.i or x.sub.i that is negative, for each value of i. The counter control circuitry CON.sub.i,j should respond to the sign of +.delta..sub.j being negative, indicating the response v.sub.j to be too negative, to increase the capacitance to output line OL.sub.j that is associated with the signal -x.sub.i or x.sub.i that is negative and to decrease the capacitance to output line OL.sub.j that is associated with the signal x.sub.i or -x.sub.i that is positive. Accordingly, counter control circuitry CON.sub.i,j may simply consist of a respective exclusive-OR gate XOR.sub.i,j as shown in FIG. 20A, if the following presumptions are valid.
Each of the digital capacitors DC.sub.i,j and DC.sub.(M+i),(N+j) is presumed to increase or decrease its capacitance as D.sub.i,j is increased or decreased respectively. Each of the digital capacitors DC.sub.(M+i),j and DC.sub.i,(N+j) is presumed to increase or decrease its capacitance as the one's complement of D.sub.i,j is increased or decreased respectively. A ZERO applied as up/down signal to up/down counter UDC.sub.i,j is presumed to cause counting down for D.sub.i,j and counting up for its one's complement. A ONE applied as up/down signal to up/down counter UDC.sub.i,j is presumed to cause counting up for D.sub.i,j and counting down for its one's complement. Column sign detector CSD.sub.i output indication is presumed to be a ZERO when x.sub.i is not negative and to be a ONE when x.sub.i is negative. Row sign detector RSD.sub.j output indication is presumed to be a ZERO when .delta..sub.j is not negative and to be a ONE when .delta..sub.j is negative. Since the condition where x.sub.i or .delta..sub.j is zero-valued is treated as if the zero-valued number were positive, forcing a false correction which is in fact not necessary, and thus usually creating the need for a counter correction in the next cycle of back-propagation training, there is dither in the correction loops. However, the extra stage or stages of resolution in each up/down counter UDC.sub.i,j prevent high-resolution dither in the feedback correction loop affecting the capacitances of DC.sub.i,j, DC.sub.(M+i),j, DC.sub.i,(N+j) and DC.sub.(M+i),(N+j).
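Under those presumptions the control logic reduces to a single gate per weight. The following sketch merely restates the exclusive-OR decision in software terms; the sign conventions are those listed above, with zero treated as positive.

  # Sketch of the counter control circuit CON_i,j of FIG. 20A.  A sign bit
  # here is 1 for a negative value and 0 otherwise (zero treated as
  # positive, which is the source of the dither noted above).
  def sign_bit(value):
      return 1 if value < 0 else 0

  def up_down_command(x_i, delta_j):
      """Exclusive-OR of the column and row sign bits: a ZERO commands the
      counter to decrease D_i,j, a ONE commands it to increase D_i,j."""
      return sign_bit(x_i) ^ sign_bit(delta_j)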
FIG. 21, comprising component FIGS. 21A and 21B shows an alternative modification that can be manifoldly made to a neural net as shown in FIGS. 4, 5 and 6 to give it training capability. A respective up/down counter UDC.sub.i,j is used instead of each word storage element WSE.sub.i,j in this FIG. 21 alternative modification; and FIG. 21 differs from FIG. 14 in substantially the same respects that FIG. 20 differs from FIG. 13.
FIG. 22, comprising component FIGS. 22A and 22B, shows an alternative modification that can be manifoldly made to a neural net as shown in FIGS. 9, 11 and 12 or in FIGS. 10, 11 and 12 to give it training capability. A respective up/down counter UDC.sub.i,j is used instead of each word storage element WSE.sub.i,j in this FIG. 22 alternative modification; and FIG. 22 differs from FIG. 15 in substantially the same respects that FIG. 20 differs from FIG. 13. In FIG. 22 as in FIG. 15 the signs of the digital signals being weighted and summed are taken into account in the partial weighted summation procedure performed on an analog basis, rather than being taken into account in the final weighted summation procedure performed on a digital basis as is the case in the types of neural nets shown in FIGS. 13 and 20. Accordingly, a neural net manifoldly using the FIG. 22 modification differs from a neural net manifoldly using the FIG. 20 modification in that respective straight-through digital bussing replaces the selective complementor RCMP.sub.j for each j.sup.th row and the selective complementor CCMP.sub.i for each i.sup.th column. Accordingly, also, a neural net manifoldly using the FIG. 22 modification differs from a neural net manifoldly using the FIG. 20 modification in that each column driver CD.sub.i comprises the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 9 or per FIG. 10, rather than the multiplexers MX.sub.i and MX.sub.(M+i) connected per FIG. 2 or per FIG. 3. Except for these differences, a neural net manifoldly using the FIG. 22 modification resembles, in both structure and operation, a neural net manifoldly using the FIG. 20 modification.
In a variant of the FIG. 21 neural net layer not shown in the drawing, the row driver RD.sub.j takes either of the forms the column driver CD.sub.i can take in FIG. 22. This provides for taking care of the sign bit in the digital product .delta..sub.j in the partial weighted summation procedure, so it need not be taken care of in the final weighted summation procedure. Accordingly, the selective complementors CCMP.sub.i for each i.sup.th column are replaced by respective straight-through bussing. The extraction of the sign bit of the digital product .delta..sub.j for application to the row sign line RSL.sub.j is done before the row driver RD.sub.j, of course.
FIG. 23, comprising component FIGS. 23A and 23B shows an alternative modification that can be manifoldly made to a neural net using layers as shown in FIG. 16 to give it training capability. A respective up/down counter UDC.sub.i,j is used instead of each word storage element WSE.sub.i,j in this FIG. 23 alternative modification; and FIG. 23 differs from FIG. 16 in similar respects as FIG. 20 differs from FIG. 13 and as FIG. 21 differs from FIG. 14.
FIG. 24, comprising component FIGS. 24A and 24B shows an alternative modification that can be manifoldly made to a neural net using layers as shown in FIG. 17 to give it training capability. A respective up/down counter UDC.sub.i,j is used instead of each word storage element WSE.sub.i,j in this FIG. 24 alternative modification; and FIG. 24 differs from FIG. 17 in similar respects as FIG. 21 differs from FIG. 14 and as FIG. 22 differs from FIG. 15. Hybrids of the FIGS. 23 and 24 neural net layers are possible, handling the sign bits of the digital input signals as in either one of the FIGS. 23 and 24 neural net layers and handling the sign bits of the digital error signals as in the other of the FIGS. 23 and 24 neural net layers.
FIG. 25 shows the construction of counter UDC.sub.i,j being one that has a plurality of binary counter stages BCS.sub.1, BCS.sub.2, BCS.sub.3 that provide increasingly more significant bits of the weight control signal D.sub.i,j and of its one's complement. FIG. 26 shows the logic within each binary counter stage, which is implemented with MOS circuitry that is conventional in the art. FIGS. 25 and 26 make it clear that the opposite directions of counting for D.sub.i,j and its one's complement can be controlled responsive to a ZERO or ONE up/down control signal in either of two ways, depending on whether D.sub.i,j is taken from the Q outputs of the flip-flops and its one's complement from their complementary outputs, as shown, or vice versa. If the latter choice had been made instead, each counter control circuit CON.sub.i,j would have to consist of a respective exclusive-NOR circuit, or alternatively the CSD.sub.i and RSD.sub.j sign detectors would have to be of opposite logic types, rather than of the same logic type.
While bit-slicing the plural-bit digital synapse samples and processing the bit slices through the same capacitive weighting network is advantageous in that it provides a good guarantee that the partial weighted summation results are scaled in exact powers of two respective to each other, speedier processing may be required, which necessitates that the bit-slicing be done on a spatial-division-multiplexing, rather than time-division-multiplexing, basis. Each bit slice is then processed to generate a respective partial weighted summation result in a respective capacitive weighting network similar to the capacitive weighting networks respectively used to generate respective partial weighted summation results for the other bit slices, and a network of digital adders is used instead of an accumulator for generating each final weighted summation result by combining appropriately shifted partial weighted summation results.
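A network of digital adders combining shifted partial results can be organized as a binary tree. The sketch below is only an illustration of that combining arithmetic; it assumes the number of bit slices is a power of two and that the sign-bit slice has already been entered with reversed sign, as discussed earlier.

  # Sketch of an adder-tree combination of partial weighted summation
  # results produced concurrently by per-bit-slice capacitive networks.
  # partials[b] is the digitized result for bit slice b, least significant
  # slice first; the sign-bit slice is assumed already negated.
  def adder_tree_combine(partials):
      vals, shift = list(partials), 1
      while len(vals) > 1:
          # each adder combines a lower-significance result with a
          # higher-significance result wired-shifted left by 'shift' places
          vals = [lo + (hi << shift) for lo, hi in zip(vals[0::2], vals[1::2])]
          shift *= 2
      return vals[0]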
One skilled in the art of digital design and acquainted with this and other disclosures by the inventor as catalogued in the "Background of the Invention" portion of this specification will be enabled to design readily a number of other variants of the preferred neural net layers shown in the drawing, and this should be borne in mind in construing the scope of the claims appended to this specification, which claims are intended to include all such obvious variants of the preferred neural net layers shown in the drawing.
Claims
  • 1. A system for performing N weighted summations of digital input signals that are M in number and are identified by respective ones of consecutive ordinal numbers first through Mth, N being an integer greater than zero and M being an integer greater than one, said system comprising:
  • means for supplying said first through Mth digital input signals in bit-serial form, with B bits per arithmetic word, B being an integer more than one, each bit being expressed in terms of respective voltage levels for its being a ZERO or for its being a ONE;
  • means for N-fold weighting of each successive bit of said first through Mth digital input signals to generate respective first through N.sup.th summands, each manifested in terms of at least one respective quantity of electric charge, which means is of a type including a network of capacitive elements and means for connecting them so their capacitances determine the weightings respectively applied to voltage levels expressing current bits of said first through Mth digital input signals when generating said first through N.sup.th summands in accordance with Coulomb's Law;
  • means for combining concurrently generated summands identified by the same ordinal numbers to generate first through N.sup.th partial weighted summations, each manifested in terms of at least one respective quantity of electric charge;
  • means for sensing each of said first through N.sup.th partial summations, as manifested in terms of at least one respective quantity of electric charge, to generate first through N.sup.th analog signals respectively;
  • means for digitizing said first through N.sup.th analog signals to generate first through N.sup.th digital partial weighted summations respectively; and
  • means for digitally accumulating said first through N.sup.th digital partial weighted summations with appropriate respective weighting to generate first through N.sup.th digital final weighted summations.
  • 2. A neural net layer comprising a system as set forth in claim 1 for performing N weighted summations of digital input signals that are M in number, in combination with:
  • means for digitally non-linearly processing each of said first through N.sup.th digital final weighted summations with a sigmoidal system function to generate a respective digital axonal response.
  • 3. A neural net layer as set forth in claim 2 wherein said means for digitally non-linearly processing each of said first through N.sup.th digital final weighted summations with a sigmoidal system function to generate a respective digital axonal response consists of, for each j.sup.th one of said first through N.sup.th digital final weighted summations, a respective j.sup.th digital non-linear processor that comprises:
  • means for determining the absolute value of said j.sup.th weighted summation result in digital form;
  • a j.sup.th window comparator for determining into which of a plurality of amplitude ranges falls said absolute value of said j.sup.th weighted summation result in digital form;
  • means for selecting a j.sup.th digital intercept value in accordance with the range into which said absolute value of said j.sup.th weighted summation result in digital form is determined by said j.sup.th window comparator to fall;
  • means for selecting a j.sup.th digital slope value in accordance with the range into which said absolute value of said j.sup.th weighted summation result in digital form is determined by said j.sup.th window comparator to fall;
  • means for multiplying said absolute value of said j.sup.th weighted summation result in digital form by said selected digital slope value to generate a j.sup.th digital product;
  • means for adding said j.sup.th digital product and said j.sup.th digital intercept value to generate an absolute value representation of said j.sup.th digital axonal response; and
  • means for determining the polarity of said j.sup.th weighted summation result in digital form and assigning the same polarity to said j.sup.th absolute value representation of said digital axonal response, thereby to generate said j.sup.th digital axonal response.
  • 4. A system as set forth in claim 1 wherein said capacitive elements in said means for N-fold weighting comprise a plurality M.times.N in number of sets of digital capacitors, each of said sets of digital capacitors corresponding to a different combination of one of said first through Mth digital input signals and of one of said first through N.sup.th summands; wherein a memory is included in said system, said memory having a plurality M.times.N in number of word storage elements for temporarily storing respective binary codes written thereinto, at least certain ones of which binary codes are susceptible of being re-written from time to time; and wherein the digital capacitors in each said set of digital capacitors have their values programmed by a binary code stored in a respective word storage element of said memory.
  • 5. A system as set forth in claim 1 wherein said capacitive elements in said means for N-fold weighting comprise a plurality of sets of digital capacitors considered as being arrayed in M columns and in N rows, each of said sets of digital capacitors corresponding to the one of said first through Mth digital input signals identified by the same ordinal number as the column it is arrayed in, and each of said sets of digital capacitors corresponding to the one of said first through N.sup.th summands identified by the same ordinal number as the row it is arrayed in; wherein a plurality M.times.N in number of up-down binary counters is included in said system, those said up-down binary counters considered as being arrayed in M columns and in N rows; wherein the digital capacitors in each said set of digital capacitors have their values programmed by the count contained in the one of said up-down binary counters considered as being in the same column and row as that particular said set of digital capacitors; wherein each said up-down binary counter is selectively supplied with pulses to be counted; and wherein each said up-down binary counter is provided with respective means for supplying a respective up/down command to control the direction of its counting.
  • 6. A system as set forth in claim 1 wherein said means for combining concurrently generated summands identified by the same ordinal numbers to generate first through N.sup.th partial weighted summations comprise
  • N output lines, identified by respective ones of consecutive ordinal numbers first through Nth and arranged for receiving respective quantities of electric charge as manifestations of said first through N.sup.th summands respectively; and wherein said means for N-fold weighting of each successive bit of said first through Mth digital input signals to generate respective first through N.sup.th summands comprises:
  • M pairs of input lines, said input lines identified by respective ones of consecutive ordinal numbers first through 2Mth, each pair including input lines identified by ordinal numbers M apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the input lines in said pair;
  • a respective column driver corresponding to each said pair of input lines, said respective column driver for each pair of input lines being identified by the same ordinal number as said pair of input lines with which it corresponds and responding to each successive bit of said first through Mth digital input signals for applying respective drive voltages on the input line identified by the corresponding ordinal number and on the input line identified by the ordinal number M higher; and
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2Mth input lines and having its second plate connected to one of said first through Nth output lines thereby to provide one of said capacitive elements.
  • 7. A system as set forth in claim 6 wherein said means for digitally accumulating said first through N.sup.th digital partial weighted summations to generate first through N.sup.th digital final weighted summations is connected to accumulate during each sign bit slice once in opposite sense to respective single accumulations during all other bit slices.
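Claim 7's opposite-sense accumulation of the sign-bit slice is simply the arithmetic of two's-complement numbers, in which the most significant bit carries a negative weight. The sketch below reconstructs a full signed weighted sum from per-slice partial sums; the word length and the least-significant-bit-first ordering are assumptions made for the example.

```python
# Sketch: accumulating digitized partial weighted summations of the bit slices
# of two's-complement input words. Each bit slice k contributes its partial
# sum scaled by 2**k; the sign-bit slice is accumulated once in the opposite
# sense (subtracted), which is what gives two's-complement numbers their
# negative weight on the most significant bit.

def weighted_sum_by_bit_slices(words, weights, bits):
    """words: signed integers representable in `bits` bits (two's complement).
    weights: one weight per word. Returns sum(w*x) computed slice by slice."""
    total = 0
    for k in range(bits):
        # partial weighted summation of bit slice k (what the capacitive
        # network plus A/D converter would deliver for this slice)
        partial = sum(w * ((x >> k) & 1) for w, x in zip(weights, words))
        if k == bits - 1:                 # sign-bit slice: opposite sense
            total -= partial << k
        else:                             # all other slices: normal sense
            total += partial << k
    return total

# Example with 4-bit two's-complement words:
assert weighted_sum_by_bit_slices([3, -5, 7], [2, 1, -1], bits=4) == 2*3 + 1*(-5) + (-1)*7
```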
  • 8. A system as set forth in claim 6 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE, for placing a first prescribed voltage level on said i.sup.th input line and placing a second prescribed voltage level on said (M+i).sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO, for placing said second prescribed voltage level on said i.sup.th input line and placing said first prescribed voltage level on said (M+i).sup.th input line.
  • 9. A system as set forth in claim 6 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE, for placing a first prescribed voltage level on the i.sup.th input line and placing a second prescribed voltage level on said (M+i).sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO, for placing said second prescribed voltage level on the i.sup.th input line and placing said first prescribed voltage level on said (M+i).sup.th input line.
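Claims 6, 8 and 9 drive each input bit onto a pair of input lines with complementary voltage levels, so that a pair of positive-valued capacitors can realize a signed weight. The idealized model below (assumed voltage levels, output line treated as a virtual ground, switching and zeroing phases omitted) shows why the effective weight is proportional to the difference of the two capacitances.

```python
# Idealized sketch of the paired-input-line drive of claims 8 and 9. For each
# input i, c_plus[i] couples input line i to the output line and c_minus[i]
# couples input line M+i to the same output line; the voltage levels and the
# virtual-ground assumption are illustrative simplifications.

def output_charge(bits, c_plus, c_minus, v_high=5.0, v_low=0.0):
    """Charge coupled onto one output line held at virtual ground.
    bits[i] == 1 drives input line i to v_high and line M+i to v_low;
    bits[i] == 0 interchanges the two drive levels."""
    q = 0.0
    for b, cp, cm in zip(bits, c_plus, c_minus):
        v_i, v_mi = (v_high, v_low) if b else (v_low, v_high)
        q += cp * v_i + cm * v_mi      # Q = C * V for each weighting capacitor
    return q

# Flipping bit i changes the output charge by (c_plus[i] - c_minus[i]) * (v_high - v_low),
# so the capacitor pair behaves as a single signed weight of that magnitude.
```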
  • 10. A system as set forth in claim 1 wherein said means for combining concurrently generated summands identified by the same ordinal numbers to generate first through N.sup.th partial weighted summations comprise
  • N pairs of output lines, said output lines identified by respective ones of consecutive ordinal numbers first through 2N.sup.th, each pair of output lines including output lines identified by ordinal numbers N apart, each pair of output lines being identified by the same ordinal number as the lower of the ordinal numbers identifying the output lines in said pair, and each pair of output lines arranged for receiving respective quantities of electric charge the difference between which is a respective one of the manifestations of said first through N.sup.th summands; and wherein said means for N-fold weighting of each successive bit of said first through Mth digital input signals to generate respective first through N.sup.th summands comprises:
  • M input lines, said input lines identified by respective ones of consecutive ordinal numbers first through M.sup.th ;
  • respective means respectively responding to each successive bit of said first through M.sup.th single-bit digital input signals that is a ZERO, for placing a first prescribed voltage level on the input line identified by the corresponding ordinal number;
  • respective means respectively responding to each successive bit of said first through M.sup.th single-bit digital input signals that is a ONE, for placing a second voltage level on the input line identified by the corresponding ordinal number; and
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2N.sup.th output lines and having its second plate connected to one of said first through M.sup.th input lines thereby to provide one of said capacitive elements; and wherein said means for sensing each of said first through N.sup.th partial summations, as manifested in terms of at least one respective quantity of electric charge, to generate first through N.sup.th analog signals respectively, comprises
  • a plurality N in number of differential charge sensing amplifiers identified by respective ones of consecutive ordinal numbers first through N.sup.th, each of said first through N.sup.th charge sensing means connected for sensing the differential charge between said output line identified by the same ordinal number it is and said output line identified by the ordinal number N higher, each of said first through N.sup.th charge sensing means responding to generate a respective result of performing N weighted summations of said first through M.sup.th single-bit digital input signals.
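Claim 10 swaps the roles of the two line sets: the input lines are single-ended, and the signed weight is realized by which output line of a pair a capacitor connects to, with a differential charge-sensing amplifier taking the difference. The toy model below uses assumed drive levels and ignores the zeroing phase.

```python
# Sketch of the paired-output-line arrangement of claim 10: input line i swings
# between two assumed levels according to its bit, and each weighting capacitor
# couples that swing onto either output line j (positive contribution) or
# output line j+N (negative contribution). The differential charge between the
# two lines is the signed partial weighted summation.

V_ZERO, V_ONE = 0.0, 5.0     # assumed drive levels for a ZERO and a ONE bit

def differential_charge(bits, c_to_plus_line, c_to_minus_line):
    """bits: the current bit slice of the M inputs.
    c_to_plus_line[i]:  capacitance from input line i to output line j.
    c_to_minus_line[i]: capacitance from input line i to output line j+N."""
    q_plus = sum(c * (V_ONE if b else V_ZERO) for b, c in zip(bits, c_to_plus_line))
    q_minus = sum(c * (V_ONE if b else V_ZERO) for b, c in zip(bits, c_to_minus_line))
    return q_plus - q_minus  # what the j-th differential charge sensing amplifier reports
```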
  • 11. A system as set forth in claim 10 wherein said means for digitally accumulating said first through N.sup.th digital partial weighted summations to generate first through N.sup.th digital final weighted summations is connected to accumulate during each sign bit slice once in opposite sense to respective single accumulations during all other bit slices.
  • 12. A system as set forth in claim 1 wherein said means for digitally accumulating said first through N.sup.th digital partial weighted summations to generate first through N.sup.th digital final weighted summations is connected to accumulate always in the same sense during sign bit slices as during all other bit slices; wherein said means for combining concurrently generated summands identified by the same ordinal numbers to generate first through N.sup.th partial weighted summations comprises
  • N output lines, identified by respective ones of consecutive ordinal numbers first through N.sup.th and arranged for receiving respective quantities of electric charge as manifestations of said first through N.sup.th summands respectively, and wherein said means for N-fold weighting of each successive bit of said first through Mth digital input signals to generate respective first through N.sup.th summands comprises:
  • means for generating a sign bit flag signal that has a first value only during a portion of each sign bit slice and that has a second value during all other bit slices;
  • M pairs of input lines, said input lines identified by respective ones of consecutive ordinal numbers first through 2M.sup.th, each pair including input lines identified by ordinal numbers M apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the input lines in said pair;
  • a respective column driver corresponding to each said pair of input lines, said respective column driver for each pair of input lines being identified by the same ordinal number as said pair of input lines with which it corresponds and responding to each successive bit of said first through Mth digital input signals for applying respective drive voltages on the input line identified by the corresponding ordinal number and on the input line identified by the ordinal number M higher, the values of said respective drive voltages so applied depending on whether said sign bit flag signal has its first or its second value; and
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2M.sup.th input lines and having its second plate connected to one of said first through N.sup.th output lines thereby to provide one of said capacitive elements.
  • 13. A system as set forth in claim 12 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its second value, for placing a first prescribed voltage level on said i.sup.th input line;
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO at the same time as said sign bit flag signal has its second value, for placing a second prescribed voltage level on said (M+i).sup.th input line;
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its first value, for placing said second prescribed voltage level on said i.sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO at the same time as said sign bit flag signal has its first value, for placing said first prescribed voltage level on said (M+i).sup.th input line.
  • 14. A system as set forth in claim 12 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means respectively responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its second value, for placing a first prescribed voltage level on said i.sup.th input line and placing a second prescribed voltage level on said (M+i).sup.th input line;
  • means respectively responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its first value, for placing said second prescribed voltage level on said i.sup.th input line and placing said first prescribed voltage level on said (M+i).sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO for placing a third prescribed voltage level, which is midway between said first and second prescribed voltage levels, on said i.sup.th and (M+i).sup.th input lines.
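Claims 12 through 14 avoid the sign-bit subtraction in the digital accumulator by inverting the drive polarity during the sign-bit slice, so every slice can be accumulated in the same sense. The sketch below shows one plausible column-driver behavior matching claim 14; the specific voltage levels are assumptions.

```python
# Sketch of the column-driver behavior of claim 14: the sign-bit flag inverts
# the drive polarity during the sign-bit slice, so the accumulator can add
# every slice in the same sense while the sign bit still enters with negative
# weight. Voltage levels are assumptions; V_MID is midway between the others.

V_LOW, V_MID, V_HIGH = 0.0, 2.5, 5.0

def drive_levels(bit, sign_bit_slice):
    """Return (voltage on input line i, voltage on input line M+i)."""
    if bit == 0:
        return V_MID, V_MID      # a ZERO contributes no differential charge
    if not sign_bit_slice:
        return V_HIGH, V_LOW     # ONE during an ordinary bit slice
    return V_LOW, V_HIGH         # ONE during the sign-bit slice: polarity reversed
```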
  • 15. A system as set forth in claim 1 wherein said means for combining concurrently generated summands identified by the same ordinal numbers to generate first through N.sup.th partial weighted summations comprises
  • N pairs of output lines, said output lines identified by respective ones of consecutive ordinal numbers first through 2N.sup.th, each pair of output lines including output lines identified by ordinal numbers N apart, each pair of output lines being identified by the same ordinal number as the lower of the ordinal numbers identifying the output lines in said pair, and each pair of output lines arranged for receiving respective quantities of electric charge the difference between which is a respective one of the manifestations of said first through N.sup.th summands; and wherein said means for N-fold weighting of each successive bit of said first through Mth digital input signals to generate respective first through N.sup.th summands comprises:
  • M pairs of input lines, said input lines identified by respective ones of consecutive ordinal numbers first through 2Mth, each pair including input lines identified by ordinal numbers M apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the input lines in said pair;
  • a respective column driver corresponding to each said pair of input lines, said respective column driver for each pair of input lines being identified by the same ordinal number as said pair of input lines with which it corresponds and responding to each successive bit of said first through Mth digital input signals for applying respective drive voltages on the input line identified by the corresponding ordinal number and on the input line identified by the ordinal number M higher; and
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2Mth input lines and having its second plate connected to one of said first through 2Nth output lines thereby to provide one of said capacitive elements; and wherein said means for sensing each of said first through N.sup.th partial summations, as manifested in terms of at least one respective quantity of electric charge, to generate first through N.sup.th analog signals respectively, comprises
  • a plurality N in number of differential charge sensing amplifiers identified by respective ones of consecutive ordinal numbers first through N.sup.th, each of said first through N.sup.th charge sensing means connected for sensing the differential charge between said output line identified by the same ordinal number it is and said output line identified by the ordinal number N higher, each of said first through N.sup.th charge sensing means responding to generate a respective result of performing N weighted summations of said first through M.sup.th single-bit digital input signals.
  • 16. A system as set forth in claim 15 wherein said means for digitally accumulating said first through N.sup.th digital partial weighted summations to generate first through N.sup.th digital final weighted summations is connected to accumulate during each sign bit slice once in opposite sense to respective single accumulations during all other bit slices.
  • 17. A neural net layer for generating first through N.sup.th digital axonal responses to first through Nth digital synapse input signals, N being an integer greater than one and the ordinal numbers first through Nth being consecutive, said neural net layer comprising:
  • means for supplying said first through Nth digital input signals in bit-serial form, each bit being expressed in terms of respective voltage levels for its being a ZERO or for its being a ONE;
  • means for N-fold weighting of each successive bit of said first through Nth digital input signals to generate respective first through N.sup.th forward-propagated summands as respective quantities of charge, which means is of a type including a network of capacitive elements and means for connecting them so their capacitances determine the weightings respectively applied to voltage levels expressing current bits of said first through N.sup.th digital input signals when generating said first through N.sup.th summands in accordance with Coulomb's Law;
  • means for combining concurrently generated forward-propagated summands identified by the same ordinal numbers to generate first through N.sup.th forward-propagated partial weighted summations as respective quantities of charge;
  • means for sensing each of said first through N.sup.th forward-propagated partial summations as respective quantities of charge to generate first through N.sup.th forward-propagated analog signals respectively;
  • means for digitizing said first through N.sup.th forward-propagated analog signals to generate first through N.sup.th digital forward-propagated partial weighted summations respectively;
  • means for digitally accumulating said first through N.sup.th digital forward-propagated partial weighted summations to generate first through N.sup.th digital forward-propagated final weighted summations; and
  • means for digitally non-linearly processing said first through N.sup.th digital forward-propagated final weighted summations each with a sigmoidal system function to generate first through N.sup.th digital axonal responses respectively.
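To see how the elements of claim 17 fit together, the following sketch models one forward pass of the layer entirely in software. The capacitive weighting, charge sensing, and analog-to-digital conversion of each bit slice are collapsed into an exact matrix-vector product, and a logistic function stands in for the piecewise-linear sigmoidal processor, so this is an idealization of the arithmetic rather than a model of the circuit; the weight values and word length are assumptions.

```python
import numpy as np

def sigmoid(x):
    """Stand-in for the digital piecewise-linear sigmoidal processor."""
    return 1.0 / (1.0 + np.exp(-x))

def layer_forward(words, weights, bits=8):
    """words: length-N int array of two's-complement synapse inputs (bit-serial).
    weights: N-by-N signed weight matrix (the role of the capacitive network).
    Returns the N digital axonal responses."""
    acc = np.zeros(weights.shape[0])
    for k in range(bits):                                  # one pass per bit slice
        bit_slice = (words >> k) & 1                       # current bit of every input word
        partial = weights @ bit_slice                      # partial weighted summations (idealized)
        scale = -(1 << k) if k == bits - 1 else (1 << k)   # sign-bit slice weighted negatively
        acc += scale * partial                             # digital accumulation
    return sigmoid(acc)                                    # digital non-linear processing

# Example call with assumed 4-bit words:
# layer_forward(np.array([3, -5, 7]), np.random.randn(3, 3), bits=4)
```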
  • 18. A neural net layer as set forth in claim 17 wherein said means for digitally non-linearly processing each of said first through N.sup.th digital final weighted summations with a sigmoidal system function to generate a respective digital axonal response consists of, for each j.sup.th one of said first through N.sup.th digital final weighted summations, a respective j.sup.th digital non-linear processor that comprises:
  • means for determining the absolute value of said j.sup.th weighted summation result in digital form;
  • a j.sup.th window comparator for determining into which of a plurality of amplitude ranges falls said absolute value of said j.sup.th weighted summation result in digital form;
  • means for selecting a j.sup.th digital intercept value in accordance with the range into which said absolute value of said j.sup.th weighted summation result in digital form is determined by said j.sup.th window comparator to fall;
  • means for selecting a j.sup.th digital slope value in accordance with the range into which said absolute value of said j.sup.th weighted summation result in digital form is determined by said j.sup.th window comparator to fall;
  • means for multiplying said absolute value of said j.sup.th weighted summation result in digital form by said selected digital slope value to generate a j.sup.th digital product;
  • means for adding said j.sup.th digital product and said j.sup.th digital intercept value to generate an absolute value representation of said j.sup.th digital axonal response; and
  • means for determining the polarity of said j.sup.th weighted summation result in digital form and assigning the same polarity to said j.sup.th absolute value representation of said digital axonal response, thereby to generate said j.sup.th digital axonal response.
  • 19. A plurality of neural net layers, each as set forth in claim 17, respectively identified by consecutive ordinal numbers zeroeth through L.sup.th, L being an integer larger than zero, the first through N.sup.th digital axonal responses of each of said neural net layers except the zeroeth determining respectively the first through N.sup.th digital input signals in bit-serial form of the one of said neural net layers identified by the next lower ordinal number.
  • 20. A system as set forth in claim 17 wherein said capacitive elements in said means for N-fold weighting comprise a plurality N.sup.2 in number of sets of digital capacitors, each of said sets of digital capacitors corresponding to a different combination of one of said first through Nth digital input signals and of one of said first through N.sup.th summands; wherein a memory is included in said system, said memory having a plurality N.sup.2 in number of word storage elements for temporarily storing respective binary codes written thereinto, at least certain ones of which binary codes are susceptible of being re-written from time to time; wherein the digital capacitors in each said set of digital capacitors have their values programmed by a binary code stored in a respective word storage element of said memory; wherein said neural net layer is of a type capable of being trained during a training period of time separate from a succeeding period of time of normal operation; and wherein said neural net layer includes training apparatus for rewriting during said training periods the respective binary codes temporarily stored in selected ones of said plurality of word storage elements, thereby adjusting the N-fold weighting provided by said N.sup.2 sets of digital capacitors.
  • 21. A system as set forth in claim 20 wherein said training apparatus comprises for each j.sup.th one of said first through N.sup.th axonal responses:
  • means for determining the slope of said sigmoidal system function associated with said j.sup.th axonal response for a prescribed set of said first through N.sup.th digital input signals;
  • means for supplying a respective j.sup.th input error signal in bit-serial form to said neural net layer;
  • a j.sup.th digital multiplier for multiplying said j.sup.th input error signal supplied in bit-serial form to said neural net layer, by the slope of said sigmoidal system function associated with said j.sup.th axonal response for a prescribed set of said first through N.sup.th digital input signals, thereby to generate a j.sup.th digital product in bit-serial form; and
  • means for back-propagating the successive bits of said j.sup.th digital product in bit-serial form through said N.sup.2 sets of digital capacitors via routes similar to those traversed by said j.sup.th summands.
  • 22. A system as set forth in claim 21 wherein said training apparatus comprises for each i.sup.th one of said first through N.sup.th digital input signals:
  • means for combining bits of said j.sup.th digital product in bit-serial form concurrently back-propagated through said N.sup.2 sets of digital capacitors via routes similar to those traversed by said j.sup.th summands to generate i.sup.th back-propagated partial summation results as respective quantities of charge;
  • means for sensing said i.sup.th back-propagated partial summations as respective quantities of charge to generate i.sup.th back-propagated analog signals;
  • means for digitizing said i.sup.th back-propagated analog signals to generate i.sup.th digital back-propagated partial weighted summations;
  • means for digitally accumulating said i.sup.th digital back-propagated partial weighted summations to generate an i.sup.th digital back-propagated final weighted summation; and
  • means for converting said i.sup.th digital back-propagated final weighted summation to bit-serial form, to be available as an i.sup.th output error signal.
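Claims 21 and 22 describe the error path of back-propagation training: the layer's input error is multiplied by the local slope of the sigmoid, pushed back through the same weighting capacitors, and re-accumulated to form the output error handed to the preceding layer. The sketch below shows the equivalent arithmetic only; it assumes a logistic sigmoid (so its slope has a closed form), uses the transpose of an ideal weight matrix in place of routing charge back through the capacitor array, and omits the bit-serial detail.

```python
import numpy as np

def back_propagated_errors(input_errors, final_weighted_sums, weights):
    """input_errors: the j-th input error signals supplied to this layer.
    final_weighted_sums: the forward-pass final weighted summations, used to
    evaluate the sigmoid slope. weights: the same N-by-N matrix used forward.
    Returns the i-th output error signals for the preceding layer."""
    s = 1.0 / (1.0 + np.exp(-final_weighted_sums))
    slope = s * (1.0 - s)               # slope of the (assumed logistic) sigmoid
    products = input_errors * slope     # the j-th digital products of claim 21
    return weights.T @ products         # back-propagation and accumulation of claim 22
```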
  • 23. A plurality of neural net layers, each as set forth in claim 22, respectively identified by consecutive ordinal numbers zeroeth through L.sup.th, L being an integer larger than zero, the first through N.sup.th digital axonal responses of each of said neural net layers except the zeroeth determining respectively the first through N.sup.th digital input signals in bit-serial form of the one of said neural net layers identified by the next lower ordinal number, and the first through N.sup.th output error signals of each of said neural net layers except the L.sup.th being supplied as the first through N.sup.th input error signals of the one of said neural net layers identified by the next higher ordinal number.
  • 24. A system as set forth in claim 17 wherein said capacitive elements in said means for N-fold weighting comprise
  • a plurality N.sup.2 in number of sets of digital capacitors considered as being arrayed in N columns and in N rows, each of said N.sup.2 sets of digital capacitors corresponding to the one of said first through Nth digital input signals identified by the same ordinal number as the column it is arrayed in, and each of said N.sup.2 sets of digital capacitors corresponding to the one of said first through N.sup.th summands identified by the same ordinal number as the row it is arrayed in; wherein
  • a plurality N.sup.2 in number of up-down binary counters is included in said system, those said up-down binary counters considered as being arrayed in N columns and in N rows; wherein the digital capacitors in each said set of digital capacitors have their values programmed by the count contained in the one of said up-down binary counters considered as being in the same column and row as that particular said set of digital capacitors; wherein each said up-down binary counter is selectively supplied with pulses to be counted; wherein there are
  • respective means for supplying a respective up/down command to control the direction of counting of each said up-down binary counter, each of which said respective means essentially consists of:
  • a respective two-input exclusive-OR gate supplying said respective up/down command from its output port, a first input port of said respective two-input exclusive-OR gate connecting to a column sign line identified by the same ordinal number as the column in which the up-down binary counter it supplies a respective up/down command is considered to be arrayed, and a second input port of said respective two-input exclusive-OR gate connecting to a row sign line identified by the same ordinal number as the row in which the up-down binary counter it supplies a respective up/down command is considered to be arrayed; wherein said neural net layer is of a type capable of being trained during a training period of time separate from a succeeding period of time of normal operation; and wherein said neural net layer includes training apparatus for altering during said training periods the respective binary counts temporarily stored in selected ones of said up-down binary counters, thereby adjusting the N-fold weighting provided by said N.sup.2 sets of digital capacitors.
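The exclusive-OR recited in claim 24 is the sign logic of the weight update: a counter steps one way when the stored input and the back-propagated error product have the same sign, and the other way when they differ. The truth-table sketch below makes that explicit; which XOR output is treated as "up" is an assumed convention for illustration.

```python
def count_direction(column_sign_bit: int, row_sign_bit: int) -> str:
    """Direction command for the up-down counter at one column/row crossing.
    column_sign_bit: sign of the stored input (column sign line).
    row_sign_bit: sign of the back-propagated error product (row sign line)."""
    # XOR is 0 when the signs agree and 1 when they differ; mapping 0 to "up"
    # is an assumed convention, not taken from the specification.
    return "up" if (column_sign_bit ^ row_sign_bit) == 0 else "down"

# Truth table: (0,0) -> up, (1,1) -> up, (0,1) -> down, (1,0) -> down.
```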
  • 25. A system as set forth in claim 24 wherein said training apparatus comprises for each j.sup.th one of said first through N.sup.th axonal responses:
  • means for determining the slope of said sigmoidal system function associated with said j.sup.th axonal response for a prescribed set of said first through N.sup.th digital input signals at the beginning of each training cycle;
  • means for supplying a respective j.sup.th input error signal in bit-serial form to said neural net layer during each training cycle;
  • a j.sup.th digital multiplier for multiplying said j.sup.th input error signal supplied in bit-serial form to said neural net layer, by the slope of said sigmoidal system function associated with said j.sup.th axonal response for a prescribed set of said first through N.sup.th digital input signals, thereby to generate a j.sup.th digital product in bit-serial form during each training cycle;
  • means for back-propagating the successive bits of said j.sup.th digital product in bit-serial form through said N.sup.2 sets of digital capacitors via routes similar to those traversed by said j.sup.th summands; and
  • means for applying the sign bit of said j.sup.th digital product to said j.sup.th row sign line during each training cycle.
  • 26. A system as set forth in claim 25 wherein said training apparatus comprises for each i.sup.th one of said first through N.sup.th digital input signals:
  • means for combining bits of said j.sup.th digital product in bit-serial form concurrently back-propagated through said N.sup.2 sets of digital capacitors via routes similar to those traversed by said j.sup.th summands to generate i.sup.th back-propagated partial summation results during each training cycle as respective quantities of charge;
  • means for sensing said i.sup.th back-propagated partial summations as respective quantities of charge to generate i.sup.th back-propagated analog signals during each training cycle;
  • means for digitizing said i.sup.th back-propagated analog signals to generate i.sup.th digital back-propagated partial weighted summations during each training cycle;
  • means for digitally accumulating said i.sup.th digital back-propagated partial weighted summations to generate an i.sup.th digital back-propagated final weighted summation during each training cycle;
  • means for determining the sign bit of said i.sup.th digital input signal in said prescribed set of said first through N.sup.th digital input signals at the beginning of each training cycle;
  • means for applying the sign bit of said i.sup.th digital input signal in said prescribed set of said first through N.sup.th digital input signals to said i.sup.th column sign line throughout the remainder of each training cycle; and
  • means for converting said i.sup.th digital back-propagated final weighted summation to bit-serial form, to be available as an i.sup.th output error signal.
  • 27. A plurality of neural net layers, each as set forth in claim 26, respectively identified by consecutive ordinal numbers zeroeth through L.sup.th, L being an integer larger than zero, the first through N.sup.th digital axonal responses of each of said neural net layers except the zeroeth determining respectively the first through N.sup.th digital input signals in bit-serial form of the one of said neural net layers identified by the next lower ordinal number, and the first through N.sup.th output error signals of each of said neural net layers except the L.sup.th being supplied as the first through N.sup.th input error signals of the one of said neural net layers identified by the next higher ordinal number.
  • 28. A system for performing N weighted summations of single-bit digital input signals, M in number, identified by respective ones of consecutive ordinal numbers first through Mth, N being an integer greater than zero and M being an integer greater than one, said system comprising:
  • means for supplying, in parallel, said first through Mth single-bit digital input signals;
  • M pairs of input lines, said input lines identified by respective ones of consecutive ordinal numbers first through 2Mth, each pair including input lines identified by ordinal numbers M apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the input lines in said pair;
  • a respective column driver corresponding to each said pair of input lines, said respective column driver for each pair of input lines being identified by the same ordinal number as said pair of input lines with which it corresponds and responding to the one of said first through Mth single-bit digital input signals identified by the corresponding ordinal number for applying respective drive voltages on the input line identified by the corresponding ordinal number and on the input line identified by the ordinal number M higher;
  • N output lines identified by respective ones of consecutive ordinal numbers first through Nth;
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2Mth input lines and having its second plate connected to one of said first through Nth output lines;
  • N output-line-charge sensing means identified by respective ones of consecutive ordinal numbers first through N.sup.th, each said output-line-charge sensing means connected for sensing the charge on said output line identified by the same ordinal number it is and generating respective results of performing first through N.sup.th weighted summations of said first through M.sup.th single-bit digital input signals; and
  • analog-to-digital converting means for digitizing the respective results of performing first through N.sup.th weighted summations of said first through M.sup.th single-bit digital input signals generated by said first through N.sup.th output-line-charge sensing means, to obtain first through N.sup.th digital weighted summations of said first through M.sup.th single-bit digital input signals.
  • 29. A system as set forth in claim 28 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to said i.sup.th single-bit digital input signal being a ONE, for placing a first prescribed voltage level on the i.sup.th input line and placing a second prescribed voltage level on the (M+i).sup.th input line; and
  • means responding to said i.sup.th single-bit digital input signal being a ZERO, for placing said second prescribed voltage level on the input line identified by the corresponding ordinal number and placing said first prescribed voltage level on the input line identified by the ordinal number M higher.
  • 30. A system as set forth in claim 29 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 31. A system as set forth in claim 29 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 28, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit input signals are partial weighted summations; and
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively.
  • 32. A system as set forth in claim 31 wherein said first through Mth P-bit signals comprise digital arithmetic words that are two's complement signed numbers each having a respective sign bit and at least one other bit; and wherein said means for accumulating the P successive digitized results of each of the first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively, accumulates the partial weighted summation generated from the sign bits of said first through Mth P-bit signals once in the opposite sense to respective single accumulations of the partial weighted summations respectively generated from the other bits of said first through Mth P-bit signals.
  • 33. A neural net layer comprising the system set forth in claim 29 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 28, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital input signals are partial weighted summations;
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively; and
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th final weighted summations of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 34. A system as set forth in claim 28 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE, for placing a first prescribed voltage level on the i.sup.th input line and placing a second prescribed voltage level on the (M+i).sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO, for placing a third voltage level both on the i.sup.th input line and on the (M+i).sup.th input line, said third voltage level being midway between said first and second prescribed voltage levels.
  • 35. A system as set forth in claim 34 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 36. A system as set forth in claim 34 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 34, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital input signals are partial weighted summations; and
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively.
  • 37. A system as set forth in claim 36 wherein said first through Mth P-bit signals comprise digital arithmetic words that are two's complement signed numbers each having a respective sign bit and at least one other bit; and wherein said means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively, accumulates the partial weighted summation generated from the sign bits of said first through Mth P-bit signals once in the opposite sense to respective single accumulations of the partial weighted summations respectively generated from the other bits of said first through Mth P-bit signals.
  • 38. A system as set forth in claim 34 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 34, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital input signals are partial weighted summations;
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively; and
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th final weighted summations of said first through M.sup.th single-bit digital signals to generate a respective digital axonal response.
  • 39. A system as set forth in claim 28 including:
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th digitized results of performing a weighted summation of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 40. A system as set forth in claim 39 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 41. A system for performing N weighted summations of single-bit digital input signals, M in number, identified by respective ones of consecutive ordinal numbers first through M.sup.th, N being an integer greater than zero and M being an integer greater than one, said system comprising:
  • N pairs of output lines, said output lines identified by respective ones of consecutive ordinal numbers first through 2N.sup.th, each pair including output lines identified by ordinal numbers N apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the output lines in said pair;
  • M input lines, said input lines identified by respective ones of consecutive ordinal numbers first through M.sup.th ;
  • respective means respectively responding to said first through M.sup.th single-bit digital input signals being ZERO, for placing a first prescribed voltage level on the input line identified by the corresponding ordinal number;
  • respective means respectively responding to said first through M.sup.th single-bit digital input signals being ONE, for placing a second voltage level on the input line identified by the corresponding ordinal number;
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2N.sup.th output lines and having its second plate connected to one of said first through M.sup.th input lines;
  • N charge sensing means identified by respective ones of consecutive ordinal numbers first through N.sup.th, each of said first through N.sup.th charge sensing means connected for sensing the differential charge between said output line identified by the same ordinal number it is and said output line identified by the ordinal number N higher, each of said first through N.sup.th charge sensing means responding to generate a respective result of performing N weighted summations of said first through M.sup.th single-bit digital input signals;
  • means for periodically zeroing said system during which periods of zeroing said system said first prescribed voltage level is applied to the first and second plates of each of said plurality of weighting capacitors; and
  • analog-to-digital converting means for digitizing the respective results of performing first through N.sup.th weighted summations of said first through M.sup.th single-bit digital input signals generated by said first through N.sup.th charge sensing means, to obtain first through N.sup.th digitized weighted summations of said first through M.sup.th single-bit digital input signals.
  • 42. A system as set forth in claim 41 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 43. A system as set forth in claim 41 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 41, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital signals are partial weighted summations; and
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively.
  • 44. A system as set forth in claim 43 wherein said first through Mth P-bit signals comprise digital arithmetic words that are two's complement signed numbers each having a respective sign bit and at least one other bit; and wherein said means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively, accumulates the partial weighted summations generated from the sign bits of said first through Mth P-bit signals once in the opposite sense to respective single accumulations of the partial weighted summations respectively generated from the other bits of said first through Mth P-bit signals.
  • 45. A system as set forth in claim 41 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 41, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital input signals are partial weighted summations;
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively; and
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th final weighted summations of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 46. A system as set forth in claim 41 including:
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th digitized results of performing a weighted summation of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 47. A system as set forth in claim 46 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 48. A system for performing N weighted summations of P-bit digital input signals, M in number, identified by respective ones of consecutive ordinal numbers first through Mth, N being an integer greater than zero, M being an integer greater than one, and P being an integer greater than one that may differ from M, said system comprising:
  • means for bit-slicing said first through M.sup.th P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through M.sup.th single-bit digital input signals;
  • means for generating a sign bit flag signal that has a first value only during sign bit slices and that has a second value during all other bit slices;
  • M pairs of input lines, said input lines identified by respective ones of consecutive ordinal numbers first through 2Mth, each pair including input lines identified by ordinal numbers M apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the input lines in said pair;
  • a respective column driver corresponding to each said pair of input lines, said respective column driver for each pair of input lines being identified by the same ordinal number as said pair of input lines with which it corresponds and responding to each successive bit of said first through Mth digital input signals for applying respective drive voltages on the input line identified by the corresponding ordinal number and on the input line identified by the ordinal number M higher, the values of said respective drive voltages so applied depending on whether said sign bit flag signal has its first or its second value;
  • N output lines identified by respective ones of consecutive ordinal numbers first through Nth;
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2Mth input lines and having its second plate connected to one of said first through Nth output lines;
  • N output-line-charge sensing means identified by respective ones of consecutive ordinal numbers first through N.sup.th, each said output-line-charge sensing means connected for sensing the charge on said output line identified by the same ordinal number it is and generating respective results of performing first through N.sup.th weighted summations of said first through M.sup.th single-bit digital input signals; and
  • analog-to-digital converting means for digitizing the respective results of performing first through N.sup.th weighted summations of said first through M.sup.th single-bit digital input signals generated by said first through N.sup.th output-line-charge sensing means, to obtain first through N.sup.th digitized partial weighted summations of said first through M.sup.th single-bit digital input signals; and
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively.
  • 49. A system as set forth in claim 48 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its second value, for placing a first prescribed voltage level on the i.sup.th input line;
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO at the same time as said sign bit flag signal has its second value, for placing a second prescribed voltage level on the (M+i).sup.th input line;
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its first value, for placing said second prescribed voltage level on the i.sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO at the same time as said sign bit flag signal has its first value, for placing said first prescribed voltage level on the (M+i).sup.th input line.
  • 50. A system as set forth in claim 49 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 51. A neural net layer comprising the system set forth in claim 49 in combination with:
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th final weighted summations of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 52. A system as set forth in claim 48 wherein each i.sup.th one of said first through M.sup.th column drivers respectively comprises:
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its second value, for placing a first prescribed voltage level on the i.sup.th input line and placing a second prescribed voltage level on the (M+i).sup.th input line;
  • means responding to each successive bit of said i.sup.th digital input signal that is a ONE at the same time as said sign bit flag signal has its first value, for placing said second prescribed voltage level on the i.sup.th input line and placing said first prescribed voltage level on the (M+i).sup.th input line; and
  • means responding to each successive bit of said i.sup.th digital input signal that is a ZERO for placing a third prescribed voltage level, which is midway between said first and second prescribed voltage levels, on the i.sup.th input line and on the (M+i).sup.th input line.
  • 53. A system as set forth in claim 52 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 54. A system as set forth in claim 52 in combination with:
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th final weighted summations of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 55. A system as set forth in claim 48 including:
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th digitized results of performing a weighted summation of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 56. A system as set forth in claim 55 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 57. A system for performing N weighted summations of single-bit digital input signals, M in number, identified by respective ones of consecutive ordinal numbers first through Mth, N being an integer greater than zero and M being an integer greater than one, said system comprising:
  • means for supplying, in parallel, said first through Mth single-bit digital input signals;
  • M pairs of input lines, said input lines identified by respective ones of consecutive ordinal numbers first through 2Mth, each pair including input lines identified by ordinal numbers M apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the input lines in said pair;
  • a respective column driver corresponding to each said pair of input lines, said respective column driver for each pair of input lines being identified by the same ordinal number as said pair of input lines with which it corresponds and responding to the one of said first through Mth single-bit digital input signals identified by the corresponding ordinal number for applying respective drive voltages on the input line identified by the corresponding ordinal number and on the input line identified by the ordinal number M higher;
  • N pairs of output lines, said output lines identified by respective ones of consecutive ordinal numbers first through 2N.sup.th, each pair including output lines identified by ordinal numbers N apart and each pair being identified by the same ordinal number as the lower of the ordinal numbers identifying the output lines in said pair;
  • a plurality of weighting capacitors having respective first plates and second plates, each of said weighting capacitors having its first plate connected to one of said first through 2N.sup.th output lines and having its second plate connected to one of said first through 2M.sup.th input lines;
  • N charge sensing means identified by respective ones of consecutive ordinal numbers first through N.sup.th, each of said first through N.sup.th charge sensing means connected for sensing the differential charge between said output line identified by the same ordinal number it is and said output line identified by the ordinal number N higher, each of said first through N.sup.th charge sensing means responding to generate a respective result of performing N weighted summations of said first through M.sup.th single-bit digital input signals;
  • means for periodically zeroing said system during which periods of zeroing said system said first prescribed voltage level is applied to the first and second plates of each of said plurality of weighting capacitors; and
  • analog-to-digital converting means for digitizing the respective results of performing first through N.sup.th weighted summations of said first through M.sup.th single-bit digital input signals generated by said first through N.sup.th charge sensing means, to obtain first through N.sup.th digitized weighted summations of said first through M.sup.th single-bit digital input signals.
  • 58. A system as set forth in claim 57 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 59. A system as set forth in claim 57 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 57, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital input signals are partial weighted summations; and
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively.
  • 60. A system as set forth in claim 59 wherein said first through Mth P-bit signals comprise digital arithmetic words that are two's complement signed numbers each having a respective sign bit and at least one other bit; and wherein said means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively, accumulates the partial weighted summation generated from the sign bits of said first through Mth P-bit signals once in the opposite sense to respective single accumulations of the partial weighted summations respectively generated from the other bits of said first through Mth P-bit signals.
  • 61. A neural net layer comprising the system set forth in claim 57 in combination with:
  • means for bit-slicing first through Mth P-bit signals, P being an integer more than one, to obtain respective sets of P successive first through Mth single-bit digital input signals for application to said system as set forth in claim 57, whereby the P successive first through N.sup.th digitized results of weighted summations arising from a respective set of P successive first through M.sup.th single-bit digital input signals are partial weighted summations;
  • means for accumulating the P successive digitized results of each of said first through N.sup.th partial weighted summations respectively, to generate first through N.sup.th final weighted summations respectively; and
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th final weighted summations of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 62. A neural net layer comprising the system set forth in claim 61 in combination with:
  • means for digitally processing, with a sigmoidal system characteristic, each of said first through N.sup.th digitized results of performing a weighted summation of said first through M.sup.th single-bit digital input signals to generate a respective digital axonal response.
  • 63. A system as set forth in claim 62 wherein each of said weighting capacitors is of a type that has a capacitance that is adjustable in accordance with a binary number programming signal.
  • 64. In a neural net comprising a plurality of neural net layers identified by respective ones of consecutive ordinal numbers zeroeth through L.sup.th, the digital axonal output signals of each of said neural net layers except the zeroeth being supplied as respective ones of the digital synapse input signals of said neural net layer identified by next lower ordinal number, each of said neural net layers including respective means for bit-slicing respective digital synapse input signals supplied thereto, each of said neural net layers having for each of its digital axonal output signals a respective capacitive weighting network for generating an analog electric signal descriptive of a respective weighted summation of each bit-slice of its digital synapse input signals, the improvement wherein each neural net layer further comprises:
  • respective means corresponding to each said capacitive weighting network in that said neural net layer, for digitizing each successive said analog electric signal generated thereby to generate a corresponding digital response to each bit-slice of the digital synapse input signals to that said neural net layer;
  • a respective digital accumulator corresponding to each said capacitive weighting network in that said neural net layer for accumulating with appropriate weighting the corresponding digital responses to the bit-slices of the digital synapse input signals to that said neural net layer, said respective digital accumulator being reset to zero accumulation after the bit slices in each successive word of the digital synapse input signals are all accumulated to generate a respective final accumulation; and
  • respective means corresponding to each digital accumulator for digitally non-linearly processing its respective final accumulation to generate a respective one of said digital axonal output signals of that said neural net layer.
  • 65. An improvement as set forth in claim 64 wherein each said respective means corresponding to each digital accumulator for digitally non-linearly processing its respective final accumulation to generate a respective one of said digital axonal output signals of that said neural net layer comprises:
  • means for determining the absolute value of the final accumulation in digital form of the corresponding digital accumulator;
  • a respective window comparator for determining into which of a plurality of amplitude ranges falls said absolute value of the final accumulation in digital form of the corresponding digital accumulator;
  • means for selecting a respective digital intercept value in accordance with the range into which said absolute value of the final accumulation in digital form of the corresponding digital accumulator is determined by its said respective window comparator to fall;
  • means for selecting a respective digital slope value in accordance with the range into which said absolute value of the final accumulation in digital form of the corresponding digital accumulator is determined by its said respective window comparator to fall;
  • means for multiplying said absolute value of the final accumulation in digital form of the corresponding digital accumulator by said respective selected digital slope value to generate a respective digital product;
  • means for adding said respective digital product and said respective digital intercept value to generate an absolute value representation of said respective one of said digital axonal responses; and
  • means for determining the polarity of its final accumulation in digital form and assigning the same polarity to said absolute value representation of said respective one of said digital axonal responses, thereby to generate said respective one of said digital axonal responses.
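As an editorial illustration only, not part of the claims: claim 65 recites a piecewise-linear digital approximation of the sigmoidal response, taking the absolute value of the final accumulation, selecting an amplitude range with a window comparator, selecting a slope and an intercept for that range, multiplying and adding, and restoring the polarity. The Python sketch below models that data path; the particular breakpoints, slopes, and intercepts are illustrative assumptions, not values taken from the patent.

```python
# Behavioral sketch of claim 65's non-linear processing path.
# Breakpoints, slopes, and intercepts are illustrative (in units of 1/64 of full scale).

RANGES     = [4, 8, 16]          # window-comparator breakpoints on |accumulation|
SLOPES     = [8, 4, 1, 0]        # selected digital slope per range
INTERCEPTS = [0, 16, 40, 56]     # selected digital intercept per range

def window_comparator(abs_acc):
    """Return the index of the amplitude range into which |accumulation| falls."""
    for i, limit in enumerate(RANGES):
        if abs_acc < limit:
            return i
    return len(RANGES)

def digital_axonal_response(final_accumulation):
    """Piecewise-linear sigmoid-like response: |x| * slope + intercept, polarity restored."""
    abs_acc = abs(final_accumulation)
    r = window_comparator(abs_acc)
    magnitude = abs_acc * SLOPES[r] + INTERCEPTS[r]       # multiply, then add intercept
    return magnitude if final_accumulation >= 0 else -magnitude

if __name__ == "__main__":
    for x in (-20, -6, -1, 0, 2, 5, 12, 30):
        print(x, digital_axonal_response(x))
```

With these illustrative values the response is continuous at each breakpoint and saturates for large accumulations, which is the qualitative behavior a sigmoidal system function requires.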
  • 66. In a neural net capable of being trained during a training period of time separate from a succeeding period of time of normal operation, said neural net comprising a plurality of neural net layers identified by respective ones of consecutive ordinal numbers zeroeth through L.sup.th, the digital axonal output signals of each of said neural net layers except the zeroeth being supplied as respective ones of the digital synapse input signals of said neural net layer identified by next lower ordinal number, each of said neural net layers including respective means for bit-slicing respective digital synapse input signals supplied thereto, each of said neural net layers having for each of its digital axonal output signals a respective capacitive weighting network including a respective set of digital weighting capacitors for each of its respective digital synapse input signals supplied thereto, each said respective capacitive weighting network for generating an analog electric signal descriptive of a respective weighted summation of each bit-slice of its digital synapse input signals, each of said neural net layers including a respective memory for temporarily storing respective binary codes written into word storage elements thereof for programming the values of the digital capacitors in each said set of digital capacitors, said neural net including training apparatus for re-writing during said training periods the respective binary codes temporarily stored in selected ones of said word storage elements of the respective memories of said neural net layers thereby to adjust the weighting provided by said sets of digital capacitors in their capacitive weighting networks, the improvement--wherein each neural net layer further comprises:
  • respective means corresponding to each said capacitive weighting network in that said neural net layer, for digitizing each successive said analog electric signal generated thereby to generate a corresponding digital response to each bit-slice of the digital synapse input signals to that said neural net layer;
  • a respective digital accumulator corresponding to each said capacitive weighting network in that said neural net layer for accumulating with appropriate weighting the corresponding digital responses to the bit-slices of the digital synapse input signals to that said neural net layer, said respective digital accumulator being reset to zero accumulation after the bit slices in each successive word of the digital synapse input signals are all accumulated to generate a respective final accumulation; and
  • respective means corresponding to each digital accumulator for digitally non-linearly processing its respective final accumulation to generate a respective one of said digital axonal output signals of that said neural net layer--wherein each said respective means corresponding to each digital accumulator for digitally non-linearly processing its respective final accumulation to generate a respective one of said digital axonal output signals of that said neural net layer comprises:
  • means for determining the absolute value of the final accumulation in digital form of the corresponding digital accumulator;
  • a respective window comparator for determining into which of a plurality of amplitude ranges falls said absolute value of the final accumulation in digital form of the corresponding digital accumulator;
  • means for selecting a respective digital intercept value in accordance with the range into which said absolute value of the final accumulation in digital form of the corresponding digital accumulator is determined by its said respective window comparator to fall;
  • means for selecting a respective digital slope value in accordance with the range into which said absolute value of the final accumulation in digital form of the corresponding digital accumulator is determined by its said respective window comparator to fall;
  • means for multiplying said absolute value of the final accumulation in digital form of the corresponding digital accumulator by said respective selected digital slope value to generate a respective digital product;
  • means for adding said respective digital product and said respective digital intercept value to generate an absolute value representation of said respective one of said digital axonal responses; and
  • means for determining the polarity of its final accumulation in digital form and assigning the same polarity to said absolute value representation of said respective one of said digital axonal responses, thereby to generate said respective one of said digital axonal responses--and wherein said training apparatus comprises for each respective one of said digital axonal responses of each of said zeroeth through L.sup.th neural net layers:
  • respective means for determining the slope of said sigmoidal system function associated with that said digital axonal response, as selected for a prescribed set of digital synapse input signals to said L.sup.th neural net layer;
  • means for supplying a respective input error signal in bit-serial form to said neural net layer generating that respective one of said digital axonal responses;
  • a respective digital multiplier for multiplying said respective input error signal supplied in bit-serial form to said neural net layer generating that respective one of said digital axonal responses, by the slope of said sigmoidal system function associated with that respective one of said digital axonal responses for said prescribed set of digital synapse input signals to said L.sup.th neural net layer, thereby to generate a respective digital product in bit-serial form; and
  • means for back-propagating the successive bits of said respective digital product in bit-serial form through said weighting capacitive networks via routes traversed by the signals generating that respective one of said digital axonal responses.
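As an editorial illustration only, not part of the claims: the training apparatus of claim 66 multiplies each input error signal by the slope of the sigmoidal system function at the operating point set by the prescribed inputs, and back-propagates the product through the same weighting network. The Python sketch below is a conventional dense-layer model of that step, substituting a smooth sigmoid and floating-point weights for the bit-serial, capacitive hardware; all names are illustrative.

```python
# Behavioral sketch of the back-propagation step in claim 66, modeled with a smooth
# sigmoid and floating-point weights rather than bit-serial, capacitive hardware.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_slope(x):
    """Slope of the sigmoidal system function at the operating point x."""
    s = sigmoid(x)
    return s * (1.0 - s)

def back_propagate(weights, summations, output_errors):
    """Multiply each output error by the sigmoid slope at its summation, then send
    the products back through the weighting network (transposed weight routes)."""
    deltas = [e * sigmoid_slope(s) for e, s in zip(output_errors, summations)]
    num_inputs = len(weights[0])
    return [sum(weights[j][i] * deltas[j] for j in range(len(weights)))
            for i in range(num_inputs)]

if __name__ == "__main__":
    weights    = [[0.5, -0.2, 0.1],      # one row of weights per axonal output
                  [0.3,  0.8, -0.4]]
    summations = [0.7, -1.2]             # final weighted summations for the prescribed inputs
    errors     = [0.05, -0.10]           # input error signals to this layer
    print(back_propagate(weights, summations, errors))
```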
  • 67. A system for performing N weighted summations of digital input signals that are M in number and are identified by respective ones of consecutive ordinal numbers first through M.sup.th, N being an integer greater than zero and M being an integer greater than one, said system comprising:
  • a plurality N in number of weighted summation networks each of similar type operative in the analog regime, each connected for generating a respective analog weighted summation signal responsive to the same plurality M in number of analog signals, said weighted summation networks identified by respective ones of consecutive ordinal numbers first through N.sup.th, the respective analog weighted summation signal of each weighted summation network being identified by the same ordinal number as that said weighted summation network generating it;
  • means for supplying said first through M.sup.th digital input signals in bit-serial form, with B bits per successive arithmetic word and with their respective arithmetic words in temporal alignment with each other, B being an integer more than one, each bit being expressed in terms of respective voltage levels for its being a ZERO or for its being a ONE, applied to each of said plurality of weighted summation networks as a respective one of said plurality M in number of analog signals;
  • means for digitizing the B successive sets of said first through N.sup.th analog weighted summation signals generated by each temporally aligned set of arithmetic words, to generate first through N.sup.th digital partial weighted summations respectively; and
  • means for digitally accumulating said first through N.sup.th digital partial weighted summations with appropriate respective weighting to generate first through N.sup.th digital final weighted summations.
  • 68. A neural net layer comprising a system as set forth in claim 67 for performing N weighted summations of digital input signals that are M in number, in combination with:
  • means for digitally non-linearly processing each of said first through N.sup.th digital final weighted summations with a sigmoidal system function to generate a respective digital axonal response.
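As an editorial illustration only, not part of the claims: claims 67 and 68 describe the same bit-slice arrangement from the system side: M bit-serial inputs with B bits per word in temporal alignment, N weighted summation networks producing one partial result per bit position, digital accumulation with weights of two to the bit position, and a digital sigmoidal non-linearity per final summation. The Python sketch below is a minimal behavioral model that treats the words as unsigned for brevity (the sign-bit handling of claim 60 would apply to two's-complement words) and uses a smooth sigmoid as the non-linear stage; names and the scaling constant are illustrative.

```python
# Minimal behavioral model of the neural net layer of claims 67-68: N weighted
# summations of M bit-serial inputs (B bits per word, unsigned here for brevity),
# digital accumulation per bit position, then a sigmoidal non-linearity.
import math

def layer_response(words, weight_matrix, num_bits):
    """words: M unsigned inputs; weight_matrix: N rows of M weights each."""
    finals = [0] * len(weight_matrix)
    for b in range(num_bits):                       # one bit slice per pass
        bit_slice = [(w >> b) & 1 for w in words]
        for n, row in enumerate(weight_matrix):     # N weighted summation networks
            partial = sum(wt * bit for wt, bit in zip(row, bit_slice))
            finals[n] += partial << b               # accumulate with weight 2**b
    # digital non-linear processing with a sigmoidal system function (scale is illustrative)
    return [1.0 / (1.0 + math.exp(-f / 8.0)) for f in finals]

if __name__ == "__main__":
    words = [5, 3, 9]                               # first through M.sup.th inputs, B = 4
    wts   = [[1, -2, 1], [0, 1, -1]]                # N = 2 sets of weights
    print(layer_response(words, wts, 4))
```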
US Referenced Citations (10)
Number Name Date Kind
4156284 Engeler May 1979
4807168 Moopenn et al. Feb 1989
4866645 Lish Sep 1989
4873661 Tsividis Oct 1989
4896287 O'Donnell et al. Jan 1990
4903226 Tsividis Feb 1990
4931674 Kub et al. Jun 1990
4941122 Weideman Jul 1990
4942367 Milkovic Jul 1990
4988891 Mashiko Jan 1991
Non-Patent Literature Citations (3)
Entry
Eberhardt et al., "Design of Parallel Hardware Neural Network Systems from Custom Analog VLSI Building Block Chips", IJCNN '89, pp. II-183-II-190.
Schwartz et al., "A Programmable Analog Neural Network Chip", IEEE Jour. Solid-State Circuits, vol. 24(3), 1989, pp. 688-697.
Kub et al., "Programmable Analog Vector-Matrix Multipliers", IEEE Jour. Solid-State Circuits, vol. 25(1), 1990, pp. 207-214.