The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses, systems, and methods for bit string operations in memory.
Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., host data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.
Memory devices may be coupled to a host (e.g., a host computing device) to store data, commands, and/or instructions for use by the host while the computer or electronic system is operating. For example, data, commands, and/or instructions can be transferred between the host and the memory device(s) during operation of a computing or other electronic system.
Systems, apparatuses, and methods related to bit string operations in memory are described. The bit string operations may be performed within a memory array without transferring the bit strings or intermediate results of the operations to circuitry external to the memory array. For instance, sensing circuitry that can include a sense amplifier and a compute component can be coupled to a memory array. A controller can be coupled to the sensing circuitry and can be configured to cause one or more bit strings that are formatted according to a universal number format or a posit format to be transferred from the memory array to the sensing circuitry. The sensing circuitry can perform an arithmetic operation, a logical operation, or both using the one or more bit strings.
Computing systems may perform a wide range of operations that can include various calculations, which can require differing degrees of accuracy. However, computing systems have a finite amount of memory in which to store operands on which calculations are to be performed. In order to facilitate performance of operations on operands stored by a computing system within the constraints imposed by finite memory resources, operands can be stored in particular formats. One such format is referred to as the “floating-point” format, or “float,” for simplicity (e.g., the IEEE 754 floating-point format).
Under the floating-point standard, bit strings (e.g., strings of bits that can represent a number), such as binary number strings, are represented in terms of three sets of integers or sets of bits—a set of bits referred to as a “base,” a set of bits referred to as an “exponent,” and a set of bits referred to as a “mantissa” (or significand). The sets of integers or bits that define the format in which a binary number string is stored may be referred to herein as a “numeric format,” or “format,” for simplicity. For example, the three sets of integers or bits described above (e.g., the base, exponent, and mantissa) that define a floating-point bit string may be referred to as a format (e.g., a first format). As described in more detail below, a posit bit string may include four sets of integers or sets of bits (e.g., a sign, a regime, an exponent, and a mantissa), which may also be referred to as a “numeric format,” or “format” (e.g., a second format). In addition, under the floating-point standard, two infinities (e.g., +∞ and −∞) and/or two kinds of “NaN” (not-a-number), a quiet NaN and a signaling NaN, may be included in a bit string.
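By way of illustration only, the following sketch (a minimal Python example, assuming the standard IEEE 754 single-precision layout of one sign bit, eight exponent bits, and twenty-three mantissa bits) shows how a 32-bit floating-point bit string divides into fixed-width bit sub-sets:

```python
import struct

def float32_fields(value):
    """Split an IEEE 754 single-precision float into its sign,
    exponent, and mantissa bit sub-sets."""
    bits = int.from_bytes(struct.pack(">f", value), "big")
    sign = bits >> 31                # 1 sign bit
    exponent = (bits >> 23) & 0xFF   # 8 exponent bits (biased by 127)
    mantissa = bits & 0x7FFFFF       # 23 mantissa (significand) bits
    return sign, exponent, mantissa

# Example: 1.5 -> sign 0, biased exponent 127, mantissa 0b100...0
print(float32_fields(1.5))
```

In contrast to the fixed sub-set widths shown above, the widths of the sub-sets of a posit bit string can vary, as described in more detail below.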
The floating-point standard has been used in computing systems for a number of years and defines arithmetic formats, interchange formats, rounding rules, operations, and exception handling for computation carried out by many computing systems. Arithmetic formats can include binary and/or decimal floating-point data, which can include finite numbers, infinities, and/or special NaN values. Interchange formats can include encodings (e.g., bit strings) that may be used to exchange floating-point data. Rounding rules can include a set of properties that may be satisfied when rounding numbers during arithmetic operations and/or conversion operations. Floating-point operations can include arithmetic operations and/or other computational operations such as trigonometric functions. Exception handling can include indications of exceptional conditions, such as division by zero, overflows, etc.
An alternative format to floating-point is referred to as a “universal number” (unum) format. There are several forms of unum formats—Type I unums, Type II unums, and Type III unums, which can be referred to as “posits” and/or “valids.” Type I unums are a superset of the IEEE 754 standard floating-point format that use a “ubit” at the end of the mantissa to indicate whether a real number is an exact float or lies in the interval between adjacent floats. The sign, exponent, and mantissa bits in a Type I unum take their definition from the IEEE 754 floating-point format; however, the length of the exponent and mantissa fields of Type I unums can vary dramatically, from a single bit to a maximum user-definable length. By taking the sign, exponent, and mantissa bits from the IEEE 754 standard floating-point format, Type I unums can behave similarly to floating-point numbers; however, the variable bit length exhibited in the exponent and fraction bits of the Type I unum can require additional management in comparison to floats.
Type II unums are generally incompatible with floats; however, Type II unums can permit a clean, mathematical design based on projected real numbers. A Type II unum can include n bits and can be described in terms of a “u-lattice” in which quadrants of a circular projection are populated with an ordered set of 2^(n−3)−1 real numbers. The values of the Type II unum can be reflected about an axis bisecting the circular projection such that positive values lie in an upper right quadrant of the circular projection, while their negative counterparts lie in an upper left quadrant of the circular projection. The lower half of the circular projection representing a Type II unum can include reciprocals of the values that lie in the upper half of the circular projection. Type II unums generally rely on a look-up table for most operations. As a result, the size of the look-up table can limit the efficacy of Type II unums in some circumstances. However, Type II unums can provide improved computational functionality in comparison with floats under some conditions.
The Type III unum format is referred to herein as a “posit format” or, for simplicity, a “posit.” In contrast to floating-point bit strings, posits can, under certain conditions, allow for higher precision (e.g., a broader dynamic range, higher resolution, and/or higher accuracy) than floating-point numbers with the same bit width. This can allow for operations performed by a computing system to be performed at a higher rate (e.g., faster) when using posits than with floating-point numbers, which, in turn, can improve the performance of the computing system by, for example, reducing a number of clock cycles used in performing operations thereby reducing processing time and/or power consumed in performing such operations. In addition, the use of posits in computing systems can allow for higher accuracy and/or precision in computations than floating-point numbers, which can further improve the functioning of a computing system in comparison to some approaches (e.g., approaches which rely upon floating-point format bit strings).
Posits can be highly variable in precision and accuracy based on the total quantity of bits and/or the quantity of sets of integers or sets of bits included in the posit. In addition, posits can provide a wide dynamic range. The accuracy, precision, and/or the dynamic range of a posit can be greater than that of a float, or other numerical formats, under certain conditions, as described in more detail herein. The variable accuracy, precision, and/or dynamic range of a posit can be manipulated, for example, based on an application in which a posit will be used. In addition, posits can reduce or eliminate the overflow, underflow, NaN, and/or other corner cases that are associated with floats and other numerical formats. Further, the use of posits can allow for a numerical value (e.g., a number) to be represented using fewer bits in comparison to floats or other numerical formats.
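For illustration only (using relationships from the Type III unum/posit literature rather than any particular embodiment described herein), the dynamic range of an n-bit posit with es exponent bits is governed by useed = 2^(2^es), and the largest representable magnitude is useed^(n−2). The following Python sketch computes this bound for a few hypothetical configurations:

```python
def posit_max_magnitude(n, es):
    """Largest representable magnitude of an n-bit posit with es exponent
    bits: useed ** (n - 2), where useed = 2 ** (2 ** es)."""
    useed = 2 ** (2 ** es)
    return useed ** (n - 2)

# The dynamic range grows rapidly with both the bit width n and es.
for n, es in [(8, 0), (16, 1), (32, 2)]:
    print(n, es, posit_max_magnitude(n, es))   # 64, 2**28, 2**120
```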
These features can, in some embodiments, allow for posits to be highly reconfigurable, which can provide improved application performance in comparison to approaches that rely on floats or other numerical formats. In addition, these features of posits can provide improved performance in machine learning applications in comparison to floats or other numerical formats. For example, posits can be used in machine learning applications, in which computational performance is paramount, to train a network (e.g., a neural network) with a same or greater accuracy and/or precision than floats or other numerical formats using fewer bits than floats or other numerical formats. In addition, inference operations in machine learning contexts can be achieved using posits with fewer bits (e.g., a smaller bit width) than floats or other numerical formats. By using fewer bits to achieve a same or enhanced outcome in comparison to floats or other numerical formats, the use of posits can therefore reduce an amount of time in performing operations and/or reduce the amount of memory space required in applications, which can improve the overall function of a computing system in which posits are employed.
Embodiments herein are directed to hardware circuitry (e.g., control circuitry) configured to perform various operations on bit strings to improve the overall functioning of a computing device. For example, embodiments herein are directed to hardware circuitry that is configured to perform conversion operations to convert a format of a bit string from a first format (e.g., a floating-point format) that supports arithmetic or logical operations to a first level of precision to a second format (e.g., a universal number format, a posit format, etc.) that supports arithmetic or logical operations to a second level of precision.
In some embodiments, the hardware circuitry can be configured to perform the conversion operations on the bit strings such that the resultant bit strings (e.g., the bit strings having the second format) each have a same bit string shape. As used herein, a “bit string shape” generally refers to the total number of bits in a bit string and the number of bits in each bit sub-set of the bit string, which are described in more detail in connection with
Once the conversion operation(s) have been performed, the hardware circuitry may be configured to transfer the converted bit strings to a non-persistent memory device, such as a dynamic random-access memory (DRAM) device. The converted bit strings can be manipulated within a memory array of the non-persistent memory device as part of performance of one or more arithmetic, bitwise, vector, and/or logical operations using the converted bit strings as operands.
For example, sensing circuitry deployed in a DRAM device can be configured to perform one or more arithmetic, bitwise, vector, and/or logical operations using the converted bit strings as operands. As described in more detail herein, the sensing circuitry can include sense amplifiers and compute components that, when operated according to various control signals, can perform such operations on the converted bit strings. In some embodiments, the operations can be performed within the sensing circuitry and/or the memory array without activating (e.g., prior to activating) input/output circuitry coupled to the memory array and/or the sensing circuitry. Accordingly, in some embodiments, the sensing circuitry can be configured to perform the operations using the converted bit strings without transferring the bit strings out of the memory array and/or the sensing circuitry until the requested operation is completed.
By performing such operations within the memory array of the non-persistent memory device using bit strings that have been converted to the second format, improved performance of the computing system may be realized by allowing for improved accuracy and/or precision in the performed operations, improved speed in performing the operations, and/or reduced storage space required for bit strings prior to, during, or subsequent to, performance of arithmetic and/or logical operations.
In some embodiments, results (e.g., resultant bit strings) of the operations performed within the memory array can be transferred back to the hardware circuitry and the hardware circuitry can be further operated to convert the results of the operations back to the first format (e.g., to a floating-point format), which can, in turn, be transferred to different circuitry (e.g., a host, a memory device, etc.) of the computing system.
In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure.
As used herein, designators such as “N,” “M,” “X,” and “Y,” etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of,” “at least one,” and “one or more” (e.g., a number of memory banks) can refer to one or more memory banks, whereas a “plurality of” is intended to refer to more than one of such things.
Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to.” The terms “coupled” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and/or data, as appropriate to the context. The terms “bit strings,” “data,” and “data values” are used interchangeably herein and can have the same meaning, as appropriate to the context. In addition, the terms “set of bits,” “bit sub-set,” and “portion” (in the context of a portion of bits of a bit string) are used interchangeably herein and can have the same meaning, as appropriate to the context.
The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. For example, 120 may reference element “20” in
The memory device 104 can provide main memory for the computing system 100 or could be used as additional memory or storage throughout the computing system 100. The memory device 104 can include one or more memory arrays 130 (e.g., arrays of memory cells), which can include volatile and/or non-volatile memory cells. The memory array 130 can be a flash array with a NAND architecture, for example. Embodiments are not limited to a particular type of memory device. For instance, the memory device 104 can include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, and flash memory, among others.
In embodiments in which the memory device 104 includes non-volatile memory, the memory device 104 can include flash memory devices such as NAND or NOR flash memory devices. Embodiments are not so limited, however, and the memory device 104 can include other non-volatile memory devices such as non-volatile random-access memory devices (e.g., NVRAM, ReRAM, FeRAM, MRAM, PCM), “emerging” memory devices such as resistance variable (e.g., 3-D Crosspoint (3D XP)) memory devices, memory devices that include an array of self-selecting memory (SSM) cells, etc., or combinations thereof. Resistance variable memory devices can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, resistance variable non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. In contrast to flash-based memories and resistance variable memories, self-selecting memory cells can include memory cells that have a single chalcogenide material that serves as both the switch and storage element for the memory cell.
As illustrated in
The host 102 can include a system motherboard and/or backplane and can include a memory access device, e.g., a processor (or processing device). One of ordinary skill in the art will appreciate that “a processor” can refer to one or more processors, such as a parallel processing system, a number of coprocessors, etc. The system 100 can include separate integrated circuits, or the host 102, the memory device 104, and the memory array 130 can be on the same integrated circuit. The system 100 can be, for instance, a server system and/or a high-performance computing (HPC) system and/or a portion thereof. Although the example shown in
The memory device 104, which is shown in more detail in
The logic circuitry 122 can perform operations on bit strings stored by the memory resource 124 to convert the bit strings between various formats and/or cause the converted bit strings to be transferred to the memory array 130. For example, the conversion operations can include operations to convert floating-point bit strings (e.g., floating-point numbers) to bit strings in a posit format, and vice versa. Once the floating-point bit strings are converted to bit strings in the posit format, the logic circuitry 122 can be configured to perform (or cause performance of) arithmetic operations such as addition, subtraction, multiplication, division, fused multiply addition, multiply-accumulate, dot product, greater than or less than, absolute value (e.g., FABS( )), fast Fourier transforms, inverse fast Fourier transforms, sigmoid function, convolution, square root, exponent, and/or logarithm operations, and/or recursive logical operations such as AND, OR, XOR, NOT, etc., as well as trigonometric operations such as sine, cosine, tangent, etc. using the posit bit strings. As will be appreciated, the foregoing list of operations is not intended to be exhaustive, nor is the foregoing list of operations intended to be limiting, and the logic circuitry 122 may be configured to perform (or cause performance of) other arithmetic and/or logical operations.
The control circuitry 120 can further include a memory resource 124, which can be communicatively coupled to the logic circuitry 122. The memory resource 124 can include volatile memory resources, non-volatile memory resources, or a combination of volatile and non-volatile memory resources. In some embodiments, the memory resource can be a random-access memory (RAM) such as static random-access memory (SRAM). Embodiments are not so limited, however, and the memory resource can be a cache, one or more registers, NVRAM, ReRAM, FeRAM, MRAM, PCM, “emerging” memory devices such as resistance variable memory resources, phase change memory devices, memory devices that include arrays of self-selecting memory cells, etc., or combinations thereof.
The memory resource 124 can store one or more bit strings. Subsequent to performance of the conversion operation by the logic circuitry 122, the bit string(s) stored by the memory resource 124 can be stored according to a universal number (unum) or posit format. As used herein, the bit string stored in the unum (e.g., a Type III unum) or posit format can include several sub-sets of bits or “bit sub-sets.” For example, a universal number or posit bit string can include a bit sub-set referred to as a “sign” or “sign portion,” a bit sub-set referred to as a “regime” or “regime portion,” a bit sub-set referred to as an “exponent” or “exponent portion,” and a bit sub-set referred to as a “mantissa” or “mantissa portion” (or significand). As used herein, a bit sub-set is intended to refer to a sub-set of bits included in a bit string. Examples of the sign, regime, exponent, and mantissa sets of bits are described in more detail in connection with
In some embodiments, the memory resource 124 can receive data comprising a bit string having a first format that provides a first level of precision (e.g., a floating-point bit string). The logic circuitry 122 can receive the data from the memory resource and convert the bit string to a second format that provides a second level of precision that is different from the first level of precision (e.g., a universal number or posit format). The first level of precision can, in some embodiments, be lower than the second level of precision. For example, if the first format is a floating-point format and the second format is a universal number or posit format, the floating-point bit string may provide a lower level of precision under certain conditions than the universal number or posit bit string, as described in more detail in connection with
The first format can be a floating-point format (e.g., an IEEE 754 format) and the second format can be a universal number (unum) format (e.g., a Type I unum format, a Type II unum format, a Type III unum format, a posit format, a valid format, etc.). As a result, the first format can include a mantissa, a base, and an exponent portion, and the second format can include a mantissa, a sign, a regime, and an exponent portion.
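For illustration only, the following Python sketch splits a posit bit string into its four bit sub-sets, assuming the conventional Type III unum layout (a sign bit, a run of identical regime bits terminated by an opposite bit, up to es exponent bits, and mantissa bits in whatever positions remain); handling of negative posits, which are conventionally two's-complemented before decoding, is omitted here:

```python
def posit_fields(bits, es):
    """Split a posit bit string (a string of '0'/'1' characters) into its
    sign, regime, exponent, and mantissa bit sub-sets."""
    sign, rest = bits[0], bits[1:]
    # The regime is a run of identical bits terminated by an opposite bit
    # (or by the end of the bit string).
    run_len = len(rest) - len(rest.lstrip(rest[0]))
    regime = rest[:min(run_len + 1, len(rest))]
    remainder = rest[len(regime):]
    exponent = remainder[:es]    # up to es exponent bits
    mantissa = remainder[es:]    # remaining bits form the mantissa
    return sign, regime, exponent, mantissa

# Example: a 16-bit posit with es = 1.
print(posit_fields("0000110111011101", es=1))
```

Unlike the fixed-width sub-sets of a floating-point bit string, the regime and mantissa sub-sets returned above vary in width from one posit bit string to another.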
The logic circuitry 122 can be configured to transfer bit strings that are stored in the second format to the memory array 130, which can be configured to cause performance of an arithmetic operation or a logical operation, or both, using the bit string having the second format (e.g., a unum or posit format). In some embodiments, the arithmetic operation and/or the logical operation can be a recursive operation. As used herein, a “recursive operation” generally refers to an operation that is performed a specified quantity of times, where a result of a previous iteration of the recursive operation is used as an operand for a subsequent iteration of the operation. For example, a recursive multiplication operation can be an operation in which two bit string operands, β and φ, are multiplied together and the result of each iteration of the recursive operation is used as a bit string operand for a subsequent iteration. Stated alternatively, a recursive operation can refer to an operation in which a first iteration of the recursive operation includes multiplying β and φ together to arrive at a result λ (e.g., β×φ=λ). The next iteration of this example recursive operation can include multiplying the result λ by φ to arrive at another result ω (e.g., λ×φ=ω).
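As a purely illustrative sketch of the recursive multiplication just described (with hypothetical operand values, not drawn from any embodiment herein):

```python
def recursive_multiply(beta, phi, iterations):
    """Recursive multiplication: the result of each iteration is used as an
    operand of the next iteration (beta * phi = lam, then lam * phi, ...)."""
    result = beta
    for _ in range(iterations):
        result = result * phi   # previous result feeds the next iteration
    return result

# Two iterations of the example in the text: (beta * phi) * phi
print(recursive_multiply(2.0, 3.0, 2))   # 18.0
```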
Another illustrative example of a recursive operation can be explained in terms of calculating the factorial of a natural number. This example, which is given by Equation 1, can include performing recursive operations when the factorial of a given number, n, is greater than zero and returning unity if the number n is equal to zero:
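n! = n × (n − 1)! for n > 0, and n! = 1 for n = 0    (Equation 1)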
As shown in Equation 1, a recursive operation to determine the factorial of the number n can be carried out until n is equal to zero, at which point the solution is reached and the recursive operation is terminated. For example, using Equation 1, the factorial of the number n can be calculated recursively by performing the following operations: n×(n−1)×(n−2)× . . . ×1.
Yet another example of a recursive operation is a multiply-accumulate operation in which an accumulator, a, is modified at each iteration according to the equation a ← a + (b × c). In a multiply-accumulate operation, each previous iteration of the accumulator a is summed with the multiplicative product of two operands b and c. In some approaches, multiply-accumulate operations may be performed with one or more roundings (e.g., a may be truncated at one or more iterations of the operation). However, in contrast, embodiments herein can allow for a multiply-accumulate operation to be performed without rounding the result of intermediate iterations of the operation, thereby preserving the accuracy of each iteration until the final result of the multiply-accumulate operation is completed.
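The posit literature commonly uses a wide, exact accumulator (sometimes called a “quire”) for this purpose; the following Python sketch merely illustrates the idea of deferring rounding to the final result, using exact rational arithmetic as a stand-in for such an accumulator rather than depicting any hardware described herein:

```python
from fractions import Fraction

def exact_mac(pairs):
    """Multiply-accumulate a <- a + (b * c) with no intermediate rounding,
    using exact rational arithmetic as a stand-in for a wide accumulator."""
    acc = Fraction(0)
    for b, c in pairs:
        acc += Fraction(b) * Fraction(c)   # each product is kept exactly
    return acc   # round only this final value if a fixed bit width is needed

pairs = [(0.1, 3), (0.2, 3), (0.3, 3)]
print(float(exact_mac(pairs)))   # a single rounding, at the very end
```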
Examples of recursive operations contemplated herein are not limited to these examples. To the contrary, the above examples of recursive operations are merely illustrative and are provided to clarify the scope of the term “recursive operation” in the context of the disclosure.
As shown in
The embodiment of
In this example, the system 100 includes a host 102 coupled (e.g., connected) to the memory device 120, which includes the memory array 130. The host 102 can be a host system such as a personal laptop computer, a desktop computer, a tablet computer, a digital camera, a smart phone, an internet-of-things (IoT) enabled device, or a memory card reader, among various other types of hosts. The host 102 can include a system motherboard and/or backplane and can include a number of processing resources (e.g., one or more processors, microprocessors, or some other type of controlling circuitry). The system 100 can include separate integrated circuits, or both the host 102 and the memory device 120 can be on the same integrated circuit. The system 100 can be, for instance, a server system and/or a high-performance computing (HPC) system and/or a portion thereof. Although the example shown in
For clarity, description of the system 100 has been simplified to focus on features with particular relevance to the present disclosure. For example, in various embodiments, the memory array 130 can be a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash array, or NOR flash array, for instance. The memory array 130 can include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and columns coupled by sense lines (which may be referred to herein as data lines or digit lines). Although a single memory array 130 is shown in
The memory device 120 includes address circuitry 142 to latch address signals provided over an address/control bus 154 (e.g., an address/control bus from the host 102). Address signals are received by address circuitry 142 and decoded by row decode circuitry 146 and column decode circuitry 152 to access the memory array 130. Although the address/control bus 154 is shown as a single bus, the bus 154 can comprise separate address and control busses. The column decode circuitry 152 can comprise logic (e.g., multiplexor circuitry) to selectively couple shared I/O lines to subsets of sensing components in association with reversing data stored in memory in accordance with embodiments described herein. The shared I/O (SIO) lines can provide an increased data path size (e.g., width) as compared to previous data paths used to move data from the array 130 to DQ pads, for instance, among other benefits. For instance, in a number of embodiments, the SIO lines may serve as both local I/O lines and global I/O lines corresponding to array 130, which can facilitate moving data between subarrays (e.g., portions of a memory array being coupled to separate sensing circuitry stripes).
Data can be sensed (read) from memory array 130 by sensing voltage and/or current changes on digit lines using a number of sensing components (e.g., sense amplifiers) of the sensing circuitry 150. A sense amplifier can read and latch a page (e.g., a row) of data from the memory array 130. As described further herein, the sensing components of the sensing circuitry 150 can comprise respective sense amplifiers and corresponding compute components coupled thereto that can be used to sense, store (e.g., cache and/or buffer), and move data, for instance. The I/O circuitry 144 can be used for bi-directional data communication with host 102 over the data bus 156 (e.g., DQ connections). The write circuitry 148 can be used to write data to the memory array 130.
The memory controller 140, which can serve as a sequencer, can decode control signals (e.g., commands) provided by address/control bus 154 from the host 102. These signals can include chip enable signals, write enable signals, and address latch signals that can be used to control operations performed on the memory array 130, including data sense, data store, data move, data write, and data erase operations, among other operations. The memory controller 140 can be responsible for executing instructions from the host 102 and/or accessing the memory array 130. The memory controller 140 can be a state machine, a sequencer, or some other type of controller and can be implemented in hardware, software, firmware, and/or combinations thereof. In the example shown in
Examples of the sensing circuitry 150 are described further below (e.g., in
In a number of embodiments, the sensing circuitry 150 can also be used to perform logical operations (e.g., logical functions such as AND, OR, NOT, NOR, NAND, XOR, etc.) using data stored in memory array 130 as inputs and participate in movement of the data for writing and storage operations back to a different location in the memory array 130 without transferring the data via a sense line address access (e.g., without firing a column decode signal). As such, various compute functions can be performed using, and within, sensing circuitry 150 rather than (or in association with) being performed by processing resources external to the sensing circuitry 150 (e.g., by a processor associated with host 102 and other processing circuitry, such as ALU circuitry, located on device 120, such as on memory controller 140 or elsewhere).
In various previous approaches, data associated with an operand, for instance, would be read from memory via sensing circuitry and provided to external ALU circuitry via I/O lines (e.g., via local I/O lines and global I/O lines). The external ALU circuitry could include a number of registers and would perform compute functions using the operands, and the result would be transferred back to the array via the I/O lines. In contrast, in a number of embodiments of the present disclosure, sensing circuitry 150 is configured to perform logical operations on data stored in memory array 130 and store the result back to the memory array 130 without enabling a local I/O line and global I/O line coupled to the sensing circuitry 150. The sensing circuitry 150 can be formed on pitch with the memory cells of the array. Additional peripheral logic 170, which can include an additional number of sense amplifiers, can be coupled to the sensing circuitry 150. The sensing circuitry 150 and the peripheral logic 170 can cooperate in performing logical operations and/or in reversing data stored in memory, according to a number of embodiments described herein.
As such, in a number of embodiments, circuitry external to memory array 130 and sensing circuitry 150 is not needed to reverse data stored in memory array 130 and/or to perform compute functions as the sensing circuitry 150 can perform the appropriate operations in order to perform such data reversal and/or compute functions without the use of an external processing resource. Therefore, the sensing circuitry 150 may be used to complement and to replace, at least to some extent, such an external processing resource (or at least the bandwidth consumption of such an external processing resource).
The host 202 can be communicatively coupled to the memory device 204 via one or more channels 203, 205. The channels 203, 205 can be interfaces or other physical connections that allow for data and/or commands to be transferred between the host 202 and the memory device 204. For example, commands to cause initiation of an operation (e.g., an operation to convert one or more bit strings from a first format to a second format (or vice versa), an operation to cause the bit strings to be loaded into the sensing circuitry 250 to perform an arithmetic and/or logical operation, etc.) to be performed using the control circuitry 220 can be transferred from the host via the channels 203, 205. It is noted that, in some embodiments, the control circuitry 220 can perform the operations in response to an initiation command transferred from the host 202 via one or more of the channels 203, 205 in the absence of an intervening command from the host 202. That is, once the control circuitry 220 has received the command to initiate performance of an operation from the host 202, the operations can be performed by the control circuitry 220 in the absence of additional commands from the host 202.
As shown in
The register access component 242 can facilitate transferring and fetching of data from the host 202 to the memory device 204 and from the memory device 204 to the host 202. For example, the register access component 242 can store addresses (or facilitate lookup of addresses), such as memory addresses, that correspond to data that is to be transferred to the host 202 from the memory device 204 or transferred from the host 202 to the memory device 204. In some embodiments, the register access component 242 can facilitate transferring and fetching data that is to be operated upon by the control circuitry 220, and/or the register access component 242 can facilitate transferring and fetching data that has been operated upon by the control circuitry 220, or in response to an action taken by the control circuitry 220, for transfer to the host 202.
The HSI 208 can provide an interface between the host 202 and the memory device 204 for commands and/or data traversing the channel 205. The HSI 208 can be a double data rate (DDR) interface such as a DDR3, DDR4, DDR5, etc. interface. Embodiments are not limited to a DDR interface, however, and the HSI 208 can be a quad data rate (QDR) interface, a peripheral component interconnect (PCI) interface (e.g., a peripheral component interconnect express (PCIe) interface), or other suitable interface for transferring commands and/or data between the host 202 and the memory device 204.
The controller 240 can be responsible for executing instructions from the host 202 and accessing the control circuitry 220 and/or the memory array 230. The controller 240 can be a state machine, a sequencer, or some other type of controller. The controller 240 can receive commands from the host 202 (via the HSI 208, for example) and, based on the received commands, control operation of the control circuitry 220 and/or the memory array 230. In some embodiments, the controller 240 can receive a command from the host 202 to cause performance of an operation using the control circuitry 220. Responsive to receipt of such a command, the controller 240 can instruct the control circuitry 220 to begin performance of the operation(s).
In a non-limiting example, the controller 240 can instruct the control circuitry 220 to perform an operation to retrieve one or more bit strings from the host 202 and/or the memory array 230. For example, the controller 240 can receive a command from the host 202 requesting performance of an operation between one or more bit strings and send a command to the control circuitry 220 to perform the operation. The control circuitry 220 can perform an operation to convert the bit strings from a first format to a second format and/or cause the bit strings that are stored in the second format to be transferred to, and stored within, the memory array 230. In some embodiments, the control circuitry 220 can determine that the converted bit strings have a same bit string shape. If the converted bit strings do not have a same bit string shape, the control circuitry 220 can perform one or more operations on the converted bit strings to ensure that the converted bit strings have a same bit string shape prior to causing the converted bit strings to be stored in the memory array 230.
For example, the control circuitry 220 can, based on commands received from the controller, manipulate a quantity of bits associated with one or more bit sub-sets of the converted bit strings such that the converted bit strings have a same bit string shape. In some embodiments, manipulating the quantity of bits within the bit sub-sets of the converted bit strings can include removing one or more bits from particular bit sub-sets of at least one of the one or more converted bit strings to ensure that the converted bit strings have a same bit string shape.
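As an illustrative sketch only (the specific trimming rule shown is an assumption made for illustration and is not taken from the foregoing description), the following Python example reaches a common bit string shape by removing trailing mantissa bits from a converted bit string that is wider than a target total width:

```python
def normalize_width(fields, n):
    """Illustrative only: remove trailing mantissa bits so that a bit string
    (represented as a dict of bit sub-set strings) has total width n."""
    fixed = len(fields["sign"]) + len(fields["regime"]) + len(fields["exponent"])
    trimmed = dict(fields)
    trimmed["mantissa"] = fields["mantissa"][:max(n - fixed, 0)]
    return trimmed

bit_string = {"sign": "0", "regime": "0001", "exponent": "1",
              "mantissa": "01110111010111"}
# Trim the mantissa so the overall bit string is 16 bits wide.
print(normalize_width(bit_string, 16))
```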
Subsequent to the control circuitry 220 ensuring that the converted bit strings have the same bit string shape, the control circuitry 220 can cause the bit strings to be transferred to the memory array 230 and/or the sensing circuitry 250. Once the bit strings have been received by the memory array 230 and/or sensing circuitry 250, the sensing circuitry 250 can perform arithmetic and/or logical operations using the converted bit strings.
In some embodiments, the controller 240 can be a global processing controller and may provide power management functions to the memory device 204. Power management functions can include control over power consumed by the memory device 204 and/or the memory array 230. For example, the controller 240 can control power provided to various banks of the memory array 230 to control which banks of the memory array 230 are operational at different times during operation of the memory device 204. This can include shutting certain banks of the memory array 230 down while providing power to other banks of the memory array 230 to optimize power consumption of the memory device 204. In some embodiments, the controller 240 controlling power consumption of the memory device 204 can include controlling power to various cores of the memory device 204 and/or to the control circuitry 220, the memory array 230, etc.
As mentioned above, the sensing circuitry 250 can provide additional storage space for the memory array 230 and can sense (e.g., read, store, cache) data values that are present in the memory device 204. The sensing circuitry 250 can include sense amplifiers, latches, flip-flops, etc. that can be configured to perform operations (e.g., arithmetic and/or logical operations) using the bit strings, as described herein.
As shown in
However, embodiments are not limited to scenarios in which the sensing circuitry 250 includes around 16K locations in which to store data values. For example, the sensing component 250 can be configured to store around 2K data values, around 4K data values, around 8K data values, etc. Further, although a single box is shown as illustrating the sensing component 250 in
As described in more detail in connection with
If the arithmetic and/or logical operations performed using the sensing circuitry 250 are recursive operations, in some embodiments, the periphery logic 270 can be configured to store intermediate results of recursive operations performed using bit strings. In some embodiments, the intermediate results of the recursive operations can represent a result generated at each iteration of the recursive operation. In contrast to some approaches, because the periphery logic 270 can be configured to store up to 16K data values, the intermediate results of the recursive operations may not need to be rounded (e.g., truncated) during performance of the recursive operation.
Instead, in some embodiments, a final result of the recursive operation that is stored in the periphery logic 270 upon completion of the recursive operation may be rounded to a desired bit width (e.g., 8-bits, 16-bits, 32-bits, 64-bits, etc.). This can improve the accuracy of the result of the recursive operation, because, in contrast to approaches that do not utilize the periphery logic 270 to store the intermediate results of the recursive operation, intermediate results of the recursive operation may not need to be rounded before the final result of the recursive operation is computed.
The periphery logic 270 can be configured to overwrite previously stored intermediate results of the recursive operation when a new iteration of the recursive operation is completed. For example, a result that represents the first iteration of a recursive operation can be stored in the periphery logic 270 once the first iteration of the recursive operation is complete. Once a result that represents a second iteration of the recursive operation is completed, the result of the second iteration of the recursive operation can be stored in the periphery logic 270. Similarly, once a result that represents a third iteration of the recursive operation is completed, the result of the third iteration of the recursive operation can be stored in the periphery logic 270. In some embodiments, the result of each subsequent iteration can be stored in the periphery logic 270 by overwriting the stored result of the previous iteration.
Depending on the bit string shape (e.g., the bit width) of the result of each iteration, subsequent bit strings that represent the result of each iteration and are stored in the periphery logic 270 may be stored using more sense amplifiers in the periphery logic 270 than preceding stored bit strings. For example, the result of the first iteration may contain a first quantity of bits and the result of the second iteration may contain a second quantity of bits that is greater than the first quantity of bits. When the result of the second iteration is written to or stored by the periphery logic 270, it may be stored such that the result of the first iteration is overwritten. However, because the result of the second iteration may contain more bits than the result of the first iteration, in some embodiments, additional sense amplifiers of the periphery logic 270 may be used to store the result of the second iteration in addition to the sense amplifiers that were used to store the result of the first iteration.
In a simplified, non-limiting example in which the recursive operation comprises a recursive multiplication operation in which a number 2.51 is recursively multiplied with a number 3.73, the result of the first iteration may be 9.3623. In this example, the result of the first iteration includes five bits and can be stored, for example, in five sense amplifiers in the periphery logic 270. The result of the second iteration (e.g., the result of multiplication between the first result 9.3623 and 3.73) can be 34.921379, which includes eight bits. In some embodiments, the result of the second iteration can be stored in eight sense amplifiers of the periphery logic 270 by, for example, overwriting the result of the first iteration that is stored in five sense amplifiers and writing the additional three bits to three other sense amplifiers in the periphery logic 270. The results of subsequent iterations of the recursive operation can similarly be stored in the sensing component 250 such that the result of the preceding iteration is overwritten. Embodiments are not so limited, however, and in some embodiments, the results of each iteration can be stored in adjacent sense amplifiers in the sensing component 250, or in particular sense amplifiers of the periphery logic 270.
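Purely as a model (not the disclosed hardware), the following Python sketch mimics the worked example above: each iteration's result overwrites the previously stored result in a list of storage slots, and additional slots are used when the new result is longer:

```python
def recursive_multiply_with_storage(start, factor, iterations):
    """Model periphery storage as a list of slots in which each iteration's
    result overwrites the previous one, growing when the result is longer."""
    slots = []                            # stand-in for periphery storage
    value = start
    for _ in range(iterations):
        value = round(value * factor, 10)
        digits = str(value).replace(".", "")
        slots[:len(digits)] = list(digits)    # overwrite, extending if needed
        print(value, "occupies", len(slots), "slots")
    return value

recursive_multiply_with_storage(2.51, 3.73, 2)   # 9.3623 (5 slots), 34.921379 (8 slots)
```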
In some embodiments, access to the periphery logic 270 can be controlled using a register mapping. For example, bit strings can be stored in the periphery logic 270, deleted from the periphery logic 270, and/or the bit width of bit strings stored in the periphery logic 270 can be altered in response to commands associated with a register mapping that can be stored in the control circuitry 220. In addition, bit strings stored in the memory array 230 can be added to or subtracted from (e.g., accumulated with) bit strings stored in the periphery logic 270 in response to commands associated with the control circuitry 220.
The control circuitry 220 can also include commands associated with converting results of recursive operations performed using universal number or posit bit strings between the universal number or posit format and formats that can be stored in the sensing component 250, the periphery logic 270, and/or the memory array 230, as described in more detail in connection with
The main memory input/output (I/O) circuitry 244 can facilitate transfer of data and/or commands to and from the memory array 230. For example, the main memory I/O circuitry 244 can facilitate transfer of bit strings, data, and/or commands from the host 202 and/or the control circuitry 220 to and from the memory array 230. In some embodiments, the main memory I/O circuitry 244 can include one or more direct memory access (DMA) components that can transfer the bit strings (e.g., posit bit strings stored as blocks of data) from the control circuitry 220 to the memory array 230, and vice versa.
In some embodiments, the main memory I/O circuitry 244 can facilitate transfer of bit strings, data, and/or commands from the memory array 230 to the control circuitry 220 so that the control circuitry 220 can perform operations on the bit strings. Similarly, the main memory I/O circuitry 244 can facilitate transfer of bit strings that have had one or more operations performed on them by the control circuitry 220 to the memory array 230. As described in more detail herein, the operations can include recursive operations performed using bit strings (e.g., universal number bit strings or posit bit strings) in which results of intermediate iterations are stored in the periphery logic 270.
As described above, posit bit strings (e.g., the data) can be stored and/or retrieved from the memory array 230. In some embodiments, the main memory I/O circuitry 244 can facilitate storing and/or retrieval of the posit bit strings to and/or from the memory array 230. For example, the main memory I/O circuitry 244 can be enabled to transfer posit bit strings to the memory array 230 to be stored, and/or the main memory I/O circuitry 244 can facilitate retrieval of the posit bit strings (e.g., posit bit strings representing a performed operation between one or more posit bit string operands) from the memory array 230 in response to, for example, a command from the controller 210 and/or the control circuitry 220.
The row address strobe (RAS)/column address strobe (CAS) chain control circuitry 216 and the RAS/CAS chain component 218 can be used in conjunction with the memory array 230 to latch a row address and/or a column address to initiate a memory cycle. In some embodiments, the RAS/CAS chain control circuitry 216 and/or the RAS/CAS chain component 218 can resolve row and/or column addresses of the memory array 230 at which read and write operations associated with the memory array 230 are to be initiated or terminated. For example, upon completion of an operation using the control circuitry 220, the RAS/CAS chain control circuitry 216 and/or the RAS/CAS chain component 218 can latch and/or resolve a specific location in the periphery sense amplifiers 211 and/or the memory array 230 to which the bit strings that have been operated upon by the control circuitry 220 are to be stored. Similarly, the RAS/CAS chain control circuitry 216 and/or the RAS/CAS chain component 218 can latch and/or resolve a specific location in the periphery sense amplifiers 211 and/or the memory array 230 from which bit strings are to be transferred to the control circuitry 220 prior to, or subsequent to, the control circuitry 220 performing an operation (e.g., a recursive operation) on the bit string(s).
The control circuitry 220 can include logic circuitry (e.g., the logic circuitry 122 illustrated in
As described in more detail in connection with
In some embodiments, once the bit strings have been converted to the universal number or posit format by the control circuitry 220 and stored in the memory array 230, the memory array 230 can, in conjunction with the sensing circuitry 250, perform (or cause performance of) arithmetic and/or logical operations on the universal number or posit bit strings. For example, the sensing circuitry 250, which is further described below in connection with
In some embodiments, the sensing circuitry 250 may perform the above-listed operations in conjunction with execution of one or more machine learning algorithms. For example, the sensing circuitry 250 may perform operations related to one or more neural networks. Neural networks may allow for an algorithm to be trained over time to determine an output response based on input signals. For example, over time, a neural network may essentially learn to better maximize the chance of completing a particular goal, and training the neural network over time with new data can further improve that chance, which may be advantageous in machine learning applications. A neural network may thus be trained over time to improve operation of particular tasks and/or achievement of particular goals. However, in some approaches, machine learning (e.g., neural network training) may be processing intensive (e.g., may consume large amounts of computer processing resources) and/or may be time intensive (e.g., may require lengthy calculations that consume multiple cycles to be performed).
In contrast, by performing such operations using the sensing circuitry 250, for example, by performing such operations on bit strings in the universal number or posit format, the amount of processing resources and/or the amount of time consumed in performing the operations may be reduced in comparison to approaches in which such operations are performed using bit strings in a floating-point format. Further, by storing intermediate results of the recursive operations in the periphery logic 270, the accuracy of a bit string that represents the final result of the recursive operation may be higher in comparison to approaches that truncate intermediate results of recursive operations or in approaches in which intermediate results of recursive operations are stored in a hidden scratch area.
In some embodiments, the controller 210 can be configured to cause the control circuitry 220 and/or the sensing circuitry 250 to perform operations using bit strings without encumbering the host 202 (e.g., without receiving an intervening command or a command separate from a command to initiate performance of the operation from the host 202 and/or without transferring results of the operations to the host 202). Embodiments are not so limited, however, and in some embodiments, the controller 210 can be configured to cause the control circuitry 220 (e.g., the logic circuitry) and/or the sensing circuitry 250 to perform recursive arithmetic and/or recursive logical operations using bit strings, store intermediate results of such operations in the sensing circuitry 250, and/or round the final result of the recursive operation (which may be stored in the sensing circuitry 250) such that the final result of the recursive operation has a particular bit string shape associated therewith.
In some embodiments, the performance of the recursive operation can include performing an arithmetic operation, a logical operation, a bitwise operation, a vector operation, or combinations thereof. In response to a determination that the recursive operation is completed, the control circuitry 220 can be configured to cause a last resultant bit string stored in the periphery logic 270 to be rounded (e.g., truncated) such that the last resultant bit string has a particular bit width. For example, the control circuitry 220 can cause the last resultant bit string stored in the periphery logic 270 to be rounded off to have a bit width of 8-bits, 16-bits, 32-bits, 64-bits, etc. In some embodiments, the control circuitry 220 can be configured to cause at least one bit from a mantissa bit sub-set or an exponent bit sub-set (which are described in more detail in connection with
As described above in connection with
In some embodiments, bit strings (e.g., posit bit strings) can be generated and/or stored in the memory array 230 without encumbering the host 202. For example, the bit strings can be generated and/or stored in the memory array 230 without receiving multiple commands from the host 202. Stated alternatively, in some embodiments, the host 202 can send a single command to the memory device to request performance of an operation using one or more bit strings. Responsive to receipt of the command to request performance of the operation, the memory device 204 (e.g., the controller 210, the control circuitry 220, or other components of the memory device 204) can perform the operation and/or retrieve a stored result of the operation in the absence of additional commands from the host 202. This can reduce traffic across the channels 203/205, which can increase performance of a computing device associated with the host 202 and/or the memory device 204.
In a non-limiting example, the sensing circuitry 250 can include a sense amplifier (e.g., the sense amplifier 654 illustrated in
In some embodiments, the sensing circuitry 250 can be configured to perform the arithmetic operation, the logical operation, or both by performing a first operation phase of the arithmetic operation, the logical operation, or both by sensing a memory cell of the array 230 that contains a first bit of the one or more bit strings, performing a number of intermediate operation phases of the arithmetic operation, the logical operation, or both by sensing a respective number of different memory cells that contain different bits of the one or more bit strings, and accumulating a result of the first operation phase and the number of intermediate operation phases in the compute component of the sensing circuitry 250. The sensing circuitry 250 can be configured to accumulate the result of the first operation phase and the number of intermediate operation phases in the compute component of the sensing circuitry 250 without performing a sense line address access. For example, the sensing circuitry 250 can be configured to accumulate the result of the first operation phase and the number of intermediate operation phases in the compute component of the sensing circuitry 250 prior to receiving an access command and/or an address for a sense line associated with the sensing circuitry 250.
In some embodiments, the sensing circuitry 250 can be further configured to store a result of the arithmetic operation, the logical operation, or both in the memory array without enabling an input/output (I/O) line (e.g., the I/O circuitry 244) coupled to the sensing circuitry 250. Accordingly, in some embodiments, the sensing circuitry 250 can be configured to perform arithmetic and/or logical operations using the universal number or posit bit strings without encumbering the host 202.
The control circuitry 220 can be configured to receive the one or more bit strings in a format different than the universal number format or the posit format, perform an operation to convert the one or more bit strings from the format different than the universal number format or the posit format to the universal number format or the posit format such that the one or more bit strings have the same bit string shape, and cause the one or more bit strings that are formatted according to the universal number format or the posit format to be transferred to the memory array 230 prior to the controller 240 causing the one or more bit strings that are formatted according to the universal number format or the posit format to be transferred to the sensing circuitry 250. As described herein, the control circuitry 220 can include an arithmetic logic unit, a field programmable gate array, a reduced instruction set computing device, or a combination thereof.
In some embodiments, the controller 240 can be configured to determine that at least two of the one or more bit strings have a same quantity of bits or a same data type associated therewith and cause the sensing circuitry 250 to perform the arithmetic operation, the logical operation, or both using the at least two of the one or more bit strings in response to the determination.
In another non-limiting example, the host 202 can be coupled to a memory device 204, which can include a controller 240, control circuitry 220, and sensing circuitry 250. As described above, the control circuitry 220 can include a memory resource (e.g., the memory resource 124 illustrated in
Subsequent to performance of the arithmetic operation, the logical operation, or both, the control circuitry 220 can be configured to receive the result of the arithmetic operation, the logical operation, or both, which has the second format, and convert, using the logic circuitry, the result of the arithmetic operation, the logical operation, or both from the second format to the first format. As described above, one of the first format or the second format can include a mantissa, a base, and an exponent portion, and the other of the first format or the second format can include a mantissa, a sign, a regime, and an exponent portion.
In embodiments in which the arithmetic and/or logical operation is a recursive arithmetic or logical operation, the controller 240 can be configured to cause intermediate exact (e.g., un-rounded) results of the arithmetic operation, the logical operation, or both to be transferred to a plurality of storage locations in a periphery region of the memory device. In some embodiments, the storage locations can include sense amplifiers that are in the periphery of the memory array 230.
As described above, in some embodiments, the logic circuitry can be configured to receive the bit string and convert the bit string from the first format to the second format in the absence of receipt of an intervening command from the host 202. Further, in some embodiments, the sensing circuitry 250 can be configured to perform an arithmetic operation, a logical operation, or both using the bit string having the second format in the absence of receipt of an intervening command from the host 202.
As shown in
As described above, circuitry located on the memory device 204 (e.g., the control circuitry 220 and/or memory array 230 illustrated in
The FPGA 221 can include a state machine 227 and/or register(s) 229. The state machine 227 can include one or more processing devices that are configured to perform operations on an input and produce an output. For example, the FPGA 221 can be configured to receive posit bit strings from the host 202 or the memory device 204 and perform one or more operations using the universal number format or posit format bit strings. The register(s) 229 of the FPGA 221 can be configured to buffer and/or store the posit bit strings received from the host 202 prior to the state machine 227 performing operations using the received bit strings. In addition, the register(s) 229 of the FPGA 221 can be configured to buffer and/or store intermediate results of iterations of recursive operations performed by the FPGA 221 prior to transferring the result to circuitry external to the FPGA 221, such as the host 202 or the memory device 204, etc.
The ASIC 223 can include logic 215 and/or a cache 217. The logic 215 can include circuitry configured to perform operations on an input and produce an output. In some embodiments, the ASIC 223 is configured to receive universal number format or posit format bit strings from the host 202 and/or the memory device 204 and perform one or more operations using posit bit string operands. The cache 217 of the ASIC 223 can be configured to buffer and/or store the bit strings received from the host 202 prior to the logic 215 performing an operation on the received bit strings. In addition, the cache 217 of the ASIC 223 can be configured to buffer and/or store intermediate results of iterations of recursive operations using the bit strings prior to transferring the result to circuitry external to the ASIC 223, such as the host 202 or the memory device 204, etc.
Although the FPGA 221 is shown as including a state machine 227 and register(s) 229, in some embodiments, the FPGA 221 can include logic, such as the logic 215, and/or a cache, such as the cache 217, in addition to, or in lieu of, the state machine 227 and/or the register(s) 229. Similarly, the ASIC 223 can, in some embodiments, include a state machine, such as the state machine 227, and/or register(s), such as the register(s) 229, in addition to, or in lieu of, the logic 215 and/or the cache 217.
The sign bit 333 can be zero (0) for positive numbers and one (1) for negative numbers. The regime bits 335 are described in connection with Table 1, below, which shows (binary) bit strings and their related numerical meaning, k. In Table 1, the numerical meaning, k, is determined by the run length of the bit string. The letter x in the binary portion of Table 1 indicates that the bit value is irrelevant for determination of the regime, because the (binary) bit string is terminated in response to successive bit flips or when the end of the bit string is reached. For example, in the (binary) bit string 0010, the bit string terminates in response to a zero flipping to a one and then back to a zero. Accordingly, the last zero is irrelevant with respect to the regime and all that is considered for the regime are the leading identical bits and the first opposite bit that terminates the bit string (if the bit string includes such bits).
In
If m corresponds to the number of identical bits in the bit string and the bits are zero, then k=−m. If the bits are one, then k=m−1. This is illustrated in Table 1 where, for example, the (binary) bit string 10XX has a single one and k=m−1=1−1=0. Similarly, the (binary) bit string 0001 includes three zeros so k=−m=−3. The regime can indicate a scale factor of useed^k, where useed=2^(2^es) and es is the number of exponent bits.
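As an illustrative aside (not part of the original text), the run-length rule of Table 1 can be expressed as a short sketch; the function name and the string-of-bits input convention are assumptions made here for illustration:

def regime_k(bits):
    """Return the regime value k for the regime portion of a posit bit string
    (sign bit removed), using the run-length rule described above."""
    first = bits[0]
    m = 1
    for b in bits[1:]:
        if b != first:
            break
        m += 1
    return (m - 1) if first == '1' else -m

For example, regime_k('0001') returns −3 and regime_k('1000') (an instance of 10XX) returns 0, matching the Table 1 examples above.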
The exponent bits 337 correspond to an exponent e, as an unsigned number. In contrast to floating-point numbers, the exponent bits 337 described herein may not have a bias associated therewith. As a result, the exponent bits 337 described herein may represent a scaling by a factor of 2^e. As shown in
The mantissa bits 339 (or fraction bits) represent any additional bits that may be part of the n-bit posit 331 that lie to the right of the exponent bits 337. Similar to floating-point bit strings, the mantissa bits 339 represent a fraction ƒ, which can be analogous to the fraction 1.ƒ where ƒ includes one or more bits to the right of the decimal point following the one. In contrast to floating-point bit strings, however, in the n-bit posit 331 shown in
As described herein, altering a numerical value or a quantity of bits of one or more of the sign 333 bit sub-set, the regime 335 bit sub-set, the exponent 337 bit sub-set, or the mantissa 339 bit sub-set can vary the precision of the n-bit posit 331. For example, changing the total number of bits in the n-bit posit 331 can alter the resolution of the n-bit posit bit string 331. That is, an 8-bit posit can be converted to a 16-bit posit by, for example, increasing the numerical values and/or the quantity of bits associated with one or more of the posit bit string's constituent bit sub-sets to increase the resolution of the posit bit string. Conversely, the resolution of a posit bit string can be decreased, for example, from a 64-bit resolution to a 32-bit resolution by decreasing the numerical values and/or the quantity of bits associated with one or more of the posit bit string's constituent bit sub-sets.
In some embodiments, altering the numerical value and/or the quantity of bits associated with one or more of the regime 335 bit sub-set, the exponent 337 bit sub-set, and/or the mantissa 339 bit sub-set to vary the precision of the n-bit posit 331 can lead to an alteration to at least one of the other of the regime 335 bit sub-set, the exponent 337 bit sub-set, and/or the mantissa 339 bit sub-set. For example, when altering the precision of the n-bit posit 331 to increase the resolution of the n-bit posit bit string 331 (e.g., when performing an “up-convert” operation to increase the bit width of the n-bit posit bit string 331), the numerical value and/or the quantity of bits associated with one or more of the regime 335 bit sub-set, the exponent 337 bit sub-set, and/or the mantissa 339 bit sub-set may be altered.
In a non-limiting example in which the resolution of the n-bit posit bit string 331 is increased (e.g., the precision of the n-bit posit bit string 331 is varied to increase the bit width of the n-bit posit bit string 331) but the numerical value or the quantity of bits associated with the exponent 337 bit sub-set does not change, the numerical value or the quantity of bits associated with the mantissa 339 bit sub-set may be increased. In at least one embodiment, increasing the numerical value and/or the quantity of bits of the mantissa 339 bit sub-set when the exponent 337 bit sub-set remains unchanged can include adding one or more zero bits to the mantissa 339 bit sub-set.
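A minimal sketch of that up-convert case follows (not part of the original text, and assuming the es value is unchanged and that the appended bits belong to the mantissa sub-set); the function name is a hypothetical illustration:

def up_convert_posit(bits, new_width):
    """Widen a posit bit string (a string of '0'/'1') by appending zero bits to
    the trailing mantissa sub-set, leaving the sign, regime, and exponent
    sub-sets, and therefore the represented value, unchanged."""
    if new_width < len(bits):
        raise ValueError("new_width must be at least the current width")
    return bits + '0' * (new_width - len(bits))

For example, up_convert_posit('01000000', 16) yields a 16-bit posit representing the same value as the original 8-bit posit.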
In another non-limiting example in which the resolution of the n-bit posit bit string 331 is increased (e.g., the precision of the n-bit posit bit string 331 is varied to increase the bit width of the n-bit posit bit string 331) by altering the numerical value and/or the quantity of bits associated with the exponent 337 bit sub-set, the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set may be either increased or decreased. For example, if the numerical value and/or the quantity of bits associated with the exponent 337 bit sub-set is increased or decreased, corresponding alterations may be made to the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set. In at least one embodiment, increasing or decreasing the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set can include adding one or more zero bits to the regime 335 bit sub-set and/or the mantissa 339 bit sub-set and/or truncating the numerical value or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set.
In another example in which the resolution of the n-bit posit bit string 331 is increased (e.g., the precision of the n-bit posit bit string 331 is varied to increase the bit width of the n-bit posit bit string 331), the numerical value and/or the quantity of bits associated with the exponent 337 bit sub-set may be increased and the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set may be decreased. Conversely, in some embodiments, the numerical value and/or the quantity of bits associated with the exponent 337 bit sub-set may be decreased and the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set may be increased.
In a non-limiting example in which the resolution of the n-bit posit bit string 331 is decreased (e.g., the precision of the n-bit posit bit string 331 is varied to decrease the bit width of the n-bit posit bit string 331) but the numerical value or the quantity of bits associated with the exponent 337 bit sub-set does not change, the numerical value or the quantity of bits associated with the mantissa 339 bit sub-set may be decreased. In at least one embodiment, decreasing the numerical value and/or the quantity of bits of the mantissa 339 bit sub-set when the exponent 337 bit sub-set remains unchanged can include truncating the numerical value and/or the quantity of bits associated with the mantissa 339 bit sub-set.
In another non-limiting example in which the resolution of the n-bit posit bit string 331 is decreased (e.g., the precision of the n-bit posit bit string 331 is varied to decrease the bit width of the n-bit posit bit string 331) by altering the numerical value and/or the quantity of bits associated with the exponent 337 bit sub-set, the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set may be either increased or decreased. For example, if the numerical value and/or the quantity of bits associated with the exponent 337 bit sub-set is increased or decreased, corresponding alterations may be made to the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set. In at least one embodiment, increasing or decreasing the numerical value and/or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set can include adding one or more zero bits to the regime 335 bit sub-set and/or the mantissa 339 bit sub-set and/or truncating the numerical value or the quantity of bits associated with the regime 335 bit sub-set and/or the mantissa 339 bit sub-set.
In some embodiments, changing the numerical value and/or a quantity of bits in the exponent bit sub-set can alter the dynamic range of the n-bit posit 331. For example, a 32-bit posit bit string with an exponent bit sub-set having a numerical value of zero (e.g., a 32-bit posit bit string with es=0, or a (32,0) posit bit string) can have a dynamic range of approximately 18 decades. However, a 32-bit posit bit string with an exponent bit sub-set having a numerical value of 3 (e.g., a 32-bit posit bit string with es=3, or a (32,3) posit bit string) can have a dynamic range of approximately 145 decades.
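Those dynamic-range figures can be sanity-checked with a short calculation (not part of the original text; it relies on the standard posit property that maxpos = useed^(n−2) and minpos = 1/maxpos):

\[
\text{decades} \;\approx\; \log_{10}\!\frac{maxpos}{minpos} \;=\; 2\,(n-2)\,2^{es}\,\log_{10}2
\]

For a (32,0) posit this gives 2 · 30 · 1 · 0.301 ≈ 18 decades, and for a (32,3) posit it gives 2 · 30 · 8 · 0.301 ≈ 145 decades, consistent with the values stated above.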
In the example of
If maxpos is the largest positive value of a bit string of the posits 431-1, 431-2, 431-3 and minpos is the smallest value of a bit string of the posits 431-1, 431-2, 431-3, maxpos may be equivalent to useed and minpos may be equivalent to 1/useed. Between maxpos and ±∞, a new bit value may be maxpos*useed, and between zero and minpos, a new bit value may be minpos/useed. These new bit values can correspond to a new regime bit 335. Between existing values x=2^m and y=2^n, where m and n differ by more than one, the new bit value may be given by the geometric mean √(x·y)=2^((m+n)/2), which corresponds to a new exponent bit 337. If the new bit value is midway between the existing x and y values next to it, the new bit value can represent the arithmetic mean (x+y)/2, which corresponds to a new mantissa bit 339.
As an illustrative example of adding bits to the 3-bit posit 431-1 to create the 4-bit posit 431-2 of
A non-limiting example of decoding a posit (e.g., a posit 431) to obtain its numerical equivalent follows. In some embodiments, the bit string corresponding to a posit p is an unsigned integer ranging from −2^(n−1) to 2^(n−1), k is an integer corresponding to the regime bits 335, and e is an unsigned integer corresponding to the exponent bits 337. If the set of mantissa bits 339 is represented as {ƒ1 ƒ2 . . . ƒfs} and ƒ is a value represented by 1.ƒ1 ƒ2 . . . ƒfs (e.g., by a one followed by a decimal point followed by the mantissa bits 339), then p can be given by Equation 2, below.
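Equation 2 itself does not survive in this text; a reconstruction consistent with the definitions of k, e, and ƒ above and with the Table 3 decoding example below is:

\[
p \;=\;
\begin{cases}
0, & \text{bit string of all zeros,}\\
\pm\infty, & \text{a one followed by all zeros,}\\
(-1)^{\text{sign}} \times useed^{\,k} \times 2^{e} \times f, & \text{otherwise.}
\end{cases}
\]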
A further illustrative example of decoding a posit bit string is provided below in connection with the posit bit string 0000110111011101 shown in Table 3, below.
In Table 3, the posit bit string 0000110111011101 is broken up into its constituent sets of bits (e.g., the sign bit 333, the regime bits 335, the exponent bits 337, and the mantissa bits 339). Since es=3 in the posit bit string shown in Table 3 (e.g., because there are three exponent bits), useed=256. Because the sign bit 333 is zero, the value of the numerical expression corresponding to the posit bit string shown in Table 3 is positive. The regime bits 335 have a run of three consecutive zeros corresponding to a value of −3 (as described above in connection with Table 1). As a result, the scale factor contributed by the regime bits 335 is 256^−3 (e.g., useed^k). The exponent bits 337 represent five (5) as an unsigned integer and therefore contribute an additional scale factor of 2^e=2^5=32. Lastly, the mantissa bits 339, which are given in Table 3 as 11011101, represent two-hundred and twenty-one (221) as an unsigned integer, so the mantissa bits 339, given above as ƒ, represent 1+221/256.
Using these values and Equation 2, the numerical value corresponding to the posit bit string given in Table 3 is 256^−3×2^5×(1+221/256)≈3.55393×10^−6.
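As an illustrative cross-check of the decoding steps above (not part of the original text; the function name and the string-of-bits input convention are assumptions, and the special cases of zero and ±∞ are not handled):

def decode_posit(bits, es):
    """Decode a posit bit string (a string of '0'/'1') with es exponent bits by
    extracting the sign, regime, exponent, and mantissa sub-sets as described above."""
    sign = -1 if bits[0] == '1' else 1
    body = bits[1:]
    first = body[0]
    m = len(body) - len(body.lstrip(first))      # length of the regime run
    k = (m - 1) if first == '1' else -m
    rest = body[m + 1:]                          # skip the terminating regime bit
    exp_bits, frac_bits = rest[:es], rest[es:]
    e = int(exp_bits, 2) if exp_bits else 0
    f = 1 + (int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0)
    useed = 2 ** (2 ** es)
    return sign * useed ** k * 2 ** e * f

print(decode_posit('0000110111011101', es=3))    # approximately 3.55393e-06, matching Table 3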
The control circuitry 520 can be configured to receive a command (e.g., an initiation command) from a host (e.g., the host 102/202 illustrated in
The logic circuitry 522 can be an arithmetic logic unit (ALU), a state machine, sequencer, controller, an instruction set architecture, one or more processors (e.g., processing device(s) or processing unit(s)), or other type of control circuitry. As described above, an ALU can include circuitry to perform operations (e.g., conversion operations to convert bit strings between various formats that support different levels of precision, etc.) such as the operations described above, using integer binary numbers, such as bit strings in the universal number or posit format. An instruction set architecture (ISA) can include a reduced instruction set computing (RISC) device. In embodiments in which the logic circuitry 522 includes a RISC device, the RISC device can include a processing resource or processing unit that can employ an ISA such as a RISC-V ISA; however, embodiments are not limited to RISC-V ISAs, and other processing devices and/or ISAs can be used.
In some embodiments, the logic circuitry 522 can be configured to execute instructions (e.g., instructions stored in the INSTR 525 portion of the memory resource 524) to perform the operations herein. For example, the logic circuitry 522 can be provisioned with sufficient processing resources to cause performance of operations to convert the bit strings between various formats and/or cause the sensing circuitry to perform arithmetic and/or logical operations using the converted bit strings received by the control circuitry 520.
Once the operation(s) are performed by the logic circuitry 522, the resultant bit strings can be stored in the memory resource 524 and/or a memory array (e.g., the memory array 230 illustrated in
The memory resource 524 can, in some embodiments, be a memory resource such as random-access memory (e.g., RAM, SRAM, etc.). Embodiments are not so limited, however, and the memory resource 524 can include various registers, caches, buffers, and/or memory arrays (e.g., 1T1C, 2T2C, 3T, etc. DRAM arrays). The memory resource 524 can be configured to receive a bit string(s) from, for example, a host such as the host 202 illustrated in
The memory resource 524 can be partitioned into one or more addressable memory regions. As shown in
As discussed above, the bit string(s) can be retrieved from the host and/or memory array in response to messages and/or commands generated by the host, a controller (e.g., the controller 210 illustrated in
In a non-limiting example, the control circuitry 520 can convert a 16-bit posit with es=0 into an 8-bit posit with es=0 for use in a neural network training application. Some approaches utilize a half-precision 16-bit floating-point bit string for neural network training; in contrast, an 8-bit posit bit string with es=0 can provide comparable neural network training results two to four times faster than the half-precision 16-bit floating-point bit string.
For example, if the control circuitry 520 receives a 16-bit posit bit string with es=0 for use in a neural network training application, the control circuitry 520 can selectively remove bits from one or more bit sub-sets of the 16-bit posit bit string to vary the precision of the 16-bit posit bit string to an 8-bit posit bit string with es=0. It will be appreciated that embodiments are not so limited, and the control circuitry 520 can vary the precision of the bit string to produce an 8-bit posit bit string with es=1 (or some other value). In addition, the control circuitry 520 can vary the precision of the 16-bit posit bit string to yield a 32-bit posit bit string (or some other value).
During performance of the operations connected with the above example, the control circuitry 520 can be configured to cause results of the operation at each iteration to be stored in circuitry in the periphery of a memory device or memory array. For example, the control circuitry 520 can be configured to cause results of the operation at each iteration to be stored in a plurality of peripheral sense amplifiers such as the periphery logic 270 illustrated in
A common function used in training neural networks is a sigmoid function ƒ(x) (e.g., a function that asymptotically approaches zero as x→−∞ and asymptotically approaches 1 as x→∞). An example of a sigmoid function that may be used in neural network training applications is ƒ(x)=1/(1+e^(−x)),
which can require upwards of one-hundred clock cycles to compute using half-precision 16-bit floating-point bit strings. However, using an 8-bit posit with es=0, the same function can be evaluated by flipping the first bit of the posit representing x and shifting two bits to the right—operations that may take at least an order of magnitude fewer clock signals in comparison to evaluation of the same function using a half-precision 16-bit floating-point bit string.
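A minimal sketch of that shortcut follows (not part of the original text; it assumes the 8-bit, es=0 posit is held as an ordinary integer, and the function name is a hypothetical illustration):

def fast_sigmoid_posit8(x):
    """Approximate the sigmoid of an 8-bit, es=0 posit x by flipping its first
    (sign) bit and shifting the result two bits to the right, as described above."""
    return ((x ^ 0x80) & 0xFF) >> 2

For example, the posit encoding of zero (0x00) maps to 0x20, which is the 8-bit, es=0 posit for 0.5, matching ƒ(0)=0.5.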
By allowing for results of iterations of the evaluation of the sigmoid function to be preserved without rounding or truncating the results of the iterations, the accuracy of the final result can be improved in comparison to approaches in which intermediate results of the operation are rounded or truncated. For example, by storing intermediate results of a recursive operation performed using the sensing circuitry to evaluate a sigmoid function in, for example, periphery sense amplifiers such as the periphery logic 270 illustrated in
In this example, by operating the control circuitry 520 to vary the precision of the posit bit string to yield a more desirable level of precision, processing time, resource consumption, and/or storage space can be reduced in comparison to approaches that do not include control circuitry 520 configured to perform such conversion and/or subsequent operations. This reduction in processing time, resource consumption, and/or storage space can improve the function of a computing device in which the control circuitry 520 is operating by reducing the number of clock signals used in performing such operations, which may reduce an amount of power consumed by the computing device and/or an amount of time to perform such operations, as well as by freeing up processing and/or memory resources for other tasks and functions.
In the example shown in
The cells of the memory array 630 can be arranged in rows coupled by access lines 662-X (Row X), 662-Y (Row Y), etc., and columns coupled by pairs of complementary sense lines (e.g., digit lines 653-1 labelled DIGIT(n) and 653-2 labelled DIGIT(n)_ in
Memory cells can be coupled to different digit lines and word lines. For instance, in this example, a first source/drain region of transistor 651-1 is coupled to digit line 653-1, a second source/drain region of transistor 651-1 is coupled to capacitor 652-1, and a gate of transistor 651-1 is coupled to word line 662-Y. A first source/drain region of transistor 651-2 is coupled to digit line 653-2, a second source/drain region of transistor 651-2 is coupled to capacitor 652-2, and a gate of transistor 651-2 is coupled to word line 662-X. A cell plate, as shown in
The digit lines 653-1 and 653-2 of memory array 630 are coupled to sensing component 650 in accordance with a number of embodiments of the present disclosure. In this example, the sensing component 650 comprises a sense amplifier 654 and a compute component 665 corresponding to a respective column of memory cells (e.g., coupled to a respective pair of complementary digit lines). The sense amplifier 654 is coupled to the pair of complementary digit lines 653-1 and 653-2. The compute component 665 is coupled to the sense amplifier 654 via pass gates 655-1 and 655-2. The gates of the pass gates 655-1 and 655-2 can be coupled to selection logic 613.
The selection logic 613 can include pass gate logic for controlling pass gates that couple the pair of complementary digit lines un-transposed between the sense amplifier 654 and the compute component 665 and swap gate logic for controlling swap gates that couple the pair of complementary digit lines transposed between the sense amplifier 654 and the compute component 665. The selection logic 613 can be coupled to the pair of complementary digit lines 653-1 and 653-2 and configured to perform logical operations on data stored in array 630. For instance, the selection logic 613 can be configured to control continuity of (e.g., turn on/turn off) pass gates 655-1 and 655-2 based on a selected logical operation that is being performed.
The sense amplifier 654 can be operated to determine a data value (e.g., logic state) stored in a selected memory cell. The sense amplifier 654 can comprise a cross coupled latch 615 (e.g., gates of a pair of transistors, such as n-channel transistors 661-1 and 661-2 are cross coupled with the gates of another pair of transistors, such as p-channel transistors 629-1 and 629-2), which can be referred to herein as a primary latch. However, embodiments are not limited to this example.
In operation, when a memory cell is being sensed (e.g., read), the voltage on one of the digit lines 653-1 or 653-2 will be slightly greater than the voltage on the other one of digit lines 653-1 or 653-2. An ACT signal and an RNL* signal can be driven low to enable (e.g., fire) the sense amplifier 654. The digit line 653-1 or 653-2 having the lower voltage will turn on one of the transistors 629-1 or 629-2 to a greater extent than the other of transistors 629-1 or 629-2, thereby driving high the digit line 653-1 or 653-2 having the higher voltage to a greater extent than the other digit line 653-1 or 653-2 is driven high.
Similarly, the digit line 653-1 or 653-2 having the higher voltage will turn on one of the transistors 661-1 or 661-2 to a greater extent than the other of the transistors 661-1 or 661-2, thereby driving low the digit line 653-1 or 653-2 having the lower voltage to a greater extent than the other digit line 653-1 or 653-2 is driven low. As a result, after a short delay, the digit line 653-1 or 653-2 having the slightly greater voltage is driven to the voltage of the supply voltage Vcc through a source transistor, and the other digit line 653-1 or 653-2 is driven to the voltage of the reference voltage (e.g., ground) through a sink transistor. Therefore, the cross coupled transistors 661-1 and 661-2 and transistors 629-1 and 629-2 serve as a sense amplifier pair, which amplify the differential voltage on the digit lines 653-1 and 653-2 and operate to latch a data value sensed from the selected memory cell.
Embodiments are not limited to the sensing component configuration illustrated in
The sensing component 650 can be one of a plurality of sensing components selectively coupled to a shared I/O line. As such, the sensing component 650 can be used in association with reversing data stored in memory in accordance with a number of embodiments of the present disclosure.
In this example, the sense amplifier 654 includes equilibration circuitry 659, which can be configured to equilibrate the digit lines 653-1 and 653-2. The equilibration circuitry 659 comprises a transistor 658 coupled between digit lines 653-1 and 653-2. The equilibration circuitry 659 also comprises transistors 656-1 and 656-2 each having a first source/drain region coupled to an equilibration voltage (e.g., VDD/2), where VDD is a supply voltage associated with the array. A second source/drain region of transistor 656-1 is coupled to digit line 653-1, and a second source/drain region of transistor 656-2 is coupled to digit line 653-2. Gates of transistors 658, 656-1, and 656-2 can be coupled together and to an equilibration (EQ) control signal line 657. As such, activating EQ enables the transistors 658, 656-1, and 656-2, which effectively shorts digit lines 653-1 and 653-2 together and to the equilibration voltage (e.g., VDD/2). Although
As shown in
An example of pseudo code associated with loading (e.g., copying) a first data value stored in a cell coupled to row 662-X into the accumulator can be summarized as follows:
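The pseudo code itself does not survive in this text; a reconstruction consistent with the step-by-step description that follows (the parenthetical notes are assumptions drawn from that description) is:

Copy Row X into the Accumulator:
  Deactivate EQ
  Open Row X
  Fire Sense Amps (after which the Row X data value resides in the sense amplifier 654)
  Activate LOAD (the sense amplifier data is transferred to nodes S and S* of the compute component 665 and resides there dynamically)
  Deactivate LOAD
  Close Row X
  Precharge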
In the pseudo code above, “Deactivate EQ” indicates that an equilibration signal (EQ signal shown in
After Row X is enabled, in the pseudo code above, “Fire Sense Amps” indicates that the sense amplifier 654 is enabled to set the primary latch, as has been described herein, and subsequently disabled. For example, as shown at t3 in
The four sets of possible sense amplifier and accumulator signals illustrated in
After firing the sense amps, in the pseudo code above, “Activate LOAD” indicates that the LOAD control signal goes high as shown at t4 in
After setting the secondary latch from the data values stored in the sense amplifier (and present on the data lines 653-1 (DIGIT(n) or 653-2 (DIGIT(n)_ in
After storing the data value on the secondary latch, the selected row (e.g., ROW X) is disabled (e.g., deselected, closed such as by deactivating a select signal for a particular row) as indicated by “Close Row X” and indicated at t6 in
A subsequent operation phase associated with performing the AND or the OR operation on the first data value (now stored in the sense amplifier 654 and the secondary latch of the compute component 665 shown in
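The pseudo code for this phase likewise does not survive in this text; a reconstruction consistent with the description that follows (parenthetical notes are assumptions drawn from that description) is:

  Deactivate EQ
  Open Row Y
  Fire Sense Amps (after which the Row Y data value resides in the sense amplifier 654)
  Close Row Y (optionally, if the result is not to be stored back in the Row Y memory cell)
  Activate AND
  Deactivate AND
  Precharge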
In the pseudo code above, “Deactivate EQ” indicates that an equilibration signal corresponding to the sense amplifier 654 is disabled (e.g., such that the complementary data lines 653-1 DIGIT(n) or 653-2 DIGIT(n)_ are no longer shorted to VDD/2), which is illustrated in
After Row Y is enabled, in the pseudo code above, “Fire Sense Amps” indicates that the sense amplifier 654 is enabled to amplify the differential signal between 653-1 (DIGIT(n)) and 653-2 (DIGIT(n)_), resulting in a voltage (e.g., VDD) corresponding to a logic 1 or a voltage (e.g., GND) corresponding to a logic 0 being on data line 653-1 (DIGIT(n)). The voltage corresponding to the other logic state is on complementary data line 653-2 (DIGIT(n)_). As shown at t10 in
After the second data value sensed from the memory cell coupled to Row Y is stored in the primary latch of the sense amplifier 654, in the pseudo code above, “Close Row Y” indicates that the selected row (e.g., ROW Y) can be disabled if it is not desired to store the result of the AND logical operation back in the memory cell corresponding to Row Y. However,
After the selected Row Y is configured (e.g., to isolate the memory cell or not isolate the memory cell), “Activate AND” in the pseudo code above indicates that the AND control signal goes high as shown in
With the first data value (e.g., Row X) stored in the dynamic latch of the accumulator 665 and the second data value (e.g., Row Y) stored in the sense amplifier 654, if the dynamic latch of the compute component 665 contains a “0” (i.e., a voltage corresponding to a “0” on node S* and a voltage corresponding to a “1” on node S), the sense amplifier data is written to a “0” (regardless of the data value previously stored in the sense amp). This is because the voltage corresponding to a “1” on node S causes transistor 661-1 to conduct, thereby coupling the sense amplifier 654 to ground through transistor 661-1, pass transistor 655-1, and data line 653-1 (D). When either data value of an AND operation is “0,” the result is a “0.” Here, when the first data value (in the dynamic latch) is a “0,” the result of the AND operation is a “0” regardless of the state of the second data value. Thus the configuration of the sensing circuitry causes the “0” result to be written and initially stored in the sense amplifier 654. This operation leaves the data value in the accumulator unchanged (e.g., from Row X).
If the secondary latch of the accumulator contains a “1” (e.g., from Row X), then the result of the AND operation depends on the data value stored in the sense amplifier 654 (e.g., from Row Y). The result of the AND operation should be a “1” if the data value stored in the sense amplifier 654 (e.g., from Row Y) is also a “1,” but the result of the AND operation should be a “0” if the data value stored in the sense amplifier 654 (e.g., from Row Y) is a “0.” The sensing circuitry 650 is configured such that if the dynamic latch of the accumulator contains a “1” (i.e., a voltage corresponding to a “1” on node S* and a voltage corresponding to a “0” on node S), transistor 661-1 does not conduct, the sense amplifier is not coupled to ground (as described above), and the data value previously stored in the sense amplifier 654 remains unchanged (e.g., Row Y data value so the AND operation result is a “1” if the Row Y data value is a “1” and the AND operation result is a “0” if the Row Y data value is a “0”). This operation leaves the data value in the accumulator unchanged (e.g., from Row X).
After the result of the AND operation is initially stored in the sense amplifier 654, “Deactivate AND” in the pseudo code above indicates that the AND control signal goes low as shown at t12 in
Although the timing diagrams illustrated in
A subsequent operation phase can alternately be associated with performing the OR operation on the first data value (now stored in the sense amplifier 654 and the secondary latch of the compute component 665) and the second data value (stored in a memory cell coupled to Row Y 662-Y). The operations to load the Row X data into the sense amplifier and accumulator that were previously described with respect to times t1-t7 shown in
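The pseudo code for the OR phase also does not survive in this text; a reconstruction consistent with the description that follows (parenthetical notes are assumptions drawn from that description) is:

  Deactivate EQ
  Open Row Y
  Fire Sense Amps (after which the Row Y data value resides in the sense amplifier 654)
  Close Row Y (optionally, if the result is not to be stored back in the Row Y memory cell)
  Activate OR
  Deactivate OR
  Precharge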
The “Deactivate EQ” (shown at t8 in
With the first data value (e.g., Row X) stored in the secondary latch of the compute component 665 and the second data value (e.g., Row Y) stored in the sense amplifier 654, if the dynamic latch of the accumulator contains a “0” (i.e., a voltage corresponding to a “0” on node S* and a voltage corresponding to a “1” on node S), then the result of the OR operation depends on the data value stored in the sense amplifier 654 (e.g., from Row Y). The result of the OR operation should be a “1” if the data value stored in the sense amplifier 654 (e.g., from Row Y) is a “1,” but the result of the OR operation should be a “0” if the data value stored in the sense amplifier 654 (e.g., from Row Y) is also a “0.” The sensing circuitry 650 is configured such that if the dynamic latch of the accumulator contains a “0,” with the voltage corresponding to a “0” on node S*, transistor 661-2 is off and does not conduct (and pass transistor 655-1 is also off since the AND control signal is not asserted) so the sense amplifier 654 is not coupled to ground (either side), and the data value previously stored in the sense amplifier 654 remains unchanged (e.g., Row Y data value such that the OR operation result is a “1” if the Row Y data value is a “1” and the OR operation result is a “0” if the Row Y data value is a “0”).
If the dynamic latch of the accumulator contains a “1” (i.e., a voltage corresponding to a “1” on node S* and a voltage corresponding to a “0” on node S), transistor 661-2 does conduct (as does pass transistor 655-2 since the OR control signal is asserted), and the sense amplifier 654 input coupled to data line 653-2 (D_) is coupled to ground since the voltage corresponding to a “1” on node S* causes transistor 661-2 to conduct along with pass transistor 655-2 (which also conducts since the OR control signal is asserted). In this manner, a “1” is initially stored in the sense amplifier 654 as a result of the OR operation when the secondary latch of the accumulator contains a “1” regardless of the data value previously stored in the sense amp. This operation leaves the data in the accumulator unchanged.
After the result of the OR operation is initially stored in the sense amplifier 654, “Deactivate OR” in the pseudo code above indicates that the OR control signal goes low as shown at t12 in
The sensing circuitry 650 illustrated in
In a similar approach to that described above with respect to inverting the data values for the AND and OR operations described above, the sensing circuitry shown in
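A reconstruction of the corresponding pseudo code, consistent with the description that follows (parenthetical notes are assumptions drawn from that description), is:

Copy Row X into the Accumulator and invert:
  Deactivate EQ
  Open Row X
  Fire Sense Amps (after which the Row X data value resides in the sense amplifier 654)
  Activate LOAD (the sense amplifier data is transferred to nodes S and S* of the compute component 665)
  Deactivate LOAD
  Activate ANDinv and ORinv (which places the complement data value on the data lines)
  Deactivate ANDinv and ORinv
  Close Row X
  Precharge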
The “Deactivate EQ,” “Open Row X,” “Fire Sense Amps,” “Activate LOAD,” and “Deactivate LOAD” shown in the pseudo code above indicate the same functionality as the same operations in the pseudo code for the “Copy Row X into the Accumulator” initial operation phase described above for the AND operation and OR operation. However, rather than closing the Row X and precharging after the Row X data is loaded into the sense amplifier 654 and copied into the dynamic latch, a complement version of the data value in the dynamic latch of the accumulator can be placed on the data line and thus transferred to the sense amplifier 654. This is done by enabling (e.g., causing the transistors to conduct) and then disabling the invert transistors (e.g., ANDinv and ORinv). This results in the sense amplifier 654 being flipped from the true data value that was previously stored in the sense amplifier to a complement data value (e.g., inverted data value) being stored in the sense amp. As such, a true or complement version of the data value in the accumulator can be transferred to the sense amplifier based upon activating or not activating ANDinv and/or ORinv. This operation leaves the data in the accumulator unchanged.
Because the sensing circuitry 650 shown in
When performing logical operations in this manner, the sense amplifier 654 can be pre-seeded with a data value from the dynamic latch of the accumulator to reduce overall current utilized, because the sense amps 654 are not at full rail voltages (e.g., supply voltage or ground/reference voltage) when the accumulator function is copied to the sense amplifier 654. An operation sequence with a pre-seeded sense amplifier 654 either forces one of the data lines to the reference voltage (leaving the complementary data line at VDD/2) or leaves the complementary data lines unchanged. The sense amplifier 654 pulls the respective data lines to full rails when the sense amplifier 654 fires. Using this sequence of operations will overwrite data in an enabled row.
The logic table illustrated in
Via selective control of the continuity of the pass gates 655-1 and 655-2 and the swap transistors, each of the three columns of the first set of two rows of the upper portion of the logic table of
The columns of the lower portion of the logic table illustrated in
As such, the sensing circuitry shown in
According to various embodiments, general computing can be enabled in a memory array core of a processor-in-memory (PIM) device such as a DRAM one transistor per memory cell (e.g., 1T1C) configuration at 6F^2 or 4F^2 memory cell sizes, for example. A potential advantage of the apparatuses and methods described herein may not be realized in terms of single instruction speed, but rather the cumulative speed that can be achieved by an entire bank of data being computed in parallel without necessarily transferring data out of the memory array (e.g., DRAM) or firing a column decode. For instance, data transfer time can be reduced or eliminated. As an example, apparatuses of the present disclosure can perform ANDs, ORs, or SHIFTs in parallel (e.g., concurrently), using data values in memory cells coupled to a data line (e.g., a column of 16K memory cells).
A signed division operation can be performed in parallel without transferring data out of the array via an I/O line. Further, previous approaches included sensing circuits where data is moved out for logical operation processing (e.g., using 32 or 64 bit registers) and included fewer operations being performed in parallel compared to the apparatus of the present disclosure. In this manner, significantly higher throughput can be provided, along with the efficiency gained by avoiding transfers of data out of the array, by ensuring the data is stored in such a way that operations can be performed on the data in parallel. An apparatus and/or methods according to the present disclosure can also use less energy/area than configurations where the logical operation is discrete from the memory. Furthermore, an apparatus and/or methods of the present disclosure can provide additional energy/area advantages since the in-memory-array logical operations eliminate certain data value transfers.
At block 984, the method 980 can include transferring the data values into respective latches of compute components coupled to respective sense amplifiers among the plurality of sense amplifiers. The compute components can be analogous to the compute component 665 illustrated in
At block 986, the method can include determining a result of the arithmetic operation, the logical operation, or both using the compute component. In some embodiments, the method 980 can include determining the result of the arithmetic operation, the logical operation, or both without performing a sense line address access. For example, the compute component can be configured to perform operations to determine the result of the arithmetic operation, the logical operation, or both without enabling access lines to transfer data to or from circuitry external to a memory array in which the compute component is deployed. In some embodiments, the compute component can be configured to perform operations to determine the result of the arithmetic operation, the logical operation, or both prior to receiving an access command and/or an address for a sense line associated with the compute component or sense amplifiers.
The method 980 can further include storing the result of the arithmetic operation, the logical operation, or both to an array of memory cells without activating input/output lines coupled to the plurality of sense amplifiers. For example, the method 980 can include storing the result of the arithmetic operation, the logical operation, or both in a memory array (e.g., the memory array 130/230 illustrated in
In some embodiments, the bit string can be used as a first operand in performing the arithmetic operation, the logical operation, or both, and the method 980 can include determining that the bit string has a same quantity of bits or a same data type as a bit string that is used as a second operand in performing the arithmetic operation, the logical operation, or both, and performing the arithmetic operation, the logical operation, or both using the first operand and the second operand. For example, the method 980 can include determining that the bit string has a same bit string shape as a bit string used as a second operand in performing the arithmetic operation, the logical operation, or both, as described above.
The method 980 can further include storing the result of the arithmetic operation, the logical operation, or both in a plurality of storage locations that are in a periphery region of the plurality of sense amplifiers and the compute components, wherein the result of the arithmetic operation, the logical operation, or both is an exact result of the arithmetic operation, the logical operation, or both. For example, the method 980 can include storing exact (e.g., un-rounded) intermediate results of a recursive operation performed using the bit string(s) in a periphery location of the memory array while subsequent iterations of the recursive operation are performed.
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
This application is a Divisional of U.S. application Ser. No. 16/540,329, filed Aug. 14, 2019, which issued as U.S. Pat. No. 11,360,768 on Jun. 14, 2022, the contents of which are incorporated herein by reference.
20180136871 | Leidel | May 2018 | A1 |
20180239531 | Lea | Aug 2018 | A1 |
20180239712 | Lea | Aug 2018 | A1 |
20180240510 | Hush et al. | Aug 2018 | A1 |
20190138440 | Lee | May 2019 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---
102141905 | Aug 2011 | CN |
0214718 | Mar 1987 | EP |
2026209 | Feb 2009 | EP |
H0831168 | Feb 1996 | JP |
2009259193 | Mar 2015 | JP |
10-0211482 | Aug 1999 | KR |
10-2010-0134235 | Dec 2010 | KR |
10-2013-0049421 | May 2013 | KR |
2001065359 | Sep 2001 | WO |
2010079451 | Jul 2010 | WO |
2013062596 | May 2013 | WO |
2013081588 | Jun 2013 | WO |
2013095592 | Jun 2013 | WO |
Other Publications
Entry |
---|
Dybdahl, et al., “Destructive-Read in Embedded DRAM, Impact on Power Consumption,” Apr. 2006, (10 pgs.), vol. 2, Issue 2, Journal of Embedded Computing-Issues in embedded single-chip multicore architectures. |
Kogge, et al., “Processing In Memory: Chips to Petaflops,” May 23, 1997, (8 pgs.), retrieved from: http://www.cs.ucf.edu/courses/cda5106/summer02/papers/kogge97PIM.pdf. |
Draper, et al., “The Architecture of the DIVA Processing-In-Memory Chip,” Jun. 22-26, 2002, (12 pgs.), ICS '02, retrieved from: http://www.isi.edu/~draper/papers/ics02.pdf. |
Adibi, et al., “Processing-In-Memory Technology for Knowledge Discovery Algorithms,” Jun. 25, 2006, (10 pgs.), Proceedings of the Second International Workshop on Data Management on New Hardware, retrieved from: http://www.cs.cmu.edu/~damon2006/pdf/adibi06inmemory.pdf. |
U.S. Appl. No. 13/449,082, entitled, “Methods and Apparatus for Pattern Matching,” filed Apr. 17, 2012, (37 pgs.). |
U.S. Appl. No. 13/743,686, entitled, “Weighted Search and Compare in a Memory Device,” filed Jan. 17, 2013, (25 pgs.). |
U.S. Appl. No. 13/774,636, entitled, “Memory as a Programmable Logic Device,” filed Feb. 22, 2013, (30 pgs.). |
U.S. Appl. No. 13/774,553, entitled, “Neural Network in a Memory Device,” filed Feb. 22, 2013, (63 pgs.). |
U.S. Appl. No. 13/796,189, entitled, “Performing Complex Arithmetic Functions in a Memory Device,” filed Mar. 12, 2013, (23 pgs.). |
International Search Report and Written Opinion for PCT Application No. PCT/US2013/043702, dated Sep. 26, 2013, (11 pgs.). |
Pagiamtzis, et al., “Content-Addressable Memory (CAM) Circuits and Architectures: A Tutorial and Survey”, Mar. 2006, (16 pgs.), vol. 41, No. 3, IEEE Journal of Solid-State Circuits. |
Pagiamtzis, Kostas, “Content-Addressable Memory Introduction”, Jun. 25, 2007, (6 pgs.), retrieved from: http://www.pagiamtzis.com/cam/camintro. |
Debnath, Biplob, “BloomFlash: Bloom Filter on Flash-Based Storage”, 2011 31st International Conference on Distributed Computing Systems, Jun. 20-24, 2011, (10 pgs.). |
Derby, et al., “A High-Performance Embedded DSP Core with Novel SIMD Features”, Apr. 6-10, 2003, (4 pgs.), vol. 2, pp. 301-304, 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing. |
“4.9.3 MINLOC and MAXLOC”, Jun. 12, 1995, (5 pgs.), Message Passing Interface Forum 1.1, retrieved from http://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html. |
Stojmenovic, “Multiplicative Circulant Networks: Topological Properties and Communication Algorithms”, (25 pgs.), Discrete Applied Mathematics 77 (1997) 281-305. |
Boyd et al., “On the General Applicability of Instruction-Set Randomization”, Jul.-Sep. 2010, (14 pgs.), vol. 7, Issue 3, IEEE Transactions on Dependable and Secure Computing. |
Elliot, et al., “Computational RAM: Implementing Processors in Memory”, Jan.-Mar. 1999, (10 pgs.), vol. 16, Issue 1, IEEE Design and Test of Computers Magazine. |
Gustafson, et al., “Beating Floating Point at its Own Game: Posit Arithmetic”, Jan. 2017, (16 pgs.), retrieved from: http://www.johngustafson.net/pdfs/BeatingFloatingPoint.pdf. |
International Search Report and Written Opinion from related PCT Application No. PCT/US2020/043668, dated Oct. 26, 2020, (14 pgs.). |
Jaiswal, et al., “Universal Number Posit Arithmetic Generator on FPGA”, 2018 Design, Automation & Test in Europe Conference & Exhibition, Apr. 23, 2018, (4 pgs.). |
Prior Publication Data
Number | Date | Country
---|---|---
20220308875 A1 | Sep 2022 | US |
Related U.S. Application Data
Relation | Number | Date | Country
---|---|---|---
Parent | 16540329 | Aug 2019 | US
Child | 17838884 | | US