The present disclosure relates generally to semiconductor memory and methods, and more particularly, to apparatuses and methods for cache operations.
Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic systems. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., host data, error data, etc.) and includes random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), and thyristor random access memory (TRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others.
Electronic systems often include a number of processing resources (e.g., one or more processors), which may retrieve and execute instructions and store the results of the executed instructions to a suitable location. A processor can comprise a number of functional units such as arithmetic logic unit (ALU) circuitry, floating point unit (FPU) circuitry, and a combinatorial logic block, for example, which can be used to execute instructions by performing an operation on data (e.g., one or more operands). As used herein, an operation can be, for example, a Boolean operation, such as AND, OR, NOT, NAND, NOR, and XOR, and/or other operations (e.g., invert, shift, arithmetic, statistics, among many other possible operations). For example, functional unit circuitry may be used to perform arithmetic operations, such as addition, subtraction, multiplication, and division on operands, via a number of logical operations.
A number of components in an electronic system may be involved in providing instructions to the functional unit circuitry for execution. The instructions may be executed, for instance, by a processing resource such as a controller and/or host processor. Data (e.g., the operands on which the instructions will be executed) may be stored in a memory array that is accessible by the functional unit circuitry. The instructions and/or data may be retrieved from the memory array and sequenced and/or buffered before the functional unit circuitry begins to execute instructions on the data. Furthermore, as different types of operations may be executed in one or multiple clock cycles through the functional unit circuitry, intermediate results of the instructions and/or data may also be sequenced and/or buffered. A sequence to complete an operation in one or more clock cycles may be referred to as an operation cycle. Time consumed to complete an operation cycle can be costly in terms of processing and computing performance and/or power consumption of a computing apparatus and/or system.
In many instances, the processing resources (e.g., processor and associated functional unit circuitry) may be external to the memory array, and data is accessed via a bus between the processing resources and the memory array to execute a set of instructions. Processing performance may be improved in a processing-in-memory (PIM) device, in which a processor may be implemented internally and/or near to a memory (e.g., directly on a same chip as the memory array). A processing-in-memory device may save time by reducing or eliminating external communications and may also conserve power.
The present disclosure includes apparatuses and methods for cache operations (e.g., for processing-in-memory (PIM) structures). In at least one embodiment, the apparatus includes a memory device including a plurality of subarrays of memory cells, where the plurality of subarrays includes a first subset of the respective plurality of subarrays and a second subset of the respective plurality of subarrays. The memory device includes sensing circuitry coupled to the first subset, the sensing circuitry including a sense amplifier and a compute component. The first subset is configured as a cache to perform operations on data moved from the second subset. The apparatus also includes a cache controller configured to direct a first movement of a data value from a subarray in the second subset to a subarray in the first subset.
The cache controller may also be configured to direct a second movement of the data value on which an operation has been performed from the subarray in the first subset to a subarray in the second subset. For example, the cache controller can be configured to direct a first movement of a data value from a subarray in the second subset to a subarray in the first subset for performance of an operation on the data value by the sensing circuitry coupled to the first subset. The cache controller also can be configured to direct performance of a second movement of the data value, on which the operation has been performed, from the subarray in the first subset, in some embodiments, back to storage in the subarray in the second subset in which the data value was previously stored.
Such a sequence of data movements and/or operations performed on the data value in the first subset (e.g., cache), rather than in the second subset (e.g., storage), is directed by a cache controller configured to do so during a data processing operation, independently of a host.
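By way of illustration only, the following scaled-down Python sketch models the structure described above: a first subset of cache subarrays whose sensing circuitry includes compute components, a second subset of storage subarrays, and a bank that groups them. All class and field names (e.g., Subarray, has_compute) are hypothetical and chosen for readability; they are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subarray:
    """A (deliberately tiny) subarray of single-bit memory cells."""
    rows: int          # number of rows (word lines); fewer rows models a shorter digit line
    cols: int          # number of columns (pairs of complementary digit lines)
    has_compute: bool  # True: sensing circuitry includes compute components (cache subarray)
    cells: List[List[int]] = field(init=False)

    def __post_init__(self) -> None:
        self.cells = [[0] * self.cols for _ in range(self.rows)]

@dataclass
class Bank:
    """A bank holding a first (cache) subset and a second (storage) subset of subarrays."""
    first_subset: List[Subarray]   # short digit line subarrays with PIM sensing circuitry
    second_subset: List[Subarray]  # long digit line subarrays used primarily for storage

# Example instantiation with scaled-down dimensions.
bank = Bank(
    first_subset=[Subarray(rows=4, cols=8, has_compute=True) for _ in range(2)],
    second_subset=[Subarray(rows=8, cols=8, has_compute=False) for _ in range(6)],
)
```

A behavioral sketch of the cache controller that manages movement between the two subsets follows the write-back discussion below.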
Ordinal numbers such as first and second are used herein to assist in distinguishing between similar components (e.g., subarrays of memory cells, subsets thereof, etc.) and are not used to indicate a particular ordering and/or relationship between the components, unless the context clearly dictates otherwise (e.g., by using terms such as adjacent, etc.). For example, a first subarray may be subarray 4 relative to subarray 0 in a bank of subarrays and the second subarray may be any other subsequent subarray (e.g., subarray 5, subarray 8, subarray 61, among other possibilities) or the second subarray may be any other preceding subarray (e.g., subarrays 3, 2, 1, or 0). Moreover, moving data values from a first subarray to a second subarray is provided as a non-limiting example of such data movement. For example, in some embodiments, the data values may be moved sequentially and/or in parallel from each subarray to another subarray in a same bank (e.g., which can be an adjacent subarray and/or separated by a number of other subarrays) or a different bank.
A host system and a controller may perform the address resolution on an entire block of program instructions (e.g., PIM command instructions) and data and direct (e.g., control) allocation, storage, and/or movement (e.g., flow) of data and commands into allocated locations (e.g., subarrays and portions of subarrays) within a destination (e.g., target) bank. Writing data and executing commands (e.g., performing operations, as described herein) may utilize a normal DRAM write path to the DRAM device. As the reader will appreciate, while a DRAM-style PIM device is discussed with regard to examples presented herein, embodiments are not limited to a PIM DRAM implementation.
As described herein, embodiments can allow a host system to initially allocate a number of locations (e.g., sub-arrays (or “subarrays”)) and portions of subarrays, in one or more DRAM banks to hold (e.g., store) data (e.g., in the second subset of subarrays). However, in the interest of increased speed, rate, and/or efficiency of data processing (e.g., operations performed on the data values), the data values can be moved (e.g., copied, transferred, and/or transported) to another subarray (e.g., in the first subset of subarrays) that is configured for the increased speed, rate, and/or efficiency of data processing, as described herein.
The performance of PIM systems may be affected by memory access times (e.g., the row cycle time). An operation for data processing may include a row of memory cells in a bank being opened (accessed), the memory cells being read from and/or written to, and then the row being closed. The period of time taken for such operations may depend on the number of memory cells per compute component (e.g., compute component 231 in sensing circuitry 250 in
A memory device (e.g., a PIM DRAM memory device) is described herein as including a plurality of subarrays with at least one of the subarrays being configured with digit lines that are shorter (e.g., have fewer memory cells per column of memory cells and/or a shorter physical length of the column) than the digit lines of the other subarrays within the memory device (e.g., in the same memory bank). The subarrays with shorter digit lines may have resultant faster access times to the memory cells and the sensing circuitry may be configured with PIM functionality, as described herein, to be used in conjunction with the faster access times.
As such, the subarrays with shorter digit lines and PIM functionality can be used as a cache to perform operations at an increased speed, rate, and/or efficiency for the subarrays configured with longer digit lines (e.g., thus having slower access times). The subarrays with longer digit lines can be used for data storage to take advantage of the relatively higher number of memory cells in their longer digit lines. In some embodiments, the subarrays with the longer digit lines can be further configured for a higher density of memory cells for more efficient data storage. For example, a higher density may be achieved in part by omitting PIM functionality from the sensing circuitry because the operations are performed after the data values are moved to the cache rather than on the data values in storage. Alternatively or in combination, the longer digit line subarrays may be configured (e.g., formed) using a higher density memory architecture (e.g., 1T1C memory cells), while the shorter digit line subarrays may be configured using a lower density architecture (e.g., 2T2C memory cells). Other changes to the architecture may be made to increase the speed, rate, and/or efficiency of data access in shorter digit line subarrays versus longer digit line subarrays (e.g., using different memory array architectures, such as DRAM, SRAM, etc., in the short and long digit line subarrays, varying word line lengths, among other potential changes).
Accordingly, a plurality of subarrays, with a first subset of the plurality having relatively shorter digit lines and a second subset of the plurality having relatively longer digit lines, can be included in a bank of a memory device (e.g., intermixed in various embodiments, as described herein). The subarrays with the shorter digit lines may be used as caches to perform operations for the subarrays with longer digit lines. Computation (e.g., performance of the operations) may occur either primarily or only in the subarrays with the shorter digit lines, resulting in increased performance relative to the subarrays with the longer digit lines. The subarrays with longer digit lines may be used primarily or only for data storage and, as such, may be configured for memory density. In some embodiments, the subarrays with longer digit lines may be configured with at least some PIM functionality (e.g., to provide an alternative to movement of a large amount of data on which few cumulative operations would be performed in the subarrays of the first subset, among other reasons). However, it may be preferable, regardless of whether the longer digit lines may be configured with at least some PIM functionality, to move (e.g., copy, transfer, and/or transport) the data to and from the shorter digit line subarrays to perform relatively higher speed single operations and/or sequences of operations. As such, in some embodiments, only the short digit line subarrays of the first subset may have any PIM functionality, thereby possibly saving die area and/or power consumption.
For example, the rows of memory cells in a short digit line subarray may be utilized as a number of caches for the long digit line (e.g., storage) subarrays. A cache controller can manage data movement between the two types of subarrays and can store information to document data being moved from source rows of particular storage subarrays to destination rows of particular cache subarrays, and vice versa. In some embodiments, the short digit line subarrays may operate as write-back caches from which the cache controller automatically returns a data value or a series of data values after completion of an operation thereon.
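By way of illustration only, the following self-contained Python sketch models the write-back behavior and bookkeeping just described: a hypothetical CacheController records which storage (source) row each cache (destination) row was moved from and automatically returns the data after an operation completes. The row representation and method names are assumptions made for readability, not the actual control logic.

```python
from typing import Dict, List, Tuple

RowAddr = Tuple[int, int]  # (subarray index, row index)

class CacheController:
    """Tracks rows moved from storage (long digit line) subarrays into cache
    (short digit line) subarrays and writes them back after an operation."""

    def __init__(self, storage: List[List[List[int]]], cache: List[List[List[int]]]):
        self.storage = storage                  # storage[subarray][row] -> list of data values
        self.cache = cache                      # cache[subarray][row]   -> list of data values
        self.tags: Dict[RowAddr, RowAddr] = {}  # cache row -> originating storage row

    def load(self, src: RowAddr, dst: RowAddr) -> None:
        """First movement: copy a storage row into a cache row and record its origin."""
        self.cache[dst[0]][dst[1]] = list(self.storage[src[0]][src[1]])
        self.tags[dst] = src

    def operate_and_write_back(self, dst: RowAddr, op) -> None:
        """Perform an operation on the cached row, then automatically return the
        result to the storage row it came from (write-back behavior)."""
        self.cache[dst[0]][dst[1]] = [op(bit) for bit in self.cache[dst[0]][dst[1]]]
        src = self.tags.pop(dst)                # second movement, back to the source row
        self.storage[src[0]][src[1]] = list(self.cache[dst[0]][dst[1]])

# Tiny demonstration: one storage subarray, one cache subarray, 4-bit rows.
storage = [[[1, 0, 1, 1], [0, 0, 1, 0]]]
cache = [[[0, 0, 0, 0]]]
controller = CacheController(storage, cache)
controller.load(src=(0, 1), dst=(0, 0))
controller.operate_and_write_back(dst=(0, 0), op=lambda bit: bit ^ 1)  # e.g., an invert operation
print(storage[0][1])  # [1, 1, 0, 1] -- operated-on data returned to its source row
```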
A bank in a memory device might include a plurality of subarrays of memory cells in which a plurality of partitions can each include a respective grouping of the plurality of the subarrays. In various embodiments, an I/O line shared by a plurality of partitions (e.g., a data bus for inter-partition and/or intra-partition data movement, as described herein) can be configured to separate the plurality of subarrays into the plurality of partitions by selectably connecting and disconnecting the partitions using isolation circuitry associated with the shared I/O line to form separate portions of the shared I/O line. As such, a shared I/O line associated with isolation circuitry at a plurality of locations along its length can be used to separate the partitions of subarrays into effectively separate blocks in various combinations (e.g., numbers of subarrays in each partition, depending on whether various subarrays and/or partitions are connected via the portions of shared I/O line, etc., as directed by a controller). This can enable block data movement within individual partitions to occur substantially in parallel.
Isolation of the partitions can increase speed, rate, and/or efficiency of data movement within each partition and in a combination of a plurality of partitions (e.g., some or all the partitions) by the data movements being performed in parallel (e.g., substantially at the same point in time) in each partition or combinations of partitions. This can, for example, reduce time otherwise spent moving (e.g., copying, transferring, and/or transporting) data sequentially between various short and/or long digit line subarrays selectably coupled along a shared I/O line in an array of memory cells. The parallel nature of such data movement may allow for local movement of all or most of the data values in the subarrays of the partitions such that the movement may be several times faster. For example, the movement may be faster by a factor approximating the number of partitions (e.g., with four partitions, parallel movement of the data values in the subarrays of each partition may be performed in approximately one-fourth the time taken without using the partitions described herein).
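As a rough, illustrative model of this speed-up, the sketch below compares sequential row movement with movement divided across partitions that operate in parallel; the unit time per row and the assumption of perfectly parallel partitions are simplifications, not performance claims.

```python
def movement_time(rows_to_move: int, partitions: int, time_per_row: float = 1.0) -> float:
    """Approximate time for block data movement when each partition moves its
    share of the rows in parallel (one partition models fully sequential movement)."""
    rows_per_partition = -(-rows_to_move // partitions)  # ceiling division
    return rows_per_partition * time_per_row

sequential = movement_time(rows_to_move=128, partitions=1)  # 128.0 time units
parallel = movement_time(rows_to_move=128, partitions=4)    # 32.0 time units
print(sequential / parallel)                                # ~4x, approximating the partition count
```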
In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and structural changes may be made without departing from the scope of the present disclosure.
As used herein, designators such as “X”, “Y”, “N”, “M”, etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the” can include both singular and plural referents, unless the context clearly dictates otherwise. In addition, “a number of”, “at least one”, and “one or more” (e.g., a number of memory arrays) can refer to one or more memory arrays, whereas a “plurality of” is intended to refer to more than one of such things. Furthermore, the words “can” and “may” are used throughout this application in a permissive sense (i.e., having the potential to, being able to), not in a mandatory sense (i.e., must). The term “include,” and derivations thereof, means “including, but not limited to”. The terms “coupled” and “coupling” mean to be directly or indirectly connected physically or for access to and movement (transmission) of commands and data, as appropriate to the context. The terms “data” and “data values” are used interchangeably herein and can have the same meaning, as appropriate to the context.
As used herein, data movement is an inclusive term that includes, for instance, copying, transferring, and/or transporting data values from a source location to a destination location. Data can, for example, be moved from a long digit line (e.g., storage) subarray to a short digit line (e.g., cache) subarray via an I/O line shared by respective sensing component stripes of the long and short digit line subarrays, as described herein. Copying the data values can indicate that the data values stored (cached) in a sensing component stripe are copied and moved to another subarray via the shared I/O line and that the original data values stored in the row of the subarray may remain unchanged. Transferring the data values can indicate that the data values stored (cached) in the sensing component stripe are copied and moved to another subarray via the shared I/O line and that at least one of the original data values stored in the row of the subarray may be changed (e.g., by being erased and/or by a subsequent write operation, as described herein). Transporting the data values can be used to indicate the process by which the copied and/or transferred data values are moved (e.g., by the data values being placed on the shared I/O line from the source location and transported to the destination location).
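The distinction drawn above between copying and transferring can be pictured with the short, illustrative sketch below (the list-based row representation and function names are assumptions): copying leaves the source row unchanged, while transferring permits at least one original value in the source row to be changed afterward.

```python
from typing import List

def copy_row(src: List[int]) -> List[int]:
    """Copy: the destination receives the data values; the source row is left unchanged."""
    return list(src)

def transfer_row(src: List[int]) -> List[int]:
    """Transfer: the destination receives the data values, and the original values in the
    source row may then be changed (modeled here by erasing the row to all zeros)."""
    dst = list(src)
    src[:] = [0] * len(src)
    return dst

row = [1, 0, 1, 1]
copied = copy_row(row)      # row is still [1, 0, 1, 1]
moved = transfer_row(row)   # row is now  [0, 0, 0, 0]
print(copied, moved, row)
```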
The figures herein follow a numbering convention in which the first digit or digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar digits. For example, 108 may reference element “08” in
In previous approaches, data may be transferred from the array and sensing circuitry (e.g., via a bus comprising input/output (I/O) lines) to a processing resource such as a processor, microprocessor, or compute engine, which may comprise ALU circuitry and other functional unit circuitry configured to perform the appropriate operations. However, transferring data from a memory array and sensing circuitry to such processing resource(s) may involve significant power consumption. Even if the processing resource is located on a same chip as the memory array, significant power can be consumed in moving data out of the array to the compute circuitry, which can involve performing a sense line (which may be referred to herein as a digit line or data line) address access (e.g., firing of a column decode signal) in order to transfer data from sense lines onto I/O lines (e.g., local and global I/O lines), moving the data to the array periphery, and providing the data to the compute function.
Furthermore, the circuitry of the processing resource(s) (e.g., a compute engine) may not conform to pitch rules associated with a memory array. For example, the cells of a memory array may have a 4F2 or 6F2 cell size, where “F” is a feature size corresponding to the cells. As such, the devices (e.g., logic gates) associated with ALU circuitry of previous PIM systems may not be capable of being formed on pitch with the memory cells, which can affect chip size and memory density, for example.
For example, the sensing circuitry 150 described herein can be formed on a same pitch as a pair of complementary sense lines. As an example, a pair of complementary memory cells may have a cell size with a 6F2 pitch (e.g., 3F×2F). If the pitch of a pair of complementary sense lines for the complementary memory cells is 3F, then the sensing circuitry being on pitch indicates the sensing circuitry (e.g., a sense amplifier and corresponding compute component per respective pair of complementary sense lines) is formed to fit within the 3F pitch of the complementary sense lines.
Furthermore, the circuitry of the processing resource(s) (e.g., a compute engine, such as an ALU) of various prior systems may not conform to pitch rules associated with a memory array. For example, the memory cells of a memory array may have a 4F2 or 6F2 cell size. As such, the devices (e.g., logic gates) associated with ALU circuitry of previous systems may not be capable of being formed on pitch with the memory cells (e.g., on a same pitch as the sense lines), which can affect chip size and/or memory density, for example. In the context of some computing systems and subsystems (e.g., a central processing unit (CPU)), data may be processed in a location that is not on pitch and/or on chip with memory (e.g., memory cells in the array), as described herein. The data may be processed by a processing resource associated with a host, for instance, rather than on pitch with the memory.
In contrast, a number of embodiments of the present disclosure can include the sensing circuitry 150 (e.g., including sense amplifiers and/or compute components) being formed on pitch with the memory cells of the array. The sensing circuitry 150 can be configured for (e.g., capable of) performing compute functions (e.g., logical operations).
PIM capable device operations can use bit vector based operations. As used herein, the term “bit vector” is intended to mean a number of bits on a bit vector memory device (e.g., a PIM device) stored in a row of an array of memory cells and/or in sensing circuitry. Thus, as used herein a “bit vector operation” is intended to mean an operation that is performed on a bit vector that is a portion of virtual address space and/or physical address space (e.g., used by a PIM device). In some embodiments, the bit vector may be a physically contiguous number of bits on the bit vector memory device stored physically contiguously in a row and/or in the sensing circuitry such that the bit vector operation is performed on a bit vector that is a contiguous portion of the virtual address space and/or physical address space. For example, a row of virtual address space in the PIM device may have a bit length of 16K bits (e.g., corresponding to 16K complementary pairs of memory cells in a DRAM configuration). Sensing circuitry 150, as described herein, for such a 16K bit row may include 16K corresponding processing elements (e.g., compute components, as described herein) formed on pitch with the sense lines selectably coupled to corresponding memory cells in the 16K bit row. A compute component in the PIM device may operate as a one bit processing element on a single bit of the bit vector of the row of memory cells sensed by the sensing circuitry 150 (e.g., sensed by and/or stored in a sense amplifier paired with the compute component, as described herein).
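By way of illustration, the sketch below models the bit vector example above: each compute component acts as a one-bit processing element on its own column position, so a Boolean operation over the row is applied element-wise across the full 16K-bit vector. The list representation and the choice of AND are illustrative assumptions.

```python
ROW_WIDTH = 16 * 1024  # a 16K bit row: one bit per column, one compute component per bit

def bit_vector_and(row_a, row_b):
    """Model of a bit vector operation: every one-bit processing element operates on
    its own bit position, shown here as an element-wise AND across the whole row."""
    return [a & b for a, b in zip(row_a, row_b)]

row_a = [1] * ROW_WIDTH                   # a row-wide bit vector sensed into the compute components
row_b = [i % 2 for i in range(ROW_WIDTH)]
result = bit_vector_and(row_a, row_b)
print(result[:8])                         # [0, 1, 0, 1, 0, 1, 0, 1]
```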
A number of embodiments of the present disclosure include sensing circuitry formed on pitch with sense lines of a corresponding array of memory cells. The sensing circuitry may be capable of performing data sensing and/or compute functions (e.g., depending on whether the sensing circuitry is associated with a short digit line or a long digit line subarray) and storage of data local to the array of memory cells.
In order to appreciate the improved data movement (e.g., copying, transferring, and/or transporting) techniques described herein, a discussion of an apparatus for implementing such techniques (e.g., a memory device having PIM capabilities and an associated host) follows. According to various embodiments, program instructions (e.g., PIM commands) involving a memory device having PIM capabilities can distribute implementation of the PIM commands and data over multiple sensing circuitries that can implement operations and can move and store the PIM commands and data within the memory array (e.g., without having to transfer such back and forth over an address and control (A/C) and data bus between a host and the memory device). Thus, data for a memory device having PIM capabilities can be accessed and used in less time and using less power. For example, a time and power advantage can be realized by increasing the speed, rate, and/or efficiency of data being moved around and stored in a computing system in order to process requested memory array operations (e.g., reads, writes, logical operations, etc.).
The system 100 illustrated in
For clarity, description of the system 100 has been simplified to focus on features with particular relevance to the present disclosure. For example, in various embodiments, the memory array 130 can be a DRAM array, SRAM array, STT RAM array, PCRAM array, TRAM array, RRAM array, NAND flash array, and/or NOR flash array, for instance. The memory array 130 can include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and columns coupled by sense lines (which may be referred to herein as digit lines or data lines). Although a single memory array 130 is shown in
The memory device 120 can include address circuitry 142 to latch address signals provided over a data bus 156 (e.g., an I/O bus from the host 110) by I/O circuitry 144 (e.g., provided to external ALU circuitry and to DRAM data lines (DQs) via local I/O lines and global I/O lines). As used herein, DRAM DQs can enable input of data to and output of data from a bank (e.g., from and to the controller 140 and/or host 110) via a bus (e.g., data bus 156). During a write operation, voltage and/or current variations, for instance, can be applied to a DQ (e.g., a pin). These variations can be translated into an appropriate signal and stored in a selected memory cell. During a read operation, a data value read from a selected memory cell can appear at the DQ once access is complete and the output is enabled. At other times, DQs can be in a state such that the DQs do not source or sink current and do not present a signal to the system. This also may reduce DQ contention when two or more devices (e.g., banks) share the data bus, as described herein.
Status and exception information can be provided from the controller 140 on the memory device 120 to a channel controller 143, for example, through a high speed interface (HSI) out-of-band bus 157, which in turn can be provided from the channel controller 143 to the host 110. The channel controller 143 can include a logic component 160 to allocate a plurality of locations (e.g., controllers for subarrays) in the arrays of each respective bank to store bank commands, application instructions (e.g., as sequences of operations), and arguments (PIM commands) for the various banks associated with operation of each of a plurality of memory devices (e.g., 120-0, 120-1, . . . , 120-N). The channel controller 143 can dispatch commands (e.g., PIM commands) to the plurality of memory devices 120-1, . . . , 120-N to store those program instructions within a given bank of a memory device.
Address signals are received through address circuitry 142 and decoded by a row decoder 146 and a column decoder 152 to access the memory array 130. Data can be sensed (read) from memory array 130 by sensing voltage and/or current changes on sense lines (digit lines) using a number of sense amplifiers, as described herein, of the sensing circuitry 150. A sense amplifier can read and latch a page (e.g., a row) of data from the memory array 130. Additional compute components, as described herein, can be coupled to the sense amplifiers and can be used in combination with the sense amplifiers to sense, store (e.g., cache and buffer), perform compute functions (e.g., operations), and/or move data. The I/O circuitry 144 can be used for bi-directional data communication with host 110 over the data bus 156 (e.g., a 64 bit wide data bus). The write circuitry 148 can be used to write data to the memory array 130. The function of the column decoder 152 circuitry, however, is distinguishable from that of the column select circuitry 358 described herein that is configured to implement data movement operations with respect to, for example, particular columns of a subarray and corresponding operation units in an operations stripe.
Controller 140 (e.g., bank control logic and sequencer) can decode signals (e.g., commands) provided by control bus 154 from the host 110. These signals can include chip enable signals, write enable signals, and address latch signals that can be used to control operations performed on the memory array 130, including data sense, data store, data movement, data write, and data erase operations, among other operations. In various embodiments, the controller 140 can be responsible for executing instructions from the host 110 and accessing the memory array 130. The controller 140 can be a state machine, a sequencer, or some other type of controller. The controller 140 can control shifting data (e.g., right or left) in a row of an array (e.g., memory array 130).
Examples of the sensing circuitry 150 are described further below.
In a number of embodiments, the sensing circuitry 150 can be used to perform operations using data stored in memory array 130 as inputs and to participate in movement of the data for transfer, writing, logic, and storage operations to a different location in the memory array 130 without transferring the data via a sense line address access (e.g., without firing a column decode signal). As such, various compute functions can be performed using, and within, sensing circuitry 150 rather than (or in association with) being performed by processing resources external to the sensing circuitry 150 (e.g., by a processor associated with host 110 and other processing circuitry, such as ALU circuitry, located on device 120, such as on controller 140 or elsewhere).
In various previous approaches, data associated with an operand, for instance, would be read from memory via sensing circuitry and provided to external ALU circuitry via I/O lines (e.g., via local I/O lines and global I/O lines). The external ALU circuitry could include a number of registers and would perform compute functions using the operands, and the result would be transferred back to the array via the I/O lines.
In contrast, in a number of embodiments of the present disclosure, sensing circuitry 150 is configured to perform operations on data stored in memory array 130 and store the result back to the memory array 130 without enabling a local I/O line and global I/O line coupled to the sensing circuitry 150. The sensing circuitry 150 can be formed on pitch with sense lines for the memory cells of the array. Additional peripheral sense amplifiers and/or logic 170 (e.g., subarray controllers that each execute instructions for performing a respective operation) can be coupled to the sensing circuitry 150. The sensing circuitry 150 and the peripheral sense amplifier and logic 170 can cooperate in performing operations, according to some embodiments described herein.
As such, in a number of embodiments, circuitry external to memory array 130 and sensing circuitry 150 is not needed to perform compute functions, as the sensing circuitry 150 can perform the appropriate operations in order to perform such compute functions in a sequence of instructions without the use of an external processing resource. Therefore, the sensing circuitry 150 may be used to complement or to replace, at least to some extent, such an external processing resource (or at least reduce the bandwidth consumption of transfer of data to and/or from such an external processing resource).
In a number of embodiments, the sensing circuitry 150 may be used to perform operations (e.g., to execute a sequence of instructions) in addition to operations performed by an external processing resource (e.g., host 110). For instance, either the host 110 or the sensing circuitry 150 may be limited to performing only certain operations and/or a certain number of operations.
Enabling a local I/O line and global I/O line can include enabling (e.g., turning on, activating) a transistor having a gate coupled to a decode signal (e.g., a column decode signal) and a source/drain coupled to the I/O line. However, embodiments are not limited to not enabling a local I/O line and global I/O line. For instance, in a number of embodiments, the sensing circuitry 150 can be used to perform operations without enabling column decode lines of the array. However, the local I/O line(s) and global I/O line(s) may be enabled in order to transfer a result to a suitable location other than back to the memory array 130 (e.g., to an external register).
The short and long digit line subarrays are respectively separated by amplification regions configured to be coupled to a data path (e.g., the shared I/O line described herein). As such, the short digit line subarrays 125-0 and 125-1 and the long digit line subarrays 126-0, . . . , 126-N−1 can each have amplification regions 124-0, 124-1, . . . , 124-N−1 that correspond to sensing component stripe 0, sensing component stripe 1, . . . , and sensing component stripe N−1, respectively.
Each column 122 can be configured to be coupled to sensing circuitry 150, as described in connection with
In some embodiments, a compute component can be coupled to each sense amplifier within the sensing circuitry 150 in each respective sensing component stripe coupled to a short digit line subarray (e.g., in sensing component stripes 124-0 and 124-1 coupled respectively to the short digit line subarrays 125-0 and 125-1). However, embodiments are not so limited. For example, in some embodiments, there may not be a 1:1 correlation between the number of sense amplifiers and compute components (e.g., there may be more than one sense amplifier per compute component or more than one compute component per sense amplifier, which may vary between subarrays, partitions, banks, etc.).
Each of the short digit line subarrays 125-0 and 125-1 can include a plurality of rows 119 shown vertically as Y (e.g., each subarray may include 512 rows in an example DRAM bank). Each of the long digit line subarrays 126-0, . . . , 126-N−1 can include a plurality of rows 118 shown vertically as Z (e.g., each subarray may include 1024 rows in an example DRAM bank). Example embodiments are not limited to the example horizontal and vertical orientation of columns and/or numbers of rows described herein.
Implementations of PIM DRAM architecture may perform processing at the sense amplifier and compute component level (e.g., in a sensing component stripe). Implementations of PIM DRAM architecture may allow a finite number of memory cells to be connected to each sense amplifier (e.g., around 1K or 1024 memory cells). A sensing component stripe may include from around 8K to around 16K sense amplifiers. For example, a sensing component stripe for a long digit line subarray may include 16K sense amplifiers and may be configured to couple to an array of 1K rows and around 16K columns with a memory cell at each intersection of the rows and columns so as to yield 1K (1024) memory cells per column. By comparison, a sensing component stripe for a short digit line subarray may include 16K sense amplifiers and compute components and may be configured to couple to an array of, for example, at most half of the 1K rows of the long digit line subarray so as to yield 512 memory cells per column. In some embodiments, the number of sense amplifiers and/or compute components in respective sensing component stripes (e.g., corresponding to a number of memory cells in a row) can vary between at least some of the short digit line subarrays in comparison to the long digit line subarrays.
The numbers of rows, columns, and memory cells per column and/or the ratio of the numbers of memory cells between columns in the long and short digit line subarrays just presented are provided by way of example and not by way of limitation. For example, the long digit line subarrays may have columns that each have a respective 1024 memory cells and the short digit line subarrays may have columns that each have either a respective 512, 256, or 128 memory cells, among other possible numbers that are less than 512. The long digit line subarrays may, in various embodiments, have less than or more than 1024 memory cells per column, with the number of memory cells per column in the short digit line subarrays configured as just described. Alternatively or in addition, cache subarrays may be formed with a digit line length less than, equal to, or greater than the digit line length of the long digit line subarrays (storage subarrays) such that the cache subarrays are not the short digit line subarrays just described. For example, the configuration of the digit lines and/or the memory cells of the cache subarrays may provide faster computation than the configuration of the storage subarrays (e.g., 2T2C instead of 1T1C, SRAM instead of DRAM, etc.). Accordingly, the number of rows of memory cells in a cache subarray and/or the corresponding number of memory cells per digit line may be less than, equal to, or greater than the number of rows of memory cells in a storage subarray and/or the corresponding number of memory cells per digit line of the storage subarrays.
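The example geometries discussed in the two preceding paragraphs can be summarized with the short sketch below; the figures used (16K sense amplifiers per stripe, 1024 cells per storage column, and 512, 256, or 128 cells per cache column) are the non-limiting example values from the text, not required values.

```python
SENSE_AMPS_PER_STRIPE = 16 * 1024                 # example: one sense amplifier per column
STORAGE_CELLS_PER_COLUMN = 1024                   # long digit line (storage) subarray example
CACHE_CELLS_PER_COLUMN_OPTIONS = [512, 256, 128]  # short digit line (cache) subarray examples

print(f"storage subarray: {SENSE_AMPS_PER_STRIPE} columns x "
      f"{STORAGE_CELLS_PER_COLUMN} cells per column")
for cache_cells in CACHE_CELLS_PER_COLUMN_OPTIONS:
    ratio = STORAGE_CELLS_PER_COLUMN // cache_cells
    print(f"cache subarray: {SENSE_AMPS_PER_STRIPE} columns x {cache_cells} cells per column "
          f"(digit line roughly 1/{ratio} the storage length)")
```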
An isolation stripe (e.g., isolation stripe 172) can be associated with a partition 128 of a plurality of subarrays. For example, isolation stripe 0 (172) is shown by way of example to be adjacent sensing component stripe 124-N−1, which is coupled to long digit line subarray 126-N−1. In some embodiments, long digit line subarray 126-N−1 may be subarray 32 of 128 subarrays and may be a last subarray in a first direction in a first partition of four partitions of subarrays, as described herein.
As such, the plurality of subarrays 125-0 and 125-1 and 126-0, . . . , 126-N−1, the plurality of sensing component stripes 124-0, 124-1, . . . , 124-N−1, and the isolation stripe 172 may be considered as a single partition 128. In some embodiments, however, depending upon the direction of the data movement, a single isolation stripe can be shared by two adjacent partitions.
The plurality of subarrays shown at 125-0, 125-1, and 125-3 for short digit line subarrays and 126-0, 126-1, . . . , 126-N−1 for long digit line subarrays can each be coupled to and/or separated by sensing component stripes 124-0, 124-1, . . . , 124-N−1 that can include sensing circuitry 150 and logic circuitry 170. As noted, the sensing component stripes 124-0, 124-1, . . . , 124-N−1 each include sensing circuitry 150, having at least sense amplifiers configured to couple to each column of memory cells in each subarray, as shown in
As described herein, an I/O line can be selectably shared by a plurality of partitions, subarrays, rows, and/or particular columns of memory cells via the sensing component stripe coupled to each of the subarrays. For example, the sense amplifier and/or compute component of each of a selectable subset of a number of columns (e.g., eight column subsets of a total number of columns) can be selectably coupled to each of the plurality of shared I/O lines for data values stored (cached) in the sensing component stripe to be moved (e.g., transferred, transported, and/or fed) to each of the plurality of shared I/O lines. Because the singular forms “a”, “an”, and “the” can include both singular and plural referents herein, “a shared I/O line” can be used to refer to “a plurality of shared I/O lines”, unless the context clearly dictates otherwise. Moreover, “shared I/O lines” is an abbreviation of “plurality of shared I/O lines”.
In some embodiments, the controller 140 and/or the cache controller 171 may be configured to direct (e.g., provide) instructions (e.g., commands) and data to a plurality of locations of a particular bank 121 in the memory array 130 and to the sensing component stripes 124-0, 124-1, . . . , 124-N−1 via the shared I/O line 155 coupled to control and data registers 151. For example, the control and data registers 151 can relay the instructions to be executed by the sense amplifiers and/or the compute components of the sensing circuitry 150 in the sensing component stripes 124-0, 124-1, . . . , 124-N−1.
Embodiments, however, are not so limited. For example, in various embodiments, there can be any number of short digit line subarrays 125 and any number of long digit line subarrays 126 in the bank section 123, which can be separated by isolation stripes into any number of partitions (e.g., as long as there is a combination of at least one short digit line subarray with at least one long digit line subarray in the various partitions). In various embodiments, the partitions can each include a same number or a different number of short and/or long digit line subarrays, sensing component stripes, etc., depending on the implementation.
A memory cell can include a storage element (e.g., capacitor) and an access device (e.g., transistor). For instance, a first memory cell can include transistor 202-1 and capacitor 203-1, and a second memory cell can include transistor 202-2 and capacitor 203-2, etc. In this embodiment, the memory array 230 is a DRAM array of 1T1C (one transistor one capacitor) memory cells, although other embodiments of configurations can be used (e.g., 2T2C with two transistors and two capacitors per memory cell). In a number of embodiments, the memory cells may be destructive read memory cells (e.g., reading the data stored in the cell destroys the data such that the data originally stored in the cell may be refreshed after being read).
The cells of the memory array 230 can be arranged in rows coupled by access (word) lines 204-X (Row X), 204-Y (Row Y), etc., and columns coupled by pairs of complementary sense lines (e.g., digit lines DIGIT(D) and DIGIT(D)_ shown in
Although rows and columns are illustrated as orthogonally oriented in a plane, embodiments are not so limited. For example, the rows and columns may be oriented relative to each other in any feasible three-dimensional configuration. The rows and columns may be oriented at any angle relative to each other, may be oriented in a substantially horizontal plane or a substantially vertical plane, and/or may be oriented in a folded topology, among other possible three-dimensional configurations.
Memory cells can be coupled to different digit lines and word lines. For example, a first source/drain region of a transistor 202-1 can be coupled to digit line 205-1 (D), a second source/drain region of transistor 202-1 can be coupled to capacitor 203-1, and a gate of a transistor 202-1 can be coupled to word line 204-Y. A first source/drain region of a transistor 202-2 can be coupled to digit line 205-2 (D)_, a second source/drain region of transistor 202-2 can be coupled to capacitor 203-2, and a gate of a transistor 202-2 can be coupled to word line 204-X. A cell plate, as shown in
The memory array 230 is configured to couple to sensing circuitry 250 in accordance with a number of embodiments of the present disclosure. In this embodiment, the sensing circuitry 250 comprises a sense amplifier 206 and a compute component 231 corresponding to respective columns of memory cells (e.g., coupled to respective pairs of complementary digit lines in a short digit line subarray). The sense amplifier 206 can be coupled to the pair of complementary digit lines 205-1 and 205-2. The compute component 231 can be coupled to the sense amplifier 206 via pass gates 207-1 and 207-2. The gates of the pass gates 207-1 and 207-2 can be coupled to operation selection logic 213.
The operation selection logic 213 can be configured to include pass gate logic for controlling pass gates that couple the pair of complementary digit lines un-transposed between the sense amplifier 206 and the compute component 231 and swap gate logic for controlling swap gates that couple the pair of complementary digit lines transposed between the sense amplifier 206 and the compute component 231. The operation selection logic 213 can also be coupled to the pair of complementary digit lines 205-1 and 205-2. The operation selection logic 213 can be configured to control continuity of pass gates 207-1 and 207-2 based on a selected operation.
The sense amplifier 206 can be operated to determine a data value (e.g., logic state) stored in a selected memory cell. The sense amplifier 206 can comprise a cross coupled latch, which can be referred to herein as a primary latch.
In operation, when a memory cell is being sensed (e.g., read), the voltage on one of the digit lines 205-1 (D) or 205-2 (D)_ will be slightly greater than the voltage on the other one of digit lines 205-1 (D) or 205-2 (D)_. An ACT signal and an RNL* signal, for example, can be driven low to enable (e.g., fire) the sense amplifier 206. The digit line 205-1 (D) or 205-2 (D)_ having the lower voltage will turn on one of the PMOS transistors 229-1 or 229-2 to a greater extent than the other of the PMOS transistors 229-1 or 229-2, thereby driving high the digit line 205-1 (D) or 205-2 (D)_ having the higher voltage to a greater extent than the other digit line 205-1 (D) or 205-2 (D)_ is driven high.
Similarly, the digit line 205-1 (D) or 205-2 (D)_ having the higher voltage will turn on one of the NMOS transistors 227-1 or 227-2 to a greater extent than the other of the NMOS transistors 227-1 or 227-2, thereby driving low the digit line 205-1 (D) or 205-2 (D)_ having the lower voltage to a greater extent than the other digit line 205-1 (D) or 205-2 (D)_ is driven low. As a result, after a short delay, the digit line 205-1 (D) or 205-2 (D)_ having the slightly greater voltage is driven to the voltage of the supply voltage Vcc through a source transistor, and the other digit line 205-1 (D) or 205-2 (D)_ is driven to the voltage of the reference voltage (e.g., ground) through a sink transistor. Therefore, the cross coupled NMOS transistors 227-1 and 227-2 and PMOS transistors 229-1 and 229-2 serve as a sense amplifier pair, which amplify the differential voltage on the digit lines 205-1 (D) and 205-2 (D)_ and operate to latch a data value sensed from the selected memory cell. As used herein, the cross coupled latch of sense amplifier 206 may be referred to as the primary latch 215.
Embodiments are not limited to the sense amplifier 206 configuration illustrated in
The sense amplifier 206 can, in conjunction with the compute component 231, be operated to perform various operations using data from an array as input. In a number of embodiments, the result of an operation can be stored back to the array without transferring the data via a digit line address access (e.g., without firing a column decode signal such that data is transferred to circuitry external from the array and sensing circuitry via local I/O lines). As such, a number of embodiments of the present disclosure can enable performing operations and compute functions associated therewith using less power than various previous approaches. Additionally, since a number of embodiments reduce or eliminate transferring data across local and global I/O lines in order to perform the operations and associated compute functions (e.g., transferring data between memory and a discrete processor), a number of embodiments can enable an increased (e.g., faster) processing capability as compared to previous approaches.
The sense amplifier 206 can further include equilibration circuitry 214, which can be configured to equilibrate the digit lines 205-1 (D) and 205-2 (D)_. In this example, the equilibration circuitry 214 comprises a transistor 224 coupled between digit lines 205-1 (D) and 205-2 (D)_. The equilibration circuitry 214 also comprises transistors 225-1 and 225-2 each having a first source/drain region coupled to an equilibration voltage (e.g., VDD/2), where VDD is a supply voltage associated with the array. A second source/drain region of transistor 225-1 can be coupled to digit line 205-1 (D), and a second source/drain region of transistor 225-2 can be coupled to digit line 205-2 (D)_. Gates of transistors 224, 225-1, and 225-2 can be coupled together, and to an equilibration (EQ) control signal line 234. As such, activating EQ enables the transistors 224, 225-1, and 225-2, which effectively shorts digit lines 205-1 (D) and 205-2 (D)_ together and to the equilibration voltage (e.g., VDD/2).
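The sensing and equilibration behavior described above can be approximated with the short behavioral sketch below: equilibration places both complementary digit lines at VDD/2, an accessed cell perturbs one line slightly, and firing the sense amplifier drives the higher line to the supply voltage and the other to ground, latching a one-bit data value. The voltage levels and function names are illustrative assumptions and are not circuit-accurate.

```python
def equilibrate(vdd: float = 1.2):
    """Equilibration shorts the complementary digit lines together and to VDD/2."""
    return vdd / 2, vdd / 2

def fire_sense_amplifier(v_digit: float, v_digit_bar: float, vcc: float = 1.2):
    """Amplify the small differential: the digit line with the slightly greater voltage is
    driven to the supply voltage and the other to ground, latching the sensed data value
    (1 if DIGIT(D) is the higher line, 0 otherwise)."""
    if v_digit > v_digit_bar:
        return vcc, 0.0, 1
    return 0.0, vcc, 0

d, d_bar = equilibrate()        # both lines at VDD/2 before the row is opened
d += 0.05                       # accessing a cell storing a 1 nudges DIGIT(D) slightly higher
d, d_bar, bit = fire_sense_amplifier(d, d_bar)
print(d, d_bar, bit)            # 1.2 0.0 1
```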
As described further below, in a number of embodiments, the sensing circuitry 250 (e.g., sense amplifier 206 and compute component 231) can be operated to perform a selected operation and initially store the result in one of the sense amplifier 206 or the compute component 231 without transferring data from the sensing circuitry via a local or global I/O line (e.g., without performing a sense line address access via activation of a column decode signal, for instance).
Performance of various types of operations can be implemented. For example, Boolean operations (e.g., Boolean logical functions involving data values) are used in many higher level applications. Consequently, speed and power efficiencies that can be realized with improved performance of the operations may provide improved speed and/or power efficiencies for these applications.
In various embodiments, connection circuitry 232-1 can, for example, be coupled at 217-1 and connection circuitry 232-2 can be coupled at 217-2 to the primary latch 215 for movement of sensed and/or stored data values. The sensed and/or stored data values can be moved to a selected memory cell in a particular row and/or column of another subarray via a shared I/O line, as described herein, and/or directly to the selected memory cell in the particular row and/or column of the other subarray via connection circuitry 232-1 and 232-2.
In various embodiments, connection circuitry (e.g., 232-1 and 232-2) can be configured to connect sensing circuitry coupled to a particular column in a first subarray to a number of rows in a corresponding column in a second subarray (e.g., which can be an adjacent subarray and/or separated by a number of other subarrays). As such, the connection circuitry can be configured to move (e.g., copy, transfer, and/or transport) a data value (e.g., from a selected row and the particular column) to a selected row and the corresponding column in the second subarray (e.g., the data value can be copied to a selected memory cell therein) for performance of an operation in a short digit line subarray and/or for storage of the data value in a long digit line subarray. In some embodiments, the movement of the data value can be directed by the cache controller 171 and/or controller 140 executing a set of instructions to store the data value in the sensing circuitry 250 (e.g., the sense amplifier 206 and/or the coupled compute component 231) and the cache controller 171 can select a particular row and/or a particular memory cell intersected by the corresponding column in the second subarray to receive the data value by movement (e.g., copying, transferring, and/or transporting) of the data value.
Data values present on the pair of complementary digit lines 305-1 and 305-2 can be loaded into the compute component 331-0 as described in connection with
The sense amplifiers 306-0, 306-1, . . . , 306-7 in
Cache controller 171 and/or controller 140 can be coupled to column select circuitry 358 to control select lines (e.g., select line 0) to access data values stored in the sense amplifiers, compute components and/or present on the pair of complementary digit lines (e.g., 305-1 and 305-2 when selection transistors 359-1 and 359-2 are activated via signals from select line 0). Activating the selection transistors 359-1 and 359-2 (e.g., as directed by the controller 140 and/or cache controller 171) enables coupling of sense amplifier 306-0, compute component 331-0, and/or complementary digit lines 305-1 and 305-2 of column 0 (322-0) to move data values on digit line 0 and digit line 0* to shared I/O line 355. For example, the moved data values may be data values from a particular row 319 stored (cached) in sense amplifier 306-0 and/or compute component 331-0 of the sensing component stripe for a short digit line subarray. Data values from each of columns 0 through 7 can similarly be selected by cache controller 171 and/or controller 140 activating the appropriate selection transistors.
Moreover, enabling (e.g., activating) the selection transistors (e.g., selection transistors 359-1 and 359-2) can enable a particular sense amplifier and/or compute component (e.g., 306-0 and/or 331-0, respectively) to be coupled with a shared I/O line 355 such that data values stored by an amplifier and/or compute component can be moved to (e.g., placed on, transferred, and/or transported to) the shared I/O line 355. In some embodiments, one column at a time is selected (e.g., column 322-0) to be coupled to a particular shared I/O line 355 to move (e.g., copy, transfer, and/or transport) the stored data values.
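A minimal sketch of the column-select behavior just described, in which one column of an eight-column subset at a time is coupled to the shared I/O line so its sense amplifier's stored (cached) data value can be moved; the class name, the select-line indexing, and the data representation are assumptions made for illustration.

```python
from typing import List

class SensingComponentStripe:
    """Eight sense amplifiers (one example subset of columns) that can be selectably
    coupled, one column at a time, to a single shared I/O line."""

    def __init__(self, cached_bits: List[int]):
        assert len(cached_bits) == 8   # columns 0..7 share one I/O line in this example
        self.cached_bits = cached_bits

    def drive_shared_io(self, select_line: int) -> int:
        """Activating the selection transistors for one column places that column's
        stored (cached) data value on the shared I/O line."""
        return self.cached_bits[select_line]

stripe = SensingComponentStripe(cached_bits=[1, 0, 0, 1, 1, 0, 1, 0])
moved = [stripe.drive_shared_io(col) for col in range(8)]  # one column selected at a time
print(moved)   # [1, 0, 0, 1, 1, 0, 1, 0]
```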
As described herein, a memory device (e.g., 120 in
The bank 121 can include a plurality of partitions (e.g., 128-0, 128-1, . . . , 128-M−1 in
In various embodiments, the sensing circuitry (e.g., 150 in
In some embodiments, the plurality of short digit line subarrays 125 can each be configured to include a same number of a plurality of rows (e.g., 119 in
The memory device 120 can include a shared I/O line (e.g., 155 in
As described herein, the array of memory cells can include an implementation of DRAM memory cells where the cache controller 171 is configured, in response to a command, to move data from the source location to the destination location via a shared I/O line. The source location can be in a first bank and the destination location can be in a second bank in the memory device and/or the source location can be in a first subarray of one bank in the memory device and the destination location can be in a second subarray of the same bank. The first subarray and the second subarray can be in the same partition of the bank or the subarrays can be in different partitions of the bank.
As described herein, a memory device 120 can include a plurality of subarrays of memory cells, where the plurality of subarrays includes a first subset (e.g., short digit line subarrays 125 in
The memory device 120 also can include a cache controller (e.g., 171 in
In some embodiments, the sensing circuitry 150 can be coupled to a first subarray 125 in the first subset via a column 122 of the memory cells, the sensing circuitry including the sense amplifier 206 and the compute component 231 coupled to the column. A number of memory cells in a column of the first subarray 125 in the first subset may, in some embodiments, be at most half of a number of memory cells in a column of a first subarray 126 in the second subset. Alternatively or in addition, a first physical length of a sense line (e.g., of a pair of complementary sense lines) of the first subarray 125 in the first subset may, in some embodiments, be at most half of a second physical length of a sense line of a first subarray 126 in the second subset. Alternatively or in addition, a first physical length of a column of the first subarray 125 in the first subset may, in some embodiments, be at most half of a second physical length of a column of a first subarray 126 in the second subset. The comparative numbers of memory cells in and/or physical lengths of the columns of the short digit line subarrays versus the long digit line subarrays are represented by the span of the respective rows 119 and 118 in
The memory device 120 can include sensing circuitry 150 coupled to the second subset of the subarrays (e.g., the long digit line subarrays 126). In some embodiments, the sensing circuitry coupled to the second subset may include a sense amplifier but no compute component. Although the sensing circuitry for the second subset may, in some embodiments, include both the sense amplifier and the compute component, to distinguish the embodiments in which the compute component is not included, the sensing circuitry for the second subset is termed the second sensing circuitry and the sensing circuitry for the first subset, which includes the compute component, is termed the first sensing circuitry. As such, the second subset of subarrays may be used to store a data value on which an operation may be performed by the first sensing circuitry, the data value being sensed by the second sensing circuitry prior to the first movement of the data value to the first sensing circuitry of the first subset of subarrays.
The first sensing circuitry and the second sensing circuitry of the memory device can be formed on pitch with sense lines of the respective first and second subsets of the plurality of subarrays (e.g., as shown in
The second subset of the subarrays (e.g., the memory cells of the long digit line subarrays 126) may be used to store the data value on which the operation may be performed by the first sensing circuitry prior to the first movement of the data value to the first subset of the subarrays. In addition, the second subset of the subarrays (e.g., the same or different memory cells of the same or different long digit line subarrays 126) may be used to store the data value on which the operation has been performed by the first sensing circuitry subsequent to the second movement of the data value.
The cache controller 171 described herein can be configured to direct the first movement of the data value from a selected row in a first subarray in the second subset to a selected row in a first subarray in the first subset and a second movement of the data value on which the operation has been performed from the selected row in the first subarray in the first subset to the selected row in the first subarray in the second subset. For example, in some embodiments, the data value can be moved from a selected row (or a selected memory cell) of the first subarray in the second subset to a selected row (or a selected memory cell) of the first subarray in the first subset, an operation can be performed on the data value by the sensing circuitry of the first subarray in the first subset, and then the changed data value can be moved back to the same selected row (or the same selected memory cell) of the first subarray in the second subset after the operation has been performed thereon.
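To illustrate the round trip just described in purely schematic terms, the following Python sketch models a cache controller moving a row from a storage (long digit line) subarray into a cache (short digit line) subarray, operating on it there, and moving the result back. All class, method, and parameter names (Subarray, CacheControllerModel, round_trip, etc.) are hypothetical conveniences of the sketch and do not correspond to any circuit interface disclosed herein.

```python
# Hypothetical software model (not circuitry) of the movement/operation/
# movement sequence directed by the cache controller.

class Subarray:
    def __init__(self, num_rows, num_cols):
        # Rows of single-bit memory cells, modeled as lists of 0/1 values.
        self.rows = [[0] * num_cols for _ in range(num_rows)]

class CacheControllerModel:
    def __init__(self, storage, cache):
        self.storage = storage  # long digit line subarray (second subset)
        self.cache = cache      # short digit line subarray (first subset)

    def move(self, src, src_row, dst, dst_row):
        # Models movement of a row of data values via the shared I/O line.
        dst.rows[dst_row] = list(src.rows[src_row])

    def operate_in_cache(self, cache_row, op):
        # Models an operation performed by the compute components of the
        # sensing circuitry coupled to the cache (short digit line) subarray.
        self.cache.rows[cache_row] = [op(bit) for bit in self.cache.rows[cache_row]]

    def round_trip(self, storage_row, cache_row, op):
        self.move(self.storage, storage_row, self.cache, cache_row)   # first movement
        self.operate_in_cache(cache_row, op)                          # operation
        self.move(self.cache, cache_row, self.storage, storage_row)   # second movement

storage = Subarray(num_rows=8, num_cols=16)   # toy dimensions for illustration
cache = Subarray(num_rows=4, num_cols=16)
CacheControllerModel(storage, cache).round_trip(storage_row=3, cache_row=0,
                                                op=lambda bit: bit ^ 1)  # e.g., invert
```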
The memory device 120 can include a controller (e.g., 140 in
The memory device 120 can, in some embodiments, include connection circuitry configured to connect sensing circuitry (e.g., as shown at 232-1 and 232-2 and described in connection with
Movement of a data value (e.g., via a shared I/O line and/or connection circuitry) can be directed by the cache controller 171 executing a set of instructions for movement of the data value from the first subarray in the second subset (e.g., the long digit line subarrays 126) to the selected row, or rows, and the corresponding column in the first subarray in the first subset. The selected row, or rows, and the corresponding column in the first subarray in the first subset can be configured to receive (e.g., cache) the data value. The cache controller 171 can then direct the performance of the operation on the data value in the sensing circuitry of the first subarray in the first subset.
The cache controller 171 can be further configured to direct movement (e.g., via the shared I/O line and/or the connection circuitry) of the data value on which the operation has been performed from the selected row, or rows, and the corresponding column in the first subarray in the first subset (e.g., the short digit line subarrays 125) to a number of rows in the corresponding column in the first subarray in the second subset (e.g., the long digit line subarrays 126). In various embodiments, the rows, columns, and/or subarrays to which the data values are moved after the operation(s) has been performed thereon may differ from the rows, columns, and/or subarrays from which the data values were sent from the long digit line subarray to the short digit line subarray. For example, the data values may be moved to different rows, columns, and/or subarrays in one or more long digit line subarrays and/or to different rows, columns, and/or subarrays in one or more short digit line subarrays.
In some embodiments, when, for example, a controller executing a PIM command in a short digit line (e.g., cache) subarray attempts to access a row that is not cached in that short digit line subarray, the cache controller may move (e.g., copy, transfer, and/or transport) the data from the appropriate long digit line (e.g., storage) subarray into a number of rows of the cache subarray. When no rows are free and/or available for movement of the data values into the cache subarray, a row or rows of data values may be at least temporarily moved from the cache subarray (e.g., saved in another location) before loading (e.g., writing) the newly moved row or rows of data values. This may also involve moving (e.g., copying, transferring, and/or transporting) the data values from the short digit line (e.g., cache) subarray into a long digit line (e.g., storage) subarray. In some embodiments, a data value may be directly retrieved from a long digit line subarray (e.g., when no operation is to be performed on the data value beforehand). Alternatively or in addition, a memory request to a row cached in the short digit line subarray may trigger a writeback (e.g., after an operation has been performed) to the long digit line subarray, from which the data value may subsequently be retrieved.
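A minimal sketch, assuming a software model of the miss handling just described: if the requested storage row is not cached and no cache row is free, a cached row is first written back to storage before the requested row is loaded. The names and the simple eviction choice are illustrative assumptions, not a description of the cache controller's actual policy.

```python
# Hypothetical model of cache-miss handling with writeback, keyed by
# (storage subarray, storage row). Eviction picks an arbitrary cached row.

class CacheSubarrayModel:
    def __init__(self, num_cache_rows):
        self.tags = {}                     # cache_row -> (subarray_id, storage_row)
        self.data = {}                     # cache_row -> cached row contents
        self.free = list(range(num_cache_rows))

    def access(self, storage, subarray_id, storage_row):
        for cache_row, tag in self.tags.items():
            if tag == (subarray_id, storage_row):
                return self.data[cache_row]              # hit: use the cached version
        if self.free:
            cache_row = self.free.pop()                  # a free cache row is available
        else:
            cache_row, victim_tag = next(iter(self.tags.items()))
            storage[victim_tag] = self.data[cache_row]   # writeback to the storage subarray
            del self.tags[cache_row]
        self.tags[cache_row] = (subarray_id, storage_row)
        self.data[cache_row] = list(storage[(subarray_id, storage_row)])  # load into cache
        return self.data[cache_row]

storage = {("storage-0", r): [r % 2] * 4 for r in range(4)}   # toy storage rows
cache = CacheSubarrayModel(num_cache_rows=2)
cache.access(storage, "storage-0", 0)
cache.access(storage, "storage-0", 1)
cache.access(storage, "storage-0", 2)   # no free row: evicts, writes back, then loads
```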
Attempted host, controller, and/or other accesses to data values stored in rows of a long digit line subarray that have already been moved to (e.g., cached in) the short digit line subarrays may be redirected to use the version cached in the short digit line subarray (e.g., for consistency, efficiency, speed, etc.). A particular short digit line (e.g., cache) subarray also may be associated with one or more (e.g., a set) of long digit line (e.g., storage) subarrays. For example, a same row from a storage subarray might be cached in a corresponding same row of a cache subarray across several corresponding groups (e.g., partitions) of partitioned subarrays. This may reduce complexity for the cache controller in determining source and destination locations for the data movements and/or may allow parallel data movement to be performed between the long digit line and short digit line subarrays in one or more of the partitions, as described herein.
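As one hypothetical way to realize the fixed association just mentioned, the sketch below maps storage row N of any storage subarray in a partition to row N of that partition's cache subarray; the naming scheme is an assumption made only for illustration.

```python
# Hypothetical direct mapping: the cached row index mirrors the storage
# row index, and each partition has one designated cache subarray, which
# keeps source/destination bookkeeping simple and lets the same movement
# be repeated in parallel across partitions.

def cache_location(partition, storage_subarray, storage_row):
    cache_subarray = f"cache-{partition}"     # one cache subarray per partition (assumed)
    return cache_subarray, storage_row        # same row index in the cache subarray

# Row 37 of any storage subarray in partition 0 is cached in row 37 of cache-0.
assert cache_location(0, "storage-0-1", 37) == ("cache-0", 37)
assert cache_location(1, "storage-1-3", 37) == ("cache-1", 37)
```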
In various embodiments, the memory device 120 can include isolation circuitry (e.g., isolation stripes 172 in
The isolation stripe 372 can, in some embodiments, include a first isolation transistor 332 coupled to the first portion of the shared I/O line (e.g., corresponding to partition 128-0) to selectably control data movement from the first partition to the second partition and a second isolation transistor 333 coupled to the second portion of the shared I/O line (e.g., corresponding to partition 128-1) to selectably control data movement from the second partition to the first partition. As shown in
In some embodiments, as shown in
For example, portion 462-1 of subarray 0 (425-0) in
As illustrated in
As described in connection with
As described in connection with
The column select circuitry (e.g., 358 in
As such, with 2048 portions of subarrays each having eight columns (e.g., subarray portion 462-1 of each of subarrays 425-0, . . . , 426-N−1), and each configured to couple to a different shared I/O line (e.g., 455-1 through 455-M), 2048 data values (e.g., bits) could be moved to the plurality of shared I/O lines at substantially the same point in time (e.g., in parallel). Accordingly, the plurality of shared I/O lines might be, for example, at least a thousand bits wide (e.g., 2048 bits wide), such as to increase the speed, rate, and/or efficiency of data movement in a DRAM implementation (e.g., relative to a 64 bit wide data path).
As illustrated in
As described herein, a cache controller 171 can be coupled to a bank of a memory device (e.g., 121) to execute a command to move data in the bank from a source location (e.g., long digit line subarray 426-N−1) to a destination location (e.g., short digit line subarray 425-0) and vice versa (e.g., subsequent to performance of an operation thereon). A bank section can, in various embodiments, include a plurality of subarrays of memory cells in the bank section (e.g., subarrays 125-0 through 126-N−1 and 425-0 through 426-N−1). The bank section can, in various embodiments, further include sensing circuitry (e.g., 150) coupled to the plurality of subarrays via a plurality of columns (e.g., 322-0, 422-0, and 422-1) of the memory cells. The sensing circuitry can include a sense amplifier and/or a compute component (e.g., 206 and 231, respectively, in
The bank section can, in various embodiments, further include a shared I/O line (e.g., 155, 355, 455-1, and 455-M) to couple the source location and the destination location to move the data. In addition, the cache controller 171 and/or the controller 140 can be configured to direct the plurality of subarrays and the sensing circuitry to perform a data write operation on the moved data to the destination location in the bank section (e.g., a selected memory cell in a particular row and/or column of a different selected subarray).
In various embodiments, the apparatus can include a sensing component stripe (e.g., 124 and 424) including a number of sense amplifiers and/or compute components that corresponds to a number of columns of the memory cells (e.g., where each column of memory cells is configured to couple to a sense amplifier and/or a compute component). The number of sensing component stripes in the bank section (e.g., 424-0 through 424-N−1) can correspond to a number of subarrays in the bank section (e.g., 425-0 through 426-N−1).
The number of sense amplifiers and/or compute components can be selectably (e.g., sequentially) coupled to the shared I/O line (e.g., as shown by column select circuitry at 358-1, 358-2, 359-1, and 359-2 in
A source sensing component stripe (e.g., 124 and 424) can include a number of sense amplifiers and/or compute components that can be selected and configured to move (e.g., copy, transfer, and/or transport) data values (e.g., a number of bits) sensed from a row of the source location in parallel to a plurality of shared I/O lines. For example, in response to commands for sequential sensing through the column select circuitry, the data values stored in memory cells of selected columns of a row of the subarray can be sensed by and stored (cached) in the sense amplifiers and/or compute components of the sensing component stripe until a number of data values (e.g., the number of bits) reaches the number of data values stored in the row and/or a threshold (e.g., the number of sense amplifiers and/or compute components in the sensing component stripe), and the data values can then be moved (e.g., copied, transferred, and/or transported) via the plurality of shared I/O lines. In some embodiments, the threshold amount of data can correspond to the at least a thousand bit width of the plurality of shared I/O lines.
The cache controller 171 can, as described herein, be configured to move the data values from a selected row and a selected column in the source location to a selected row and a selected column in the destination location via the shared I/O line. In various embodiments, the data values can be moved in response to commands by the cache controller 171 coupled to a particular subarray 425-0, . . . , 426-N−1 and/or a particular sensing component stripe 424-0, . . . , 424-N−1 of the respective subarray. The data values in rows of a source (e.g., first) subarray may be moved sequentially to respective rows of a destination (e.g., second) subarray. In various embodiments, each subarray may include 128, 256, 512, 1024 rows, among other numbers of rows, depending upon whether a particular subarray is a short digit line subarray or a long digit line subarray. For example, the data values may, in some embodiments, be moved from a first row of the source subarray to a respective first row of the destination subarray, then moved from a second row of the source subarray to a respective second row of the destination subarray, followed by movement from a third row of the source subarray to a respective third row of the destination subarray, and so on until reaching, for example, a last row of the source subarray or a last row of the destination subarray. As described herein, the respective subarrays can be in the same partition or in different partitions.
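A minimal sketch of the sequential row-by-row movement described above, stopping when either the source or the destination subarray runs out of rows; row contents are plain Python lists and no hardware interface is implied.

```python
def move_rows_sequentially(source_rows, destination_rows):
    # Move row 0 to row 0, row 1 to row 1, and so on, until the last row
    # of whichever subarray (source or destination) is reached first.
    for i in range(min(len(source_rows), len(destination_rows))):
        destination_rows[i] = list(source_rows[i])

src = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]   # e.g., rows of a long digit line subarray
dst = [[0, 0, 0], [0, 0, 0]]              # e.g., rows of a shorter cache subarray
move_rows_sequentially(src, dst)          # rows 0 and 1 are moved, then movement stops
```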
In various embodiments, a selected row and a selected column in the source location (e.g., a first subarray) input to the cache controller 171 can be different from a selected row and a selected column in the destination location (e.g., a second subarray). As such, a location of the data in memory cells of the selected row and the selected column in the source subarray can be different from a location of the data moved to memory cells of the selected row and the selected column in the destination subarray. For example, the source location may be a particular row and digit lines of portion 462-1 of long digit line subarray 426-N−1 in
As described herein, a destination sensing component stripe (e.g., 124 and 424) can be the same as a source sensing component stripe. For example, a plurality of sense amplifiers and/or compute components can be selected and configured (e.g., depending on the command from the controller 140 and/or directions from the cache controller 171) to selectably move (e.g., copy, transfer, and/or transport) sensed data to the coupled shared I/O line and selectably receive the data from one of a plurality of coupled shared I/O lines (e.g., to be moved to the destination location). Selection of sense amplifiers and/or compute components in the destination sensing component stripe can be performed using the column select circuitry (e.g., 358-1, 358-2, 359-1, and 359-2 in
The controller 140 and/or the cache controller 171 can, according to some embodiments, be configured to write an amount of data (e.g., a number of data bits) selectably received by the plurality of selected sense amplifiers and/or compute components in the destination sensing component stripe to a selected row and columns of the destination location in the destination subarray. In some embodiments, the amount of data to write corresponds to the at least a thousand bit width of a plurality of shared I/O lines.
The destination sensing component stripe can, according to some embodiments, include a plurality of selected sense amplifiers and/or compute components configured to store received data values (e.g., bits) when an amount of received data values (e.g., the number of data bits) exceeds the at least a thousand bit width of the plurality of shared I/O lines. The controller 140 and/or cache controller 171 can, in various embodiments, be configured to write the stored data values (e.g., the number of data bits) to a selected row and columns in the destination location as a plurality of subsets. In some embodiments, the amount of data values of at least a first subset of the written data can correspond to the at least a thousand bit width of the plurality of shared I/O lines. According to some embodiments, the controller 140 and/or the cache controller 171 can be configured to write the stored data values (e.g., the number of data bits) to the selected row and columns in the destination location as a single set (e.g., not as subsets of data values).
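The following sketch illustrates, under the assumption of the 2048 bit shared I/O line width used in the examples herein, how received data might be written either as a plurality of subsets (when the amount exceeds that width) or as a single set; the write_fn callback is merely a stand-in for the directed write to the destination row.

```python
SHARED_IO_WIDTH = 2048   # example width taken from the text

def write_to_destination(received_bits, write_fn):
    if len(received_bits) <= SHARED_IO_WIDTH:
        write_fn(0, received_bits)                       # written as a single set
    else:
        for offset in range(0, len(received_bits), SHARED_IO_WIDTH):
            # Written as subsets, each no wider than the shared I/O lines.
            write_fn(offset, received_bits[offset:offset + SHARED_IO_WIDTH])

destination_row = [0] * 16384

def write_fn(offset, bits):
    destination_row[offset:offset + len(bits)] = bits    # stand-in for the directed write

write_to_destination([1] * 16384, write_fn)              # eight 2048-bit subsets
```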
As described herein, the controller 140 and/or the cache controller 171 can be coupled to a bank (e.g., 121) of a memory device (e.g., 120) to execute a command for parallel partitioned data movement in the bank. A bank in the memory device can include a plurality of partitions (e.g., 128-0, 128-1, . . . , 128-M−1 in
The bank can include sensing circuitry (e.g., 150 in
The bank also can include a plurality of shared I/O lines (e.g., 355 in
The controller 140 and/or the cache controller 171 can be configured to selectably direct the isolation circuitry to disconnect portions of the plurality of shared I/O lines corresponding to the first and second partitions. Disconnecting the portions may, for example, allow a first data movement (e.g., from a first subarray to a second subarray in a first partition) to be isolated from a parallel second data movement (e.g., from a first subarray to a second subarray in a second partition). The controller 140 and/or the cache controller 171 also can be configured to selectably direct the isolation circuitry to connect portions of the plurality of shared I/O lines corresponding to the first and second partitions. Connecting the portions may, for example, enable data movement from a subarray in the first partition to a subarray in the second partition.
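A sketch of this isolation control as a software model is shown below; the SharedIOLineModel class, its stripe indexing, and the can_move check are assumptions of the sketch, intended only to show how disconnecting portions permits parallel intra-partition movements while connecting them permits an inter-partition movement.

```python
class SharedIOLineModel:
    def __init__(self, num_partitions):
        # One isolation stripe sits between each pair of adjacent portions.
        self.connected = [False] * (num_partitions - 1)

    def disconnect(self, stripe_index):
        self.connected[stripe_index] = False

    def connect(self, stripe_index):
        self.connected[stripe_index] = True

    def can_move(self, src_partition, dst_partition):
        lo, hi = sorted((src_partition, dst_partition))
        # A movement must cross every isolation stripe between the partitions.
        return all(self.connected[lo:hi])

io = SharedIOLineModel(num_partitions=4)
io.disconnect(0)
assert io.can_move(0, 0) and io.can_move(1, 1)   # isolated: parallel movements within each partition
assert not io.can_move(0, 1)                     # inter-partition movement blocked while disconnected
io.connect(0)
assert io.can_move(0, 1)                         # connected: movement from partition 0 to partition 1
```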
The controller 140 and/or cache controller 171 can be configured to selectably direct the isolation circuitry to connect portions of the plurality of shared I/O lines corresponding to a third partition (not shown) and a fourth partition (e.g., partition 128-M−1 in
A row can be selected (e.g., opened by the controller 140 and/or the cache controller 171 via an appropriate select line) for the first sensing component stripe and the data values of the memory cells in the row can be sensed. After sensing, the first sensing component stripe can be coupled to the shared I/O line, along with coupling the second sensing component stripe to the same shared I/O line. The second sensing component stripe can still be in a pre-charge state (e.g., ready to accept data). After the data from the first sensing component stripe has been moved (e.g., driven) into the second sensing component stripe, the second sensing component stripe can fire (e.g., latch) to store the data into respective sense amplifiers and/or compute components. A row coupled to the second sensing component stripe can be opened (e.g., after latching the data) and the data that resides in the sense amplifiers and/or compute components can be written into the destination location of that row.
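The sequence in the preceding paragraph can be summarized, purely as a software analogy, by the sketch below; the list assignments stand in for opening a row, sensing, coupling the stripes to the shared I/O line, firing (latching) the destination stripe, and writing the destination row.

```python
def move_row_between_stripes(source_subarray, src_row,
                             destination_subarray, dst_row,
                             source_stripe, destination_stripe, shared_io):
    source_stripe[:] = source_subarray[src_row]               # open source row and sense
    shared_io[:] = source_stripe                              # couple source stripe; drive shared I/O line
    destination_stripe[:] = shared_io                         # destination stripe fires (latches) the data
    destination_subarray[dst_row] = list(destination_stripe)  # open destination row and write

src_subarray = [[1, 0, 1, 1]]
dst_subarray = [[0, 0, 0, 0]]
stripe_a, stripe_b, io_line = [0] * 4, [0] * 4, [0] * 4
move_row_between_stripes(src_subarray, 0, dst_subarray, 0, stripe_a, stripe_b, io_line)
assert dst_subarray[0] == [1, 0, 1, 1]
```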
In some embodiments, 2048 shared I/O lines can be configured as a 2048 bit wide shared I/O line. According to some embodiments, a number of cycles for moving the data from a first row in the source location to a second row in the destination location can be determined by dividing a number of columns in the array intersected by a row of memory cells in the array by the 2048 bit width of the plurality of shared I/O lines. For example, an array (e.g., a bank, a bank section, or a subarray thereof) can have 16,384 columns, which can correspond to 16,384 data values in a row; dividing by the 2048 bit width of the plurality of shared I/O lines intersecting the row yields eight cycles, where within each cycle a 2048 bit fraction of the data in the row is moved at substantially the same point in time (e.g., in parallel), such that all 16,384 data bits in the row are moved after completion of the eight cycles. For example, only one of a plurality (e.g., a subset of eight, as shown in
Alternatively or in addition, a bandwidth for moving the data from a first row in the source location to a second row in the destination location can be determined by dividing the number of columns in the array intersected by the row of memory cells in the array by the 2048 bit width of the plurality of shared I/O lines and multiplying the result by a clock rate of the controller. In some embodiments, determining a number of data values in a row of the array can be based upon the plurality of sense (digit) lines in the array.
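As a worked example of the arithmetic stated in the two preceding paragraphs, assuming a 16,384 column row, a 2048 bit wide plurality of shared I/O lines, and a placeholder clock rate:

```python
columns_per_row = 16384          # columns intersected by a row in the example array
shared_io_width = 2048           # bit width of the plurality of shared I/O lines
clock_rate_hz = 1_000_000_000    # placeholder clock rate (assumption for illustration)

cycles = columns_per_row // shared_io_width                         # 16384 / 2048 = 8 cycles
bandwidth = (columns_per_row // shared_io_width) * clock_rate_hz    # per the formula stated above

print(cycles)      # 8 cycles to move the full row, 2048 bits per cycle in parallel
print(bandwidth)
```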
In some embodiments, the source location in the first subarray and the destination location in the second subarray can be in a single bank section of a memory device (e.g., as shown in
In various embodiments, the controller 140 and/or the cache controller 171 can select (e.g., open via an appropriate select line) a first row of memory cells, which corresponds to the source location, for the first sensing component stripe to sense data stored therein, couple the plurality of shared I/O lines to the first sensing component stripe, and couple the second sensing component stripe to the plurality of shared I/O lines (e.g., via the column select circuitry 358-1, 358-2, 359-1, and 359-2 and/or the multiplexers 460-1 and 460-2). As such, the data values can be moved in parallel from the first sensing component stripe to the second sensing component stripe via the plurality of shared I/O lines. The first sensing component stripe can store (e.g., cache) the sensed data and the second sensing component stripe can store (e.g., cache) the moved data.
The controller 140 and/or the cache controller 171 can select (e.g., open via an appropriate select line) a second row of memory cells, which corresponds to the destination location, for the second sensing component stripe (e.g., via the column select circuitry 358-1, 358-2, 359-1, and 359-2 and/or the multiplexers 460-1 and 460-2). The controller 140 and/or the cache controller 171 can then direct writing the data moved to the second sensing component stripe to the destination location in the second row of memory cells.
The shared I/O line can be shared between some or all sensing component stripes. In various embodiments, one sensing component stripe or one pair of sensing component stripes (e.g., coupling a source location and a destination location) can communicate with the shared I/O line at any given time. As described herein, a source row of a source subarray (e.g., any one of 512 rows) can be different from (e.g., need not match) a destination row of a destination subarray, where the source and destination subarrays can, in various embodiments, be in the same or different banks and bank sections of memory cells. Moreover, a selected source column (e.g., any one of eight configured to be coupled to a particular shared I/O line) can be different from (e.g., need not match) a selected destination column of a destination subarray.
As described herein, an I/O line 455 can be shared by the second subset (e.g., the long digit line subarrays 426) and the sensing circuitry 424 of the first subset (e.g., the short digit line subarrays 425). The shared I/O line can be configured to selectably couple to the sensing circuitry of the first subset to enable movement of a data value stored in selected memory cells in a selected row of the second subset to the sensing circuitry of a selected subarray in the first subset.
The cache controller 171 can be configured to direct performance of an operation on the data value in the sensing circuitry of the selected subarray in the first subset. The cache controller can, in some embodiments, be configured to direct movement of the data value from the sensing circuitry 450 of the selected subarray 425 in the first subset to a selected memory cell in a selected row of the selected subarray prior to performance of the operation thereon by the sensing circuitry. For example, the data value may be moved from the sensing circuitry 450 to be saved in a memory cell in the short digit line subarray 425 before the operation has been performed on the data value. The cache controller can, in some embodiments, be configured to direct movement of the data value from the sensing circuitry 450 of the selected subarray 425 in the first subset to a selected memory cell in a selected row of the selected subarray subsequent to performance of the operation thereon by the sensing circuitry. For example, the data value may be moved from the sensing circuitry 450 to be saved in the memory cell in the short digit line subarray 425 after the operation has been performed on the data value in the sensing circuitry 450. This may be the first time the data value is saved in the memory cell in the short digit line subarray 425 or the data value on which the operation was performed may be saved by overwriting the data value previously saved in the memory cell.
The cache controller 171 can be configured to direct movement, via the shared I/O line 455, of the data value on which the operation has been performed from the sensing circuitry 450 of the selected subarray in the first subset (e.g., a selected short digit line subarray 425) to a selected row in the selected subarray in the second subset (e.g., a selected long digit line subarray 426). A plurality of shared I/O lines 455-1, 455-2, . . . , 455-M can be configured to selectably couple to the sensing circuitry 450 of the plurality of subarrays to selectably enable parallel movement of a plurality of data values stored in a row of the second subset to a corresponding plurality of sense amplifiers and/or compute components in selectably coupled sensing circuitry of the first subset. The plurality of shared I/O lines 455-1, 455-2, . . . , 455-M can, in some embodiments, be configured to selectably couple to the sensing circuitry 450 of the plurality of subarrays to selectably enable parallel movement of a plurality of data values to selectably coupled sensing circuitry of the first subset from a corresponding plurality of sense amplifiers that sense the plurality of data values stored in a row of the second subset. In some embodiments, the plurality of sense amplifiers can be included without coupled compute components in the sensing circuitry for the second subset. The number of the plurality of shared I/O lines can, in some embodiments, correspond to the bit width of the shared I/O line.
The sensing circuitry 450 described herein can be included in a plurality of sensing component stripes 424-0, . . . , 424-N−1 and each sensing component stripe can be physically associated with a respective subarray 425-0, . . . , 426-N−1 of the first and second subsets of the plurality of subarrays in the bank. A number of a plurality of sensing component stripes in a bank of the memory device can correspond to a number of the plurality of subarrays in the first and second subsets in the bank. Each sensing component stripe can be coupled to the respective subarray of the first and second subsets of the plurality of subarrays and the I/O line can be selectably shared by the sensing circuitry 450 in a coupled pair of the plurality of sensing component stripes.
As shown in sensing component stripe 424-0 associated with short digit line subarray 425-0, a sensing component stripe can be configured to include a number of a plurality of sense amplifiers 406 and compute components 431 that corresponds to a number of a plurality of columns 422 of the memory cells in the first subset configured for cache operations. The number of sense amplifiers and compute components in the sensing component stripe 424-0 can be selectably coupled to a shared I/O line (e.g., each of the respective sense amplifiers and/or compute components can be selectably coupled to one of shared I/O lines 455-1, 455-2, . . . , 455-M).
As shown in sensing component stripe 424-N−1 associated with long digit line subarray 426-N−1, a sensing component stripe can be configured to include a number of a plurality of sense amplifiers 406 (e.g., without compute components) that corresponds to a number of a plurality of columns 422 of the memory cells in the second subset configured for data storage. The number of sense amplifiers in the sensing component stripe 424-N−1 can be selectably coupled to a shared I/O line (e.g., each of the respective sense amplifiers can be selectably coupled to one of shared I/O lines 455-1, 455-2, . . . , 455-M).
In some embodiments, the first subset (e.g., short digit line subarrays 425) of the plurality of subarrays can be a number of subarrays of PIM DRAM cells. By comparison, in some embodiments, the second subset (e.g., long digit line subarrays 426) of the plurality of subarrays can be, or can include, a number of subarrays of memory cells other than PIM DRAM cells. For example, as previously described, the memory cells of the second subset can be associated with sensing circuitry formed without compute components, such that the processing functionality is reduced or eliminated. Alternatively or in addition, memory cells of a type or types other than DRAM may be utilized in the long digit line subarrays for storage of data.
In various embodiments, as shown in
The memory device 120 described herein can include the first subset of a plurality of subarrays, the second subset of the plurality of subarrays, and a plurality of partitions (e.g., 128-0, 128-1, . . . , 128-M−1 in
The cache controller 171 can, in some embodiments, be configured to selectably direct the isolation circuitry to disconnect the first portion of the shared I/O line from the second portion of the shared I/O line during parallel directed data movements, where a first directed data movement is within the first partition and a second directed data movement is within the second partition. For example, the first directed data movement, via the first portion of the shared I/O line (e.g., corresponding to partition 128-0), can be from a first subarray in the second subset (e.g., long digit line subarray 126-0) to a first subarray in the first subset (e.g., short digit line subarray 125-0). The second directed data movement, via the second portion of the shared I/O line (e.g., corresponding to partition 128-1), can be from a second subarray in the second subset (e.g., long digit line subarray 126-2 (not shown)) to a second subarray in the first subset (e.g., short digit line subarray 125-2).
A third directed data movement, via the first portion of the shared I/O line (e.g., corresponding to partition 128-0), can be from a first subarray in the first subset (e.g., short digit line subarray 125-0), subsequent to performance of an operation by sensing circuitry of the first subarray (e.g., in sensing component stripe 124-0) on a first data value, to a first subarray in the second subset (e.g., long digit line subarray 126-0). A fourth directed data movement, via the second portion of the shared I/O line (e.g., corresponding to partition 128-1), can be from a second subarray in the first subset (e.g., short digit line subarray 125-2), subsequent to performance of an operation by sensing circuitry of the second subarray (e.g., in sensing component stripe 124-2) on a second data value, to a second subarray in the second subset (e.g., long digit line subarray 126-2 (not shown)). For example, the third directed data movement can be within the first partition (e.g., 128-0) and the fourth directed data movement can be (e.g., performed in parallel) within the second partition (e.g., 128-1).
In some embodiments, data values on which an operation has been performed in a short digit line cache subarray can be returned to the same long digit line storage subarray from which the data values were originally sent and/or the data values on which the operation has been performed can be returned for storage in a long digit line subarray that is different from the storage subarray from which the data values were originally sent. For example, the third directed data movement described below can correspond to a fifth directed data movement and the fourth directed data movement described below can correspond to a sixth directed data movement when the respective data values are also returned to the long digit line subarrays from which the data values were originally sent, as just described. Hence, the data values on which the operation has been performed can be returned for storage in more than one long digit line subarray.
As such, a third directed data movement, via the first portion of the shared I/O line (e.g., corresponding to partition 128-0), can be from a first subarray in the first subset (e.g., short digit line subarray 125-0), subsequent to performance of an operation by sensing circuitry of the first subarray on a first data value, to a third subarray in the second subset (e.g., long digit line subarray 126-1). In some embodiments, a fourth directed data movement, via the second portion of the shared I/O line (e.g., corresponding to partition 128-1), can be from a second subarray in the first subset (e.g., short digit line subarray 125-2), subsequent to performance of an operation by sensing circuitry of the second subarray on a second data value, to a fourth subarray in the second subset (e.g., long digit line subarray 126-2 (not shown)). For example, the third directed data movement can be within the first partition (e.g., 128-0) and the fourth directed data movement can be (e.g., performed in parallel) within the second partition (e.g., 128-1).
The cache controller 171 can, in various embodiments, be configured to selectably direct the isolation circuitry to connect the first portion (e.g., corresponding to partition 128-0) to the second portion (e.g., corresponding to any partition 128-1, . . . , 128-M−1) during a directed data movement. The directed data movement, via the connected first and second portions of the shared I/O line, can be from a subarray in the second subset in the second portion (e.g., long digit line subarray 126-N−1) to a subarray in the first subset in the first portion (e.g., short digit line subarray 125-0). The cache controller 171 also can, in various embodiments, be configured to selectably direct the isolation circuitry to connect the first portion to the second portion during a directed data movement, where the directed data movement, via the connected first and second portions of the shared I/O line, can be from the subarray in the first subset in the first portion (e.g., short digit line subarray 125-0), subsequent to performance of an operation on a data value, to a subarray in the second subset in the second portion (e.g., long digit line subarray 126-N−1 from which the data value was originally sent and/or to any other long digit line subarray in partitions 128-1, . . . , 128-M−1).
The number of subarrays can, in various embodiments, differ between a plurality of partitions in a bank and/or between banks. The ratio of long digit line subarrays to short digit line subarrays, or whether either type of subarray is present in a partition before connection of partitions, also may differ between a plurality of partitions in a bank and/or between banks.
As described herein, a sensing component stripe (e.g., 424-N−1) can include a number of sense amplifiers configured to move an amount of data sensed from a row (e.g., one or more of rows 118) of a first subarray in the second subset (e.g., long digit line subarray 426-N−1) in parallel to a plurality of shared I/O lines (e.g., 455-1, 455-2, . . . , 455-M), where the amount of data corresponds to at least a thousand bit width of the plurality of shared I/O lines. A sensing component stripe (e.g., 424-0) associated with a first subarray in the first subset (e.g., short digit line subarray 425-0) can include a number of sense amplifiers 406 and compute components 431 configured to receive (e.g., cache) an amount of data sensed from the row of the first subarray in the second subset and moved in parallel via the plurality of shared I/O lines. The cache controller 171 can be configured to direct performance of an operation on at least one data value in the received amount of data by at least one compute component in the sensing component stripe associated with short digit line subarray.
Although the description herein has referred to a few portions and partitions for purposes of clarity, the apparatuses and methods presented herein can be adapted to any number of portions of the shared I/O lines, partitions, subarrays, and/or rows therein. For example, the controller 140 and/or the cache controller 171 can send signals to direct connection and disconnection via the isolation circuitry of respective portions of the shared I/O lines from a first subarray in a bank to a last subarray in the bank to enable data movement from a subarray in any partition to a subarray in any other partition (e.g., the partitions can be adjacent and/or separated by a number of other partitions). In addition, although two disconnected portions of the shared I/O lines were described to enable parallel data movement within two respective paired partitions, the controller 140 and/or the cache controller 171 can send signals to direct connection and disconnection via the isolation circuitry of any number of portions of the shared I/O lines to enable parallel data movement within any number of respective paired partitions. Moreover, the data can be selectably moved in parallel in the respective portions of the shared I/O lines in either of the first direction and/or the second direction.
As described herein, a method is provided for operating a memory device 120 to perform cache operations by execution of non-transitory instructions by a processing resource. The method can include sensing a data value in a selected memory cell in a selected first row (e.g., one or more of rows 118) of a selected first subarray (e.g., long digit line subarray 426-N−1) in a bank 121 of the memory device. The sensed data value can be moved to a sensing component stripe (e.g., 424-0) coupled to a selected second subarray (e.g., short digit line subarray 425-0) in the bank. In some embodiments, the selected second subarray can be configured with a number of memory cells in a column of the selected second subarray that is at most half of a number of memory cells in a column of the selected first subarray. An operation can be performed on the sensed data value in the sensing component stripe coupled to the selected second subarray. The data value on which the operation has been performed can be moved from the sensing component stripe (e.g., 424-0) to a memory cell in a selected row of a selected subarray.
The data value on which the operation has been performed can be, in various embodiments, selectably moved to a number of locations, where the data value being moved to one location does not preclude the data value being moved to one or more other locations. For instance, the data value can be moved from the sensing component stripe (e.g., 424-0) to the selected memory cell in the selected first row of the selected first subarray in a same bank of the memory device. For example, the data value on which the operation has been performed can be returned to the memory cell from which it was originally sent. The data value can be moved from the sensing component stripe to a selected memory cell in a selected second row of the selected first subarray in the same bank. For example, the data value can be returned to a memory cell in a different row in the subarray from which it was sent. The data value can be moved from the sensing component stripe to a selected memory cell in a selected row of a selected second subarray in the same bank. For example, the data value can be returned to a memory cell in a row of a subarray that is a different subarray from which it was sent.
The data value can be moved from the sensing component stripe to a selected memory cell in each of a plurality of selected rows of the selected first subarray in the same bank. For example, the data value can be returned to a memory cell in each of more than one row in the subarray from which it was sent. The data value can be moved from the sensing component stripe to a selected memory cell in each of a plurality of selected rows, where each selected row is in a respective subarray of a plurality of subarrays in the same bank. For example, the data value can be returned to a memory cell in each of more than one row, where each row is in a different subarray in the bank from which it was sent.
In some embodiments, the data value can be moved from the sensing component stripe to a selected memory cell in a selected row of a selected subarray in a different bank. For example, the data value on which the operation has been performed can be returned to a memory cell in a subarray that is in a different bank of the memory device from which it was sent. Although movement of data values via the shared I/O line may be within the same bank, the connection circuitry 232-1 and 232-2 described in connection with
As described herein, the method can, in some embodiments, include storing the sensed data value in a first sensing component stripe (e.g., 424-N−1) coupled to the selected first subarray (e.g., 426-N−1). The sensed data value can be moved from the first sensing component stripe to a second sensing component stripe (e.g., 424-0) coupled to the selected second subarray (e.g., 425-0). The sensed data value can be stored in a memory cell in a selected second row (e.g., one or more of rows 119) of the selected second subarray. In various embodiments, the sensed data value can be saved in the selected second subarray prior to and/or subsequent to performance of the operation thereon.
The method can include performing a plurality (e.g., a sequence) of operations on the sensed data value in the sensing component stripe coupled to the selected second subarray. For example, a number of data values can be moved from a row of a long digit line subarray (e.g., 426-N−1) to a short digit line subarray (e.g., 425-0) for performance of a sequence of operations with a speed, rate, and/or efficiency that is improved relative to a long digit line subarray. Each operation may be performed in the short digit line subarray with the improved speed, rate, and/or efficiency and that advantage may be proportionally increased with each additional operation in the sequence of operations. The data value on which the plurality of operations has been performed can be moved from the sensing component stripe to a memory cell in a selected row of a selected subarray in a number of locations, as described herein.
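A brief sketch of this amortization follows: a cached row receives a sequence of operations in the sensing component stripe of the short digit line subarray, and only the final result is moved back. The particular bitwise operations are arbitrary placeholders.

```python
def apply_sequence_in_cache(cached_row, operations):
    # Each operation is performed on the cached data values in place of
    # repeated movements to and from the long digit line (storage) subarray.
    for op in operations:
        cached_row = [op(bit) for bit in cached_row]
    return cached_row

cached = [1, 0, 1, 0]
result = apply_sequence_in_cache(cached, [lambda b: b ^ 1, lambda b: b & 1])
# "result" would then be moved (written back) to the selected storage subarray row(s).
```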
The method can, in some embodiments, include selectably coupling a first sensing component stripe (e.g., 424-N−1) coupled to the selected first subarray (e.g., 426-N−1) and a second sensing component stripe (e.g., 424-0) coupled to the selected second subarray (e.g., 425-0) via an I/O line (e.g., 455-1) shared by the first and second sensing component stripes. The method can include moving, via the shared I/O line, the sensed data value from the first sensing component stripe coupled to the selected first subarray to the second sensing component stripe coupled to the selected second subarray. The method can, in various embodiments as described herein, include moving, via a shared I/O line (e.g., which may be different from the previous shared I/O line), the data value on which the operation has been performed from the second sensing component stripe coupled to the selected second subarray (e.g., 425-0) to the first sensing component stripe coupled to the selected first subarray (e.g., one or more subarrays selected from 426-0, . . . , 426-N−1). The data value on which the operation has been performed can be written to at least one selected memory cell of at least one selected row of the selected first subarray.
While example embodiments including various combinations and configurations of controller, cache controller, short digit line subarrays, long digit line subarrays, sensing circuitry, sense amplifiers, compute components, sensing component stripes, shared I/O lines, column select circuitry, multiplexers, connection circuitry, isolation stripes, etc., have been illustrated and described herein, embodiments of the present disclosure are not limited to those combinations explicitly recited herein. Other combinations and configurations of the controller, cache controller, short digit line subarrays, long digit line subarrays, sensing circuitry, sense amplifiers, compute components, sensing component stripes, shared I/O lines, column select circuitry, multiplexers, connection circuitry, isolation stripes, etc., disclosed herein are expressly included within the scope of this disclosure.
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and processes are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
This application is a Continuation of U.S. application Ser. No. 16/679,553, filed on Nov. 11, 2019, which will issue as U.S. Pat. No. 11,126,557 on Sep. 21, 2021, which is a Continuation of U.S. application Ser. No. 15/081,492, filed on Mar. 25, 2016, which issued as U.S. Pat. No. 10,474,581 on Nov. 12, 2019, the contents of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4380046 | Fung | Apr 1983 | A |
4435792 | Bechtolsheim | Mar 1984 | A |
4435793 | Ochii | Mar 1984 | A |
4727474 | Batcher | Feb 1988 | A |
4843264 | Galbraith | Jun 1989 | A |
4958378 | Bell | Sep 1990 | A |
4977542 | Matsuda et al. | Dec 1990 | A |
5023838 | Herbert | Jun 1991 | A |
5034636 | Reis et al. | Jul 1991 | A |
5201039 | Sakamura | Apr 1993 | A |
5210850 | Kelly et al. | May 1993 | A |
5253308 | Johnson | Oct 1993 | A |
5276643 | Hoffmann et al. | Jan 1994 | A |
5325519 | Long et al. | Jun 1994 | A |
5367488 | An | Nov 1994 | A |
5379257 | Matsumura et al. | Jan 1995 | A |
5386379 | Ali-Yahia et al. | Jan 1995 | A |
5398213 | Yeon et al. | Mar 1995 | A |
5440482 | Davis | Aug 1995 | A |
5446690 | Tanaka et al. | Aug 1995 | A |
5473576 | Matsui | Dec 1995 | A |
5481500 | Reohr et al. | Jan 1996 | A |
5485373 | Davis et al. | Jan 1996 | A |
5506811 | McLaury | Apr 1996 | A |
5519847 | Fandrich et al. | May 1996 | A |
5615404 | Knoll et al. | Mar 1997 | A |
5638128 | Hoogenboom | Jun 1997 | A |
5638317 | Tran | Jun 1997 | A |
5654936 | Cho | Aug 1997 | A |
5663922 | Tailliet | Sep 1997 | A |
5678021 | Pawate et al. | Oct 1997 | A |
5724291 | Matano | Mar 1998 | A |
5724366 | Furutani | Mar 1998 | A |
5751987 | Mahant-Shetti et al. | May 1998 | A |
5787458 | Miwa | Jul 1998 | A |
5854636 | Watanabe et al. | Dec 1998 | A |
5867429 | Chen et al. | Feb 1999 | A |
5870504 | Nemoto et al. | Feb 1999 | A |
5915084 | Wendell | Jun 1999 | A |
5935263 | Keeth et al. | Aug 1999 | A |
5986942 | Sugibayashi | Nov 1999 | A |
5991209 | Chow | Nov 1999 | A |
5991785 | Alidina et al. | Nov 1999 | A |
5991861 | Young | Nov 1999 | A |
6005799 | Rao | Dec 1999 | A |
6009020 | Nagata | Dec 1999 | A |
6092186 | Betker et al. | Jul 2000 | A |
6122211 | Morgan et al. | Sep 2000 | A |
6125071 | Kohno et al. | Sep 2000 | A |
6134164 | Lattimore et al. | Oct 2000 | A |
6147514 | Shiratake | Nov 2000 | A |
6151244 | Fujino et al. | Nov 2000 | A |
6157578 | Brady | Dec 2000 | A |
6163862 | Adams et al. | Dec 2000 | A |
6166942 | Vo et al. | Dec 2000 | A |
6172918 | Hidaka | Jan 2001 | B1 |
6175514 | Henderson | Jan 2001 | B1 |
6181698 | Hariguchi | Jan 2001 | B1 |
6208544 | Beadle et al. | Mar 2001 | B1 |
6226215 | Yoon | May 2001 | B1 |
6301153 | Takeuchi et al. | Oct 2001 | B1 |
6301164 | Manning et al. | Oct 2001 | B1 |
6304477 | Naji | Oct 2001 | B1 |
6389507 | Sherman | May 2002 | B1 |
6418498 | Martwick | Jul 2002 | B1 |
6466499 | Blodgett | Oct 2002 | B1 |
6510098 | Taylor | Jan 2003 | B1 |
6538928 | Mobley | Mar 2003 | B1 |
6563754 | Lien et al. | May 2003 | B1 |
6578058 | Nygaard | Jun 2003 | B1 |
6731542 | Le et al. | May 2004 | B1 |
6754746 | Leung et al. | Jun 2004 | B1 |
6768679 | Le et al. | Jul 2004 | B1 |
6807614 | Chung | Oct 2004 | B2 |
6816422 | Hamade et al. | Nov 2004 | B2 |
6819612 | Achter | Nov 2004 | B1 |
6894549 | Eliason | May 2005 | B2 |
6943579 | Hazanchuk et al. | Sep 2005 | B1 |
6948056 | Roth et al. | Sep 2005 | B1 |
6950771 | Fan et al. | Sep 2005 | B1 |
6950898 | Merritt et al. | Sep 2005 | B2 |
6956770 | Khalid et al. | Oct 2005 | B2 |
6961272 | Schreck | Nov 2005 | B2 |
6965648 | Smith et al. | Nov 2005 | B1 |
6985394 | Kim | Jan 2006 | B2 |
6987693 | Cernea et al. | Jan 2006 | B2 |
7020017 | Chen et al. | Mar 2006 | B2 |
7028170 | Saulsbury | Apr 2006 | B2 |
7045834 | Tran et al. | May 2006 | B2 |
7054178 | Shiah et al. | May 2006 | B1 |
7061817 | Raad et al. | Jun 2006 | B2 |
7079407 | Dimitrelis | Jul 2006 | B1 |
7173857 | Kato et al. | Feb 2007 | B2 |
7187585 | Li et al. | Mar 2007 | B2 |
7196928 | Chen | Mar 2007 | B2 |
7260565 | Lee et al. | Aug 2007 | B2 |
7260672 | Garney | Aug 2007 | B2 |
7372715 | Han | May 2008 | B2 |
7400532 | Aritome | Jul 2008 | B2 |
7406494 | Magee | Jul 2008 | B2 |
7447720 | Beaumont | Nov 2008 | B2 |
7454451 | Beaumont | Nov 2008 | B2 |
7457181 | Lee et al. | Nov 2008 | B2 |
7535769 | Cernea | May 2009 | B2 |
7546438 | Chung | Jun 2009 | B2 |
7562198 | Noda et al. | Jul 2009 | B2 |
7574466 | Beaumont | Aug 2009 | B2 |
7602647 | Li et al. | Oct 2009 | B2 |
7663928 | Tsai et al. | Feb 2010 | B2 |
7685365 | Rajwar et al. | Mar 2010 | B2 |
7692466 | Ahmadi | Apr 2010 | B2 |
7752417 | Manczak et al. | Jul 2010 | B2 |
7764558 | Abe et al. | Jul 2010 | B2 |
7791962 | Noda et al. | Sep 2010 | B2 |
7796453 | Riho et al. | Sep 2010 | B2 |
7805587 | Van Dyke et al. | Sep 2010 | B1 |
7808854 | Takase | Oct 2010 | B2 |
7827372 | Bink et al. | Nov 2010 | B2 |
7869273 | Lee et al. | Jan 2011 | B2 |
7898864 | Dong | Mar 2011 | B2 |
7924628 | Danon et al. | Apr 2011 | B2 |
7937535 | Ozer et al. | May 2011 | B2 |
7957206 | Bauser | Jun 2011 | B2 |
7979667 | Allen et al. | Jul 2011 | B2 |
7996749 | Ding et al. | Aug 2011 | B2 |
8042082 | Solomon | Oct 2011 | B2 |
8045391 | Mokhlesi | Oct 2011 | B2 |
8059438 | Chang et al. | Nov 2011 | B2 |
8095825 | Hirotsu et al. | Jan 2012 | B2 |
8117462 | Snapp et al. | Feb 2012 | B2 |
8164942 | Gebara et al. | Apr 2012 | B2 |
8208328 | Hong | Jun 2012 | B2 |
8213248 | Moon et al. | Jul 2012 | B2 |
8223568 | Seo | Jul 2012 | B2 |
8238173 | Akerib et al. | Aug 2012 | B2 |
8274841 | Shimano et al. | Sep 2012 | B2 |
8279683 | Klein | Oct 2012 | B2 |
8310884 | Iwai et al. | Nov 2012 | B2 |
8332367 | Bhattacherjee et al. | Dec 2012 | B2 |
8339824 | Cooke | Dec 2012 | B2 |
8339883 | Yu et al. | Dec 2012 | B2 |
8347154 | Bahali et al. | Jan 2013 | B2 |
8351292 | Matano | Jan 2013 | B2 |
8356144 | Hessel et al. | Jan 2013 | B2 |
8417921 | Gonion et al. | Apr 2013 | B2 |
8462532 | Argyres | Jun 2013 | B1 |
8484276 | Carlson et al. | Jul 2013 | B2 |
8495438 | Roine | Jul 2013 | B2 |
8503250 | Demone | Aug 2013 | B2 |
8526239 | Kim | Sep 2013 | B2 |
8533245 | Cheung | Sep 2013 | B1 |
8555037 | Gonion | Oct 2013 | B2 |
8599613 | Abiko et al. | Dec 2013 | B2 |
8605015 | Guttag et al. | Dec 2013 | B2 |
8625376 | Jung et al. | Jan 2014 | B2 |
8644101 | Jun et al. | Feb 2014 | B2 |
8650232 | Stortz et al. | Feb 2014 | B2 |
8873272 | Lee | Oct 2014 | B2 |
8964496 | Manning | Feb 2015 | B2 |
8971124 | Manning | Mar 2015 | B1 |
9015390 | Klein | Apr 2015 | B2 |
9047193 | Lin et al. | Jun 2015 | B2 |
9165023 | Moskovich et al. | Oct 2015 | B2 |
9524771 | Sriramagiri et al. | Dec 2016 | B2 |
9606907 | Lee et al. | Mar 2017 | B2 |
20010007112 | Porterfield | Jul 2001 | A1 |
20010008492 | Higashiho | Jul 2001 | A1 |
20010010057 | Yamada | Jul 2001 | A1 |
20010028584 | Nakayama et al. | Oct 2001 | A1 |
20010043089 | Forbes et al. | Nov 2001 | A1 |
20020059355 | Peleg et al. | May 2002 | A1 |
20030167426 | Slobodnik | Sep 2003 | A1 |
20030222879 | Lin et al. | Dec 2003 | A1 |
20040073592 | Kim et al. | Apr 2004 | A1 |
20040073773 | Demjanenko | Apr 2004 | A1 |
20040085840 | Vali et al. | May 2004 | A1 |
20040095826 | Perner | May 2004 | A1 |
20040154002 | Ball et al. | Aug 2004 | A1 |
20040205289 | Srinivasan | Oct 2004 | A1 |
20040240251 | Nozawa et al. | Dec 2004 | A1 |
20050015557 | Wang et al. | Jan 2005 | A1 |
20050078514 | Scheuerlein et al. | Apr 2005 | A1 |
20050097417 | Agrawal et al. | May 2005 | A1 |
20050111275 | Kiehl | May 2005 | A1 |
20050146975 | Halbert et al. | Jul 2005 | A1 |
20060047937 | Selvaggi et al. | Mar 2006 | A1 |
20060069849 | Rudelic | Mar 2006 | A1 |
20060146623 | Mizuno et al. | Jul 2006 | A1 |
20060149804 | Luick et al. | Jul 2006 | A1 |
20060181917 | Kang et al. | Aug 2006 | A1 |
20060215432 | Wickeraad et al. | Sep 2006 | A1 |
20060225072 | Lari et al. | Oct 2006 | A1 |
20060291282 | Liu et al. | Dec 2006 | A1 |
20070103986 | Chen | May 2007 | A1 |
20070171747 | Hunter et al. | Jul 2007 | A1 |
20070180006 | Gyoten et al. | Aug 2007 | A1 |
20070180184 | Sakashita et al. | Aug 2007 | A1 |
20070195602 | Fong et al. | Aug 2007 | A1 |
20070285131 | Sohn | Dec 2007 | A1 |
20070285979 | Turner | Dec 2007 | A1 |
20070291532 | Tsuji | Dec 2007 | A1 |
20080025073 | Arsovski | Jan 2008 | A1 |
20080037333 | Kim et al. | Feb 2008 | A1 |
20080052711 | Forin et al. | Feb 2008 | A1 |
20080137388 | Krishnan et al. | Jun 2008 | A1 |
20080165601 | Matick et al. | Jul 2008 | A1 |
20080178053 | Gorman et al. | Jul 2008 | A1 |
20080215937 | Dreibelbis et al. | Sep 2008 | A1 |
20080285324 | Hattori et al. | Nov 2008 | A1 |
20090067218 | Graber | Mar 2009 | A1 |
20090154238 | Lee | Jun 2009 | A1 |
20090154273 | Borot et al. | Jun 2009 | A1 |
20090207679 | Takase | Aug 2009 | A1 |
20090254697 | Akerib | Oct 2009 | A1 |
20100067296 | Li | Mar 2010 | A1 |
20100091582 | Vali et al. | Apr 2010 | A1 |
20100172190 | Lavi et al. | Jul 2010 | A1 |
20100185904 | Chen | Jul 2010 | A1 |
20100210076 | Gruber et al. | Aug 2010 | A1 |
20100226183 | Kim | Sep 2010 | A1 |
20100283793 | Cameron et al. | Nov 2010 | A1 |
20100308858 | Noda et al. | Dec 2010 | A1 |
20100332895 | Billing et al. | Dec 2010 | A1 |
20110022786 | Yeh et al. | Jan 2011 | A1 |
20110051523 | Manabe et al. | Mar 2011 | A1 |
20110063919 | Chandrasekhar et al. | Mar 2011 | A1 |
20110093662 | Walker et al. | Apr 2011 | A1 |
20110103151 | Kim et al. | May 2011 | A1 |
20110119467 | Cadambi et al. | May 2011 | A1 |
20110122695 | Li et al. | May 2011 | A1 |
20110140741 | Zerbe et al. | Jun 2011 | A1 |
20110161590 | Guthrie et al. | Jun 2011 | A1 |
20110219260 | Nobunaga et al. | Sep 2011 | A1 |
20110267883 | Lee et al. | Nov 2011 | A1 |
20110317496 | Bunce et al. | Dec 2011 | A1 |
20120005397 | Lim et al. | Jan 2012 | A1 |
20120017039 | Margetts | Jan 2012 | A1 |
20120023281 | Kawasaki et al. | Jan 2012 | A1 |
20120120705 | Mitsubori et al. | May 2012 | A1 |
20120134216 | Singh | May 2012 | A1 |
20120134225 | Chow | May 2012 | A1 |
20120134226 | Chow | May 2012 | A1 |
20120140540 | Agam et al. | Jun 2012 | A1 |
20120182798 | Hosono et al. | Jul 2012 | A1 |
20120195146 | Jun et al. | Aug 2012 | A1 |
20120198310 | Tran et al. | Aug 2012 | A1 |
20120246380 | Akerib et al. | Sep 2012 | A1 |
20120265964 | Murata et al. | Oct 2012 | A1 |
20120281486 | Rao et al. | Nov 2012 | A1 |
20120303627 | Keeton et al. | Nov 2012 | A1 |
20130003467 | Klein | Jan 2013 | A1 |
20130061006 | Hein | Mar 2013 | A1 |
20130107623 | Kavalipurapu et al. | May 2013 | A1 |
20130117541 | Choquette et al. | May 2013 | A1 |
20130124783 | Yoon et al. | May 2013 | A1 |
20130132702 | Patel et al. | May 2013 | A1 |
20130138646 | Sirer et al. | May 2013 | A1 |
20130163362 | Kim | Jun 2013 | A1 |
20130173888 | Hansen et al. | Jul 2013 | A1 |
20130205114 | Badam et al. | Aug 2013 | A1 |
20130219112 | Okin et al. | Aug 2013 | A1 |
20130227361 | Bowers et al. | Aug 2013 | A1 |
20130279282 | Kim et al. | Oct 2013 | A1 |
20130283122 | Anholt et al. | Oct 2013 | A1 |
20130286705 | Grover et al. | Oct 2013 | A1 |
20130326154 | Haswell | Dec 2013 | A1 |
20130332707 | Gueron et al. | Dec 2013 | A1 |
20140146589 | Park et al. | May 2014 | A1 |
20140185395 | Seo | Jul 2014 | A1 |
20140215185 | Danielsen | Jul 2014 | A1 |
20140250279 | Manning | Sep 2014 | A1 |
20140344934 | Jorgensen | Nov 2014 | A1 |
20150029798 | Manning | Jan 2015 | A1 |
20150042380 | Manning | Feb 2015 | A1 |
20150063052 | Manning | Mar 2015 | A1 |
20150078108 | Cowles et al. | Mar 2015 | A1 |
20150120987 | Wheeler | Apr 2015 | A1 |
20150134713 | Wheeler | May 2015 | A1 |
20150221390 | Tokiwa | Aug 2015 | A1 |
20150270015 | Murphy et al. | Sep 2015 | A1 |
20150279466 | Manning | Oct 2015 | A1 |
20150324290 | Leidel | Nov 2015 | A1 |
20150325272 | Murphy | Nov 2015 | A1 |
20150347307 | Walker | Dec 2015 | A1 |
20150356009 | Wheeler et al. | Dec 2015 | A1 |
20150356022 | Leidel et al. | Dec 2015 | A1 |
20150357007 | Manning et al. | Dec 2015 | A1 |
20150357008 | Manning et al. | Dec 2015 | A1 |
20150357019 | Wheeler et al. | Dec 2015 | A1 |
20150357020 | Manning | Dec 2015 | A1 |
20150357021 | Hush | Dec 2015 | A1 |
20150357022 | Hush | Dec 2015 | A1 |
20150357023 | Hush | Dec 2015 | A1 |
20150357024 | Hush et al. | Dec 2015 | A1 |
20150357047 | Tiwari | Dec 2015 | A1 |
20160062672 | Wheeler | Mar 2016 | A1 |
20160062673 | Tiwari | Mar 2016 | A1 |
20160062692 | Finkbeiner et al. | Mar 2016 | A1 |
20160062733 | Tiwari | Mar 2016 | A1 |
20160063284 | Tiwari | Mar 2016 | A1 |
20160064045 | La Fratta | Mar 2016 | A1 |
20160064047 | Tiwari | Mar 2016 | A1 |
20160239278 | Che | Aug 2016 | A1 |
Number | Date | Country |
---|---|---|
102141905 | Aug 2011 | CN |
105161126 | Dec 2015 | CN |
105378847 | Mar 2016 | CN |
0214718 | Mar 1987 | EP |
2026209 | Feb 2009 | EP |
H0831168 | Feb 1996 | JP |
2009259193 | Mar 2015 | JP |
10-0211482 | Aug 1998 | KR |
10-2010-0134235 | Dec 2010 | KR |
10-2013-0049421 | May 2013 | KR |
2001065359 | Sep 2001 | WO |
2010079451 | Jul 2010 | WO |
2012019861 | Feb 2012 | WO |
2013062596 | May 2013 | WO |
2013081588 | Jun 2013 | WO |
2013095592 | Jun 2013 | WO |
2016144726 | Sep 2016 | WO |
Entry |
---|
International Search Report and Written Opinion for related PCT Application No. PCT/US2017/023159, dated Mar. 20, 2017, 13 pages. |
Office Action for related Taiwan Patent Application No. 106110040, dated Oct. 5, 2017, 15 pages. |
Office Action for related China Patent Application No. 201780019716.8, dated Dec. 2, 2020, 32 pgs. |
Boyd et al., “On the General Applicability of Instruction-Set Randomization”, Jul.-Sep. 2010, (14 pgs.), vol. 7, Issue 3, IEEE Transactions on Dependable and Secure Computing. |
Stojmenovic, “Multiplicative Circulant Networks Topological Properties and Communication Algorithms”, (25 pgs.), Discrete Applied Mathematics 77 (1997) 281-305. |
"4.9.3 MINLOC and MAXLOC", Jun. 12, 1995, (5 pgs.), Message Passing Interface Forum 1.1, retrieved from http://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html. |
Derby, et al., "A High-Performance Embedded DSP Core with Novel SIMD Features", Apr. 6-10, 2003, (4 pgs.), vol. 2, pp. 301-304, 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing. |
Debnath, Biplob, "BloomFlash: Bloom Filter on Flash-Based Storage", 2011 31st Annual Conference on Distributed Computing Systems, Jun. 20-24, 2011, 10 pgs. |
Pagiamtzis, Kostas, “Content-Addressable Memory Introduction”, Jun. 25, 2007, (6 pgs.), retrieved from: http://www.pagiamtzis.com/cam/camintro. |
Pagiamtzis, et al., “Content-Addressable Memory (CAM) Circuits and Architectures: A Tutorial and Survey”, Mar. 2006, (16 pgs.), vol. 41, No. 3, IEEE Journal of Solid-State Circuits. |
International Search Report and Written Opinion for PCT Application No. PCT/US2013/043702, dated Sep. 26, 2013, (11 pgs.). |
Elliot, et al., “Computational RAM: Implementing Processors in Memory”, Jan.-Mar. 1999, (10 pgs.), vol. 16, Issue 1, IEEE Design and Test of Computers Magazine. |
Dybdahl, et al., "Destructive-Read in Embedded DRAM, Impact on Power Consumption," Apr. 2006, (10 pgs.), vol. 2, Issue 2, Journal of Embedded Computing-Issues in embedded single-chip multicore architectures. |
Kogge, et al., "Processing In Memory: Chips to Petaflops," May 23, 1997, (8 pgs.), retrieved from: http://www.cs.ucf.edu/courses/cda5106/summer02/papers/kogge97PIM.pdf. |
Draper, et al., "The Architecture of the DIVA Processing-In-Memory Chip," Jun. 22-26, 2002, (12 pgs.), ICS '02, retrieved from: http://www.isi.edu/~draper/papers/ics02.pdf. |
Adibi, et al., "Processing-In-Memory Technology for Knowledge Discovery Algorithms," Jun. 25, 2006, (10 pgs.), Proceeding of the Second International Workshop on Data Management on New Hardware, retrieved from: http://www.cs.cmu.edu/~damon2006/pdf/adibi06inmemory.pdf. |
U.S. Appl. No. 13/449,082, entitled, "Methods and Apparatus for Pattern Matching," filed Apr. 17, 2012, (37 pgs.). |
U.S. Appl. No. 13/743,686, entitled, "Weighted Search and Compare in a Memory Device," filed Jan. 17, 2013, (25 pgs.). |
U.S. Appl. No. 13/774,636, entitled, "Memory as a Programmable Logic Device," filed Feb. 22, 2013, (30 pgs.). |
U.S. Appl. No. 13/774,553, entitled, "Neural Network in a Memory Device," filed Feb. 22, 2013, (63 pgs.). |
U.S. Appl. No. 13/796,189, entitled, “Performing Complex Arithmetic Functions in a Memory Device,” filed Mar. 12, 2013, (23 pgs.). |
Number | Date | Country
---|---|---|
20220004497 A1 | Jan 2022 | US
Relation | Number | Date | Country
---|---|---|---|
Parent | 16679553 | Nov 2019 | US
Child | 17479853 | | US
Parent | 15081492 | Mar 2016 | US
Child | 16679553 | | US