The present disclosure generally relates to semiconductor memory devices, and more particularly relates to forming wrap-around arrays in dynamic random-access memory (DRAM) semiconductor devices.
An apparatus (e.g., a processor, a memory system, and/or other electronic apparatus) can include one or more semiconductor circuits configured to store and/or process information. For example, the apparatus can include a memory device, such as a volatile memory device, a non-volatile memory device, or a combination device. Memory devices, such as dynamic random-access memory (DRAM), can utilize electrical energy to store and access data.
The drawings illustrate only example embodiments and are therefore not to be considered limiting in scope. The elements and features shown in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the example embodiments. Additionally, certain dimensions or placements may be exaggerated to help visually convey such principles. In the drawings, the same reference numerals used in different embodiments designate like or corresponding, but not necessarily identical, elements.
A semiconductor memory device typically consists of an array of memory cells arranged in rows and columns, each of the memory cells being configured to store a single bit of data. Word lines and bit lines run across the memory array, connecting each row and each column of the memory cells. For example, the word lines can be disposed horizontally across the memory array and configured to select a particular row of memory cells for a data read or write operation. Each word line is connected to a corresponding memory section. The bit lines (also referred to as digit lines) can be disposed vertically down the memory array to read or write the data stored in corresponding memory cells. As memory technology has advanced, the size of memory cells has shrunk to increase the density of the memory array. This has led to shrinking of the word lines and bit lines as well, which has allowed larger or higher-capacity memory arrays to be packed into the same amount of space. Even though advanced photolithography techniques have been developed to overcome reticle field constraints and pack more memory cells into a smaller area, certain form factor constraints in one direction on memory devices are hard to overcome. For example, due to the asymmetrical nature of word line and bit line shrink, standard memory array sizes may exceed dimension constraints in memory devices. That is, conventionally-arranged memory arrays of a particular capacity may fit within an allowed size in one dimension, but exceed an allowed size in another dimension. In addition, semiconductor manufacturers have been raising the number of memory cells along the bit line direction to increase memory device storage, which has led to manufacturing complexity. The shrinking of memory cells along one direction can also degrade memory device electrical performance because the resistance and capacitance of a memory cell may change with the size of the memory cell.
To address the above-described challenges and others, the present technology discloses semiconductor memory devices with wrap-around memory arrays. As described herein, the semiconductor memory devices with wrap-around arrays better utilize space available in certain dimensions (e.g., in the horizontal plane) to achieve a high storage volume within allowable constraints (e.g., in the vertical plane). In the disclosed wrap-around memory array architecture, a memory array is formed from one or more memory mats, where each memory mat includes memory cells arranged in rows/columns and its own word lines and bit lines. In addition, each one of the memory mats is coupled with a dedicated address decoder configured for data read and write operations. The memory array may be formed from multiple (e.g., two or more) memory mats aligned horizontally along the word line direction. Further, the wrap-around memory array may include one or more memory array segments. As described below, each segment represents a logical portion of the memory array. In contrast, each mat represents a physical portion of the memory array. Further, each segment may be physically distributed across multiple memory mats, and each memory mat may be associated with (e.g., contain the data for) multiple segments. Further, the number of memory mats and segments forming the array need not be (but may be) the same. For example, a first memory mat may be associated with a first segment and a second segment, a second memory mat may be associated with the second segment and a third segment, and a third memory mat may be associated with the third segment and a fourth segment, etc. Furthermore, different segments may be associated with each other, such that when performing operations involving one segment, the operation is also performed on another segment (e.g., a read from or write to one segment also causes a read from or write to the associated segment).
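For illustration only, the mat-and-segment association described above can be modeled as a small lookup table. The mat and segment names below are hypothetical placeholders, not part of the disclosed design:

```python
# Hypothetical sketch of the mat-to-segment association described above:
# each physical mat holds (parts of) two logical segments, and a segment
# may span more than one mat. Numbering is illustrative only.
MAT_TO_SEGMENTS = {
    "mat1": ("seg1", "seg2"),
    "mat2": ("seg2", "seg3"),
    "mat3": ("seg3", "seg4"),
}

def mats_for_segment(segment):
    """Return the physical mats that hold data for a logical segment."""
    return [mat for mat, segs in MAT_TO_SEGMENTS.items() if segment in segs]

# seg2 is physically distributed across mat1 and mat2.
assert mats_for_segment("seg2") == ["mat1", "mat2"]
```

The same table also shows why the number of mats and segments need not match: three mats here carry four segments.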
The associated segments may be disposed in different memory mats to prevent data contention. For example, a memory device operation may involve a segment (e.g., the second segment) in the first memory mat as well as an associated segment (e.g., the fourth segment) in the third memory mat. During the memory device operation, word lines of corresponding memory sections disposed in the paired memory array segments can be activated for data read or write operations. Further, the word lines activated for paired segments, contained in different mats, may not be in the same vertical plane (e.g., sections associated with different rows may be activated). In this wrap-around memory array architecture, the number of memory cells included in each memory mat can be similar to or less than that of a traditional memory device design in order to control the manufacturing cost and maintain memory device electrical performance. Moreover, in the wrap-around memory array architecture, the multiple memory mats aligned along a horizontal direction within the memory device can greatly increase the total number of memory cells without violating requirements in the vertical direction, thereby achieving an enhanced memory storage volume for the memory device.
The apparatus 100 may include an array of memory cells, such as memory array 150. The memory array 150 may include a plurality of banks (e.g., banks 0-15), and each bank may include a plurality of word lines (WL), a plurality of bit lines (BL), and a plurality of memory cells arranged at intersections of the word lines and the bit lines. Memory cells can include any one of a number of different memory media types, including capacitive, magnetoresistive, ferroelectric, phase change, or the like. The selection of a word line WL may be performed by a row decoder 140, and the selection of a bit line BL may be performed by a column decoder 145. Sense amplifiers (SAMP) may be provided for corresponding bit lines BL and connected to at least one respective local input/output (IO) line pair (LIOT/B), which may in turn be coupled to at least one respective main IO line pair (MIOT/B), via transfer gates (TG), which can function as switches. The sense amplifiers and transfer gates may be operated based on control signals from decoder circuitry, which may include the command decoder 115, the row decoders 140, the column decoders 145, any control circuitry of the memory array 150, or any combination thereof. The memory array 150 may also include plate lines and corresponding circuitry for managing their operation. Furthermore, the memory array 150 may be a wrap-around memory array, configured according to the wrap-around memory array architecture introduced in the present technology and described herein. For example, the memory array 150 and/or banks of the memory array can include two or more memory mats formed along the word line direction of the memory array 150. The apparatus 100 can further include multiple decoders (e.g., row decoders 140 and/or column decoders 145), where each decoder is associated with, and decodes locations for, a memory mat of the memory array 150.
The different memory mat-associated decoders enable a single operation (e.g., a host request to read data from a memory address) to access different memory mats, and further different sections and/or rows within those memory mats. As described herein, the memory array 150 implemented as a wrap-around memory array enables a high storage volume through utilizing available space in the horizontal plane of a memory die in which the memory array 150 is assembled.
The apparatus 100 may employ a plurality of external terminals that include command and address terminals coupled to a command bus and an address bus to receive command signals (CMD) and address signals (ADDR), respectively. The apparatus 100 may further include a chip select terminal to receive a chip select signal (CS), clock terminals to receive clock signals CK and CKF, data clock terminals to receive data clock signals WCK and WCKF, data terminals DQ, RDQS, DBI, and DMI, and power supply terminals VDD, VSS, and VDDQ.
The command terminals and address terminals may be supplied with an address signal and a bank address signal (not shown in
The command and address terminals may be supplied with command signals (CMD), address signals (ADDR), and chip select signals (CS) from a memory controller and/or a chipset. The command signals may represent various memory commands from the memory controller (e.g., including access commands, which can include read commands and write commands). The chip select signal may be used to select the apparatus 100 to respond to commands and addresses provided to the command and address terminals. When an active chip select signal is provided to the apparatus 100, the commands and addresses can be decoded, and memory operations can be performed. The command signals may be provided as internal command signals ICMD to a command decoder 115 via the command/address input circuit 105. The command decoder 115 may include circuits to decode the internal command signals ICMD to generate various internal signals and commands for performing memory operations, for example, a row command signal to select a word line and a column command signal to select a bit line. The command decoder 115 may further include one or more registers for tracking various counts or values (e.g., counts of refresh commands received by the apparatus 100 or self-refresh operations performed by the apparatus 100).
Read data can be read from memory cells in the memory array 150 designated by row address (e.g., address provided with an active command) and column address (e.g., address provided with the read). The read command may be received by the command decoder 115, which can provide internal commands to input/output circuit 160 so that read data can be output from the data terminals DQ, RDQS, DBI, and DMI via read/write amplifiers 155 and the input/output circuit 160 according to the RDQS clock signals. The read data may be provided at a time defined by read latency information RL that can be programmed in the apparatus 100, for example, in a mode register (not shown in
Write data can be supplied to the data terminals DQ, DBI, and DMI according to the WCK and WCKF clock signals. The write command may be received by the command decoder 115, which can provide internal commands to the input/output circuit 160 so that the write data can be received by data receivers in the input/output circuit 160 and supplied via the input/output circuit 160 and the read/write amplifiers 155 to the memory array 150. The write data may be written in the memory cell designated by the row address and the column address. The write data may be provided to the data terminals at a time that is defined by write latency WL information. The write latency WL information can be programmed in the apparatus 100, for example, in the mode register. The write latency WL information can be defined in terms of clock cycles of the CK clock signal. For example, the write latency information WL can be a number of clock cycles of the CK signal after the write command is received by the apparatus 100 when the associated write data is received.
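The write-latency bookkeeping above can be reduced to a one-line calculation. The WL value below is a hypothetical example, not a value specified by the disclosure:

```python
# Toy sketch of write-latency timing: with a programmed write latency WL,
# the associated write data is expected WL cycles of the CK signal after
# the write command is received. The WL value is illustrative only.
WL = 8  # hypothetical write latency, in CK clock cycles

def write_data_cycle(command_cycle, wl=WL):
    """Cycle at which the device expects the associated write data."""
    return command_cycle + wl

# A write command received at cycle 100 expects its data at cycle 108.
assert write_data_cycle(100) == 108
```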
As described herein, read data requested by a read command, and write data supplied with a write command, may be read from or written to multiple memory mats forming the memory array 150 and/or memory banks. For example, the row address and/or column address supplied with the read or write command may be associated with two or more paired memory array segments, and the two or more paired memory array segments may be disposed in two or more memory mats. The row address and/or column address may be decoded by decoders (e.g., row decoders 140, column decoders 145, and/or memory mat-associated decoders that are not shown) to determine the two or more memory mats, and additionally the section and/or row within each mat, where data should be read from or written to. As described herein, the paired memory array segments are disposed over different memory mats to avoid contention accessing the mats.
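The splitting of one access across mats might be sketched as follows; the even split and the mat names are illustrative assumptions, not details from the disclosure:

```python
# Hypothetical sketch of partitioning one access across two mats: part of
# the data for an address is served by a segment in one mat, and the
# remainder by the paired segment in another mat. An even split is an
# illustrative assumption.
def split_data(data):
    """Partition the data of one access between two paired segments."""
    mid = len(data) // 2
    return {"mat_a": data[:mid], "mat_b": data[mid:]}

parts = split_data(b"\x01\x02\x03\x04")
assert parts == {"mat_a": b"\x01\x02", "mat_b": b"\x03\x04"}
```

Because each half is decoded by a different mat's dedicated decoder, the two halves can be serviced without contending for one mat's sense amplifiers.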
The power supply terminals may be supplied with power supply potentials VDD and VSS. These power supply potentials VDD and VSS can be supplied to an internal voltage generator circuit 170. The internal voltage generator circuit 170 can generate various internal potentials VPP, VOD, VARY, VPERI, and the like, based on the power supply potentials VDD and VSS. The internal potential VPP can be used in the row decoder 140, the internal potentials VOD and VARY can be used in the sense amplifiers included in the memory array 150, and the internal potential VPERI can be used in many other circuit blocks.
The power supply terminal may also be supplied with power supply potential VDDQ. The power supply potential VDDQ can be supplied to the input/output circuit 160 together with the power supply potential VSS. The power supply potential VDDQ can be the same potential as the power supply potential VDD in an embodiment of the present technology. The power supply potential VDDQ can be a different potential from the power supply potential VDD in another embodiment of the present technology. However, the dedicated power supply potential VDDQ can be used for the input/output circuit 160 so that power supply noise generated by the input/output circuit 160 does not propagate to the other circuit blocks.
The clock terminals and data clock terminals may be supplied with external clock signals and complementary external clock signals. The external clock signals CK, CKF, WCK, WCKF can be supplied to a clock input circuit 120. The CK and CKF signals can be complementary, and the WCK and WCKF signals can also be complementary. Complementary clock signals can have opposite clock levels and transition between the opposite clock levels at the same time. For example, when a clock signal is at a low clock level a complementary clock signal is at a high level, and when the clock signal is at a high clock level the complementary clock signal is at a low clock level. Moreover, when the clock signal transitions from the low clock level to the high clock level the complementary clock signal transitions from the high clock level to the low clock level, and when the clock signal transitions from the high clock level to the low clock level the complementary clock signal transitions from the low clock level to the high clock level.
Input buffers included in the clock input circuit 120 can receive the external clock signals. For example, when enabled by a clock enable signal from the command decoder 115, an input buffer can receive the external clock signals. The clock input circuit 120 can receive the external clock signals to generate internal clock signals ICLK. The internal clock signals ICLK can be supplied to an internal clock circuit 130. The internal clock circuit 130 can provide various phase and frequency controlled internal clock signals based on the received internal clock signals ICLK and a clock enable (not shown in
The apparatus 100 can be connected to any one of a number of electronic devices capable of utilizing memory for the temporary or persistent storage of information, or a component thereof. For example, a host device of apparatus 100 may be a computing device, such as a desktop or portable computer, a server, a hand-held device (e.g., a mobile phone, a tablet, a digital reader, a digital media player), or some component thereof (e.g., a central processing unit, a co-processor, a dedicated memory controller, etc.). The host device may be a networking device (e.g., a switch, a router, etc.) or a recorder of digital images, audio and/or video, a vehicle, an appliance, a toy, or any one of a number of other products. In one embodiment, the host device may be connected directly to apparatus 100; although in other embodiments, the host device may be indirectly connected to a memory device (e.g., over a networked connection or through intermediary devices).
The main memory 202 includes a plurality of memory units 220, which each include a plurality of memory cells. The memory units 220 can be individual memory dies, memory planes in a single memory die, a stack of memory dies vertically connected with through-silicon vias (TSVs), or the like. For example, in one embodiment, each of the memory units 220 can be formed from a semiconductor die and arranged with other memory unit dies in a single device package. In other embodiments, multiple memory units 220 can be co-located on a single die and/or distributed across multiple device packages. The memory units 220 may, in some embodiments, also be sub-divided into memory regions 228 (e.g., banks, ranks, channels, blocks, pages, etc.). The memory units 220 may include the wrap-around memory array architecture introduced in the present technology. For example, the memory units 220 may include two or more memory mats that are horizontally aligned along a word line direction to achieve a higher storage volume in the memory device 200.
The memory cells can include, for example, floating gate, charge trap, phase change, capacitive, ferroelectric, magnetoresistive, and/or other suitable storage elements configured to store data persistently or semi-persistently. The main memory 202 and/or the individual memory units 220 can also include other circuit components, such as multiplexers, decoders, buffers, read/write drivers, address registers, data out/data in registers, etc., for accessing and/or programming (e.g., writing) the memory cells and other functions, such as for processing information and/or communicating with the control circuitry 206 or the host device 208. Although shown in the illustrated embodiments with a certain number of memory cells, rows, columns, regions, and memory units for purposes of illustration, the number of memory cells, rows, columns, regions, and memory units can vary, and can, in other embodiments, be larger or smaller in scale than shown in the illustrated examples. For example, in some embodiments, the memory device 200 can include only one memory unit 220. Alternatively, the memory device 200 can include two, three, four, eight, ten, or more (e.g., 16, 32, 64, or more) memory units 220. Although the memory units 220 are shown in
In one embodiment, the control circuitry 206 can be provided on the same die as the main memory 202 (e.g., including command/address/clock input circuitry, decoders, voltage and timing generators, IO circuitry, etc.). In another embodiment, the control circuitry 206 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), control circuitry on a memory die, etc.), or other suitable processor. In one embodiment, the control circuitry 206 can include a processor configured to execute instructions stored in memory to perform various processes, logic flows, and routines for controlling operation of the memory device 200, including managing the main memory 202 and handling communications between the memory device 200 and the host device 208. In some embodiments, the control circuitry 206 can include embedded memory with memory registers for storing data (e.g., memory addresses, row counters, bank counters, memory pointers, fetched data, etc.). In another embodiment of the present technology, a memory device 200 may not include control circuitry, and may instead rely upon external control (e.g., provided by the host device 208, or by a processor or controller separate from the memory device 200).
The host device 208 can be any one of a number of electronic devices capable of utilizing memory for the temporary or persistent storage of information, or a component thereof. For example, the host device 208 may be a computing device, such as a desktop or portable computer, a server, a hand-held device (e.g., a mobile phone, a tablet, a digital reader, a digital media player), or some component thereof (e.g., a central processing unit, a co-processor, a dedicated memory controller, etc.). The host device 208 may be a networking device (e.g., a switch, a router, etc.) or a recorder of digital images, audio and/or video, a vehicle, an appliance, a toy, or any one of a number of other products. In one embodiment, the host device 208 may be connected directly to memory device 200, although in other embodiments, the host device 208 may be indirectly connected to memory device 200 (e.g., over a networked connection or through intermediary devices).
In operation, the control circuitry 206 can directly write or otherwise program (e.g., erase) the various memory regions of the main memory 202. The control circuitry 206 communicates with the host device 208 over a host device bus or interface 210. In some embodiments, the host device 208 and the control circuitry 206 can communicate over a dedicated memory bus (e.g., a DRAM bus). In other embodiments, the host device 208 and the control circuitry 206 can communicate over a serial interface, such as a serial attached SCSI (SAS), a serial AT attachment (SATA) interface, a peripheral component interconnect express (PCIe), or other suitable interface (e.g., a parallel interface). The host device 208 can send various requests (in the form of, e.g., a packet or stream of packets) to the control circuitry 206. A request can include a command to read, write, erase, return information, and/or to perform a particular operation (e.g., a refresh operation, a TRIM operation, a precharge operation, an activate operation, a wear-leveling operation, a garbage collection operation, etc.).
To further increase storage volume, memory arrays may grow by adding additional sections (e.g., rows). Typically the additional sections grow the memory array in the vertical (e.g., digit line) direction.
As shown, the memory array 300a may include two memory mats 310a and 320a that are aligned on a left side and a right side of the memory array 300a. Additionally, the memory array 300a can include a global column redundancy (GCR) 306a and a row address decoder (XDEC) 308a that are shared by the memory mats 310a and 320a. In this example, the memory mat 310a and memory mat 320a can be co-located (e.g., on a single die) to improve storage density and/or capacity of the memory array 300a. As shown, each one of the memory mats 310a and 320a may include a number of memory sections that are aligned along a digit line (bit line) direction. As shown, each of the memory mat 310a and memory mat 320a may include 13 memory sections. However, as described further below, some memory sections (e.g., on the edges of the memory mats 310a and 320a) may provide less (e.g., half of the) storage than other sections. For example, in the embodiment illustrated in
In the memory array 300a, multiple memory sections (e.g., the memory sections 0 to 11 included in the memory mats 310a and 320a) can be vertically arranged along the digit line direction using advanced semiconductor fabrication and packaging techniques to achieve a high storage density. The memory sections may be identical to one another, and each may include a set of memory planes containing multiple rows and columns of memory cells. Depending on the designed size of the memory device, each of the memory sections may include one or more memory planes. For example, each one of the memory sections included in memory mats 310a and 320a may include eight memory planes. These memory planes can be organized in a two-dimensional arrangement, with each memory plane interconnected.
In the example memory arrays 300a and 300b, sense amplifiers can be disposed at the boundaries between the memory sections. Each memory section connects to sense amplifiers disposed above and/or below to output data. For example, non-edge sections (e.g., sections 1-11 in the embodiment illustrated in
In memory devices, the sense amplifier can be utilized to amplify the small data signal stored in a memory cell and convert it into a stronger signal which can be used by other parts of the memory device or system. When a particular row or column of memory cells is selected for data reading or writing, the corresponding sense amplifiers, including those in the edge memory sections, can be activated to amplify the signals from the selected memory cells. In some other examples, the sense amplifiers may also be configured to periodically refresh the charge in the capacitors to prevent data loss due to charge leakage.
The yield of memory array 300a may be improved by replacing malfunctioning memory columns in the memory array with one or more redundant memory columns included in the GCR 306a. The GCR 306a can be disposed in between the memory mats 310a and 320a to provide extra columns for data storage. The memory columns included in GCR 306a can be connected to sense amplifiers of memory sections of memory mats 310a and 320a via spare column access lines disposed therein. A memory controller (e.g., the memory control circuitry 206 described in
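The column-repair behavior described above can be sketched as a remapping table. The table contents, numbering, and function names are illustrative assumptions, not the disclosed repair mechanism:

```python
# Hypothetical sketch of global column redundancy (GCR) repair: accesses
# to columns found faulty at test are steered to spare columns provided
# by the GCR region. Numbering is illustrative only.
REPAIR_TABLE = {}   # faulty logical column -> spare GCR column
_next_spare = 0

def repair_column(faulty_col):
    """Record a repair, assigning the next available spare GCR column."""
    global _next_spare
    REPAIR_TABLE[faulty_col] = ("gcr", _next_spare)
    _next_spare += 1

def resolve_column(col):
    """Physical column actually accessed for a logical column address."""
    return REPAIR_TABLE.get(col, ("array", col))

repair_column(37)                        # column 37 marked faulty at test
assert resolve_column(37) == ("gcr", 0)  # now served from a spare column
assert resolve_column(38) == ("array", 38)
```

Unrepaired columns pass through unchanged, so the redundancy is transparent to normal read/write traffic.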
In this example, the row address decoder 308a can be configured to select a specific row of memory cells in the memory array 300a. Particularly, the row address decoder 308a may receive a row address and then activate the corresponding section of memory cells disposed in the symmetrically aligned memory mats 310a and 320a (e.g., the same vertically-aligned sections are activated in both memory mats, such as activating section 5 in memory mat 310a and section 5 in memory mat 320a). When a memory location is addressed, the column address can be provided to the command decoder (e.g., the command decoder 115 described in
In the symmetrically aligned memory mats 310a and 320a of the memory array 300a, memory sections of memory mat 310a can be paired with respective memory sections of memory mat 320a. For example, memory sections having the same ranking numbers in the memory mats 310a and 320a may share one or more word lines to activate the transistors included in the memory cells of those sections for data read and/or write operations. In this example, each word line of the memory array 300a may be arranged in a row, connecting memory cells of the paired memory sections in the memory mats 310a and 320a. During a memory read operation, a specific word line may be activated (e.g., firing a pair of memory sections, such as sections 6 in both memory mats 310a and 320a) to turn on the transistors of the memory cells included in the pair of memory sections, connecting the corresponding capacitors to the bit lines. Similarly, during a memory write operation, a specific word line can be activated (e.g., firing a pair of memory sections such as sections 4 of the memory mats 310a and 320a) to apply a voltage that corresponds to the data to be written. As illustrated in
In conventionally-arranged memory arrays (e.g., memory arrays that do not implement the wrap-around architecture described herein), memory capacity may be increased by adding additional memory sections to the memory array. For example,
There are challenges, however, with increasing memory capacity using conventional architectures depicted in
Accordingly,
Moreover, the wrap-around memory array of the present disclosure may include a plurality of memory array segments, each one of the memory array segments having a same second number of memory sections. In some embodiments, the second number of memory sections (associated with each memory array segment) may be smaller than the first number of memory sections (associated with each memory mat). That is, in some embodiments, the capacity of a memory array segment (as a function of the number of associated memory sections) is less than the capacity of a memory mat (as a function of the number of associated memory sections). In some embodiments, the memory array segments may have a number of memory sections greater than or equal to that of the memory mats. Furthermore, memory array segments may be allocated across different memory mats, and/or each of the memory mats may include more than one memory array segment. For example, the memory array 400 can include four memory array segments, including a first memory array segment 422a (illustrated in
In this example, each one of the memory mats of the memory array 400 may include more than one memory array segment. For example, the first memory mat 410a on the left may include the first memory array segment 422a and a portion of the second memory array segment 422b. The second memory mat 410b in the middle may include a portion of the second memory array segment 422b and a portion of the third memory array segment 422c. The third memory mat 410c on the right may include a portion of the third memory array segment 422c and the fourth memory array segment 422d. In other embodiments, not illustrated in
In this example, each of the memory mats may include a top edge memory section and a bottom edge memory section for data operations. As described above, edge sections may have less usable capacity than non-edge sections because they couple to fewer sense amplifiers. For example, the first memory mat 410a may include a top edge memory section and a bottom edge memory section, each having half the storage size of the non-edge memory sections and marked as memory section 0 of the first memory array segment 422a. Similarly, the second memory mat 410b may include a top edge memory section and a bottom edge memory section, each having half the storage size of the non-edge memory sections and marked as memory section 3 of the second memory array segment 422b. The third memory mat 410c may include a top edge memory section and a bottom edge memory section, each having half the storage size of the non-edge memory sections and marked as memory section 6 of the third memory array segment 422c. In the example memory array 400, sense amplifiers can be disposed at the boundaries between the memory sections.
The memory device of the present technology may include row address decoders, each one of the row address decoders being dedicated to one of the multiple memory mats. For example, the memory array 400 may include row address decoder (XDEC) 404a, row address decoder 404b, and row address decoder 404c that are coupled with the first memory mat 410a, the second memory mat 410b, and the third memory mat 410c, respectively. Similar to the row decoders 140 described in
The memory device of the present technology may also include one or more global column redundancies (GCRs). For example, the memory array 400 shown in
In the present technology, the memory array segments included in memory array 400 may be paired and/or associated with other memory array segments when performing data read and/or data write operations. That is, and as described further herein, when reading data from or writing data to a section associated with one memory array segment, the same operation may be performed on another section associated with a paired memory array segment. For example, in response to a read request from a host, some of the data may be read from a section in one memory array segment, and the remainder of the data read from a section in the paired memory array segment. Similarly, write data from a host may be partitioned and written to sections of two paired memory array segments. Furthermore, the sections of the paired memory array segments may be physically disposed in different memory mats of the memory device. In some embodiments, the memory array 400 is configured so that no two paired memory array segments are disposed in the same memory mat. That is, in some embodiments, a memory array segment and its pair are always in different memory mats. In some embodiments, the same section (identified by a section number) of two paired memory array segments may be utilized for an operation, but the sections may be disposed at different physical locations in the vertical dimension of the corresponding memory mats. For example, when activating a word line of a memory section having a specific section number in one memory array segment disposed in a first memory mat, the word line of a memory section having the same section number in the paired memory array segment disposed in a second memory mat may also be activated. In the foregoing example, the two word lines may be disposed at different locations in the vertical dimension.
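The constraint that paired segments never share a memory mat can be checked mechanically. The layout and pairing tables below are illustrative assumptions based on the example of memory array 400; the disclosure does not specify this particular pairing:

```python
# Hypothetical sketch of the no-contention constraint on segment pairing:
# a segment and its paired segment must not be served by the same memory
# mat. The layout and pairing below are illustrative assumptions.
SEGMENT_MATS = {              # segment -> mats holding part of it
    "422a": {"410a"},
    "422b": {"410a", "410b"},
    "422c": {"410b", "410c"},
    "422d": {"410c"},
}
PAIRS = [("422a", "422c"), ("422b", "422d")]

def contention_free(pairs=PAIRS, layout=SEGMENT_MATS):
    """True if no paired segments share any one mat's sense amplifiers."""
    return all(layout[a].isdisjoint(layout[b]) for a, b in pairs)

assert contention_free()
```

A pairing that put two segments in a common mat (e.g., pairing 422b with 422c, which share mat 410b in this sketch) would fail this check and could serialize the two halves of an access.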
As described herein, same-numbered sections of the paired memory array segments may be disposed in different memory mats of the memory device to prevent data contention, e.g., data signal read/write through sense amplifiers of a same memory mat, thereby facilitating performing operations on the paired segments in parallel.
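The pairing rule described above can be sketched as follows. This is a minimal illustrative model, not from the disclosure: the segment names mirror the reference numerals used in the examples, and the pairing table is an assumption for the sketch.

```python
# A minimal sketch (assumed data model) of the pairing rule: activating a
# section in one memory array segment also activates the same-numbered
# section in its paired segment, which resides in a different memory mat.

# Hypothetical pairing for the four segments of memory array 400:
# 422a <-> 422c and 422b <-> 422d.
SEGMENT_PAIR = {"422a": "422c", "422c": "422a", "422b": "422d", "422d": "422b"}

def sections_to_activate(segment, section):
    """Return the (segment, section) pairs activated together for one access."""
    return [(segment, section), (SEGMENT_PAIR[segment], section)]

# Accessing section #6 of segment 422a also activates section #6 of 422c.
print(sections_to_activate("422a", 6))  # [('422a', 6), ('422c', 6)]
```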
For example,
The following are examples of utilizing memory sections disposed in paired memory array segments (which may be disposed in different memory mats) to satisfy a read operation and/or write operation requested by a host. The following are merely examples illustrating the embodiment of the memory array 400 depicted in
In one example, when activating a word line of one memory section of the first memory array segment 422a, the word line of a corresponding (e.g., the same-numbered) memory section of the third memory array segment 422c can be activated as well (e.g., the first memory array segment and third memory array segment are paired). For example, when a data read or write operation identifies, through the row address decoder 404a decoding an address associated with the operation, that a portion of the data is to be read from or written to the memory section #6 of the first memory array segment 422a, the memory section #6 of the third memory array segment 422c will be identified as well, through the row address decoder 404c decoding the address associated with the operation, as responsible for another portion of the data to be read or written. The word lines of memory section #6 of the first memory array segment 422a and the word lines of memory section #6 of the third memory array segment 422c will be activated simultaneously for the data read or write operations. Here, a portion of the data can be output from or input into the memory section #6 of the first memory array segment 422a through the first memory mat 410a on the left (i.e., through sense amplifiers included in the top and bottom boundary of memory section #6 of the first memory array segment 422a). Moreover, another portion of the data can also be output from or input into the edge memory sections #6 of the third memory array segment 422c through the third memory mat 410c on the right (e.g., through sense amplifiers disposed along the bottom boundary of the top edge section and disposed along the top boundary of the bottom edge section). In this example, the data input and/or output on the first pair of memory array segments 422a and 422c can be allocated to different memory mats to prevent delays in data transfer and performance reduction caused by data contention.
That is, and as described herein, portions of the data can be read from or written to the two different memory mats at the same time, thereby performing the partial operations of the memory operation in parallel.
In another example, when a data read or write operation identifies, through the row address decoder 404c, an address relating to the memory section #4 of the fourth memory array segment 422d, the memory section #4 of the second memory array segment 422b will be identified as well through the row address decoder 404b. The word lines of memory section #4 of the fourth memory array segment 422d and the word lines of memory section #4 of the second memory array segment 422b can be activated simultaneously for the memory device operations. Similar to the first pair of memory array segments, data contention can be avoided in the second pair of memory array segments by allocating data input/output to different memory mats. Here, a portion of the data can be output from or input into the memory section #4 of the fourth memory array segment 422d through the third memory mat 410c on the right (i.e., through sense amplifiers disposed at the top and bottom edge memory sections #4 of the fourth memory array segment 422d). Moreover, another portion of the data can also be output from or input into the memory section #4 of the second memory array segment 422b through the second memory mat in the middle (i.e., through sense amplifiers disposed along the bottom boundary of the top edge section 415b and disposed along the top boundary of the bottom edge section 415a).
In the present technology, memory sections associated with different memory array segments, and disposed in at least two memory mats, can be accessed together for data read and/or write operations. As described above, in the memory array 400, a memory data read/write operation may access a pair of memory array segments that are disposed in two different memory mats. For example, the first pair of memory array segments 422a and 422c can be accessed simultaneously through the first memory mat 410a on the left and the second memory mat 410b in the middle (e.g., for row addresses relating to memory sections #0-#5 of corresponding memory array segments). The first pair of memory array segments 422a and 422c can also be accessed simultaneously through the first memory mat 410a on the left and the third memory mat 410c on the right (e.g., for row addresses relating to memory sections #6-#7 of corresponding memory array segments). In another example, the second pair of memory array segments 422b and 422d can be accessed simultaneously through the third memory mat 410c on the right and the second memory mat 410b in the middle (e.g., for row addresses relating to memory sections #3-#7 of corresponding memory array segments). The second pair of memory array segments 422b and 422d can also be accessed simultaneously through the third memory mat 410c on the right and the first memory mat 410a on the left (e.g., for row addresses relating to memory sections #0-#2 of corresponding memory array segments). In other words, although the memory array 400 may be configured such that certain memory array segments are paired (e.g., memory array segments 422a and 422c, and memory array segments 422b and 422d), the memory mats used to perform operations involving the paired memory array segments (e.g., the memory mats from which read data is provided or to which write data is written) may change depending on which memory sections of the memory array segments are being accessed.
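The section-range-to-mat mapping just described can be captured in a short sketch. The function below follows the example ranges given for memory array 400; the string names and the exact ranges are taken from the examples above, but the overall data model is an illustrative assumption.

```python
# Illustrative sketch: which two memory mats serve an access, as a function
# of the paired memory array segments and the section number, following the
# example ranges given for memory array 400.

def mats_for_access(pair, section):
    """Return the two memory mats used for a given segment pair and section.

    `pair` is "422a/422c" or "422b/422d"; `section` is 0-7. Pair 422a/422c
    uses mats 410a+410b for sections #0-#5 and 410a+410c for #6-#7; pair
    422b/422d uses 410c+410b for sections #3-#7 and 410c+410a for #0-#2.
    """
    if pair == "422a/422c":
        return ("410a", "410b") if section <= 5 else ("410a", "410c")
    if pair == "422b/422d":
        return ("410c", "410b") if section >= 3 else ("410c", "410a")
    raise ValueError("unknown segment pair")

print(mats_for_access("422a/422c", 6))  # ('410a', '410c')
print(mats_for_access("422b/422d", 1))  # ('410c', '410a')
```

Note that in every case the two returned mats differ, which is the property that prevents data contention on a single mat's sense amplifiers.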
The above are illustrative examples of the operation of the row address decoders 404a, 404b, and 404c based on a particular distribution of numbered sections in memory array segments that are distributed across different memory mats. It will be appreciated that the row address decoders 404a, 404b, and 404c can be implemented differently, and can resolve input addresses to different word lines (corresponding to different section numbers) for different arrangements. For example, row address decoders 404a, 404b, and 404c can be implemented differently depending on how the memory array segments are disposed among the different memory mats. That is, in the example of
The memory array 510 may also include a plurality of row address decoders that are coupled to the plurality of memory mats. For example, row address decoders 512a, 512b, and 512c can be disposed on a same die with the memory mats and electrically connected to the memory mats 510a, 510b, and 510c, respectively. In some other examples, the plurality of row address decoders can be assembled separately from the memory array 510, e.g., being included in an electrical device electrically connected to the memory array 510. Each one of the plurality of row address decoders can be configured to receive control signals or commands, and decode the row addresses of memory locations that need to be accessed in the corresponding memory mat. In addition, the memory array 510 may include one or more redundancy columns (not shown) that are coupled to the memory mats and configured to replace malfunctioning memory columns included in the memory array 510.
In embodiments of the disclosed technology, each one of the memory mats 510a, 510b, and 510c can include one or more memory array segments, and each memory array segment may include a same number of memory sections that are stacked along a bit line extending direction. For example, the memory array 510 may include a group of four memory array segments, similar to that of the memory array 400, respectively in its memory mats 510a, 510b, and 510c. The memory array segments included in the memory array 510 can be paired and disposed in different memory mats, to ensure that data read and/or write operations can go through different memory mats to prevent data contention.
As shown in
In some embodiments, the mat decoder 520 decodes additional information based on a received address, instead of or in addition to an indication of the memory mats to be accessed. For example, the mat decoder 520 may determine which memory array segments a received address falls in. Further, the mat decoder 520 may determine the section number, within the determined memory array segments, associated with the received address. Each of the row address decoders 512a, 512b, and 512c may receive that information from the mat decoder 520 (e.g., the memory array segments and memory section), based on which it decodes the physical location within the corresponding memory mat that is associated with the memory array segments and memory section.
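As a concrete (and purely hypothetical) illustration of this kind of decode, the sketch below extracts a segment-pair index and a section number from a row address. The bit-field widths are invented for the example; the disclosure does not specify an encoding.

```python
# Hypothetical bit-field decode: the mat decoder extracts a segment-pair
# index and a section number from the row address, then forwards both to
# every row address decoder. Field widths here are assumptions.

NUM_SECTIONS = 8  # sections per segment, as in the examples

def mat_decode(row_address):
    section = row_address % NUM_SECTIONS      # low bits: section number
    pair = (row_address // NUM_SECTIONS) % 2  # next bit: which segment pair
    return {"pair": pair, "section": section}

print(mat_decode(0b10110))  # {'pair': 0, 'section': 6}
```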
The peripheral circuits included in the memory system 500 may also include a plurality of data sense amplifiers. For example, data sense amplifiers 514a, 514b, and 514c can be respectively coupled to the memory mats 510a, 510b, and 510c, as shown in
The peripheral circuits included in the memory system 500 may further include a plurality of multiplexers for data read and/or write operations. For example, a first multiplexer 516a and a second multiplexer 516b can be included in the memory system 500 as shown in
In the memory system 500, data transferred through the data bus 518a can be read from or written to the memory cells in the memory mats 510a and 510b. Additionally, data transferred through the data bus 518b can be read from or written to the memory cells included in the memory mats 510b and 510c. As shown in
In addition to the multiplexers 516a and 516b illustrated in
In a data read operation, the memory system 500 may receive a read command from a host and/or memory controller. The data address can be decoded by the mat decoder 520. As described above, decoded address information (e.g., the data address itself, and/or memory array segment and memory section information from the mat decoder 520) can be further transmitted to the row address decoders 512a, 512b, and 512c. Each one of the row decoders can decode the information provided by the mat decoder 520 and select the row of memory cells to be read. In some embodiments, each one of the row address decoders may further determine whether the row address is related to memory array segments included in the memory mat that the row address decoder is coupled to. If the decoded row address matches memory cells or memory sections of a memory array segment included therein, the row address decoder can further activate the corresponding word line for voltage signal output.
For example, in one data read operation, a memory read command and address from a host can be sent to the memory system 500. The mat decoder 520 may decode the address and determine that the read data is stored in a memory section #6 of the first memory array segment and the memory section #6 of a third memory array segment of the memory array 510. Here, the first and third memory array segment can be disposed in different memory mats, e.g., the first memory array segment being disposed in memory mat 510a and the third memory array segment being disposed in the memory mats 510b and 510c. In some embodiments, the mat decoder 520 may determine the memory mats in which the identified memory array segments and memory section numbers are disposed.
Once the memory mats are identified during the data read operation, the data sense amplifiers and multiplexers can be enabled correspondingly. For example, once the mat decoder 520 and/or row address decoders identify corresponding memory mats (e.g., the memory mat 510a and the memory mat 510c for the data read operation), the data sense amplifier 514a and data sense amplifier 514c will be enabled to process and transmit output data from the memory mat 510a and the memory mat 510c for the data read operation, respectively. In addition, both of the first multiplexer 516a and the second multiplexer 516b can be configured to route data signals from the activated data sense amplifiers and out of the memory system 500 (e.g., to DQ).
In some embodiments, some of the row decoders will decode even though the corresponding memory mat does not contain the memory sections indicated by the mat decoder 520. In this case, the data sense amplifier coupled to that memory mat (e.g., data sense amplifier 514b connected to memory mat 510b) will not be enabled, and the multiplexers 516a and 516b will select data from the other memory mats (e.g., the memory mats 510a and 510c). In other words, the decode performed by certain of the row address decoders 512a, 512b, and 512c may be a "don't care" when the corresponding memory mat output will not be selected.
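The per-mat selection logic on the read path can be sketched as follows. This is an assumed model: only the data sense amplifiers of the mats that actually hold the addressed sections are enabled, which is what makes the decode of an uninvolved row address decoder a don't-care.

```python
# Sketch of the read-path selection logic (assumed structure): every row
# address decoder decodes in parallel, but only the DSAs of the mats that
# hold the addressed sections are enabled.

def select_read_path(target_mats, all_mats=("510a", "510b", "510c")):
    """Return per-mat DSA enable flags; unselected mats stay disabled."""
    return {mat: (mat in target_mats) for mat in all_mats}

enables = select_read_path({"510a", "510c"})
print(enables)  # {'510a': True, '510b': False, '510c': True}
```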
In a data write operation, the memory system 500 may receive a data write command, write data, and data address from the memory controller. The input data address can be decoded by the mat decoder 520 and transmitted to the row address decoders 512a, 512b, and 512c. In some embodiments, based on input data address information contained in the data input command (e.g., MSB+1 bit or MSB+2 bit data on the data address command), the mat decoder 520 can identify the multiple memory mats to which the write data is to be written (e.g., a portion of the write data in a first memory mat, another portion of the write data in a second memory mat). In some embodiments, each one of the row address decoders identifies whether its coupled memory mat matches the decoded data address information and proceeds accordingly. The data sense amplifiers and demultiplexers included in the memory system 500 can also be configured based on the input data address and/or decoded information (e.g., from the mat decoder 520 and/or row address decoders 512a, 512b, and 512c).
For example, in one data write operation, a memory write command can be sent to the memory system 500, indicating to write data to memory cells of the memory sections #4 of a fourth memory array segment and a second memory array segment. The second and fourth memory array segments are paired in this example for data read and write operations. Further, the memory section #4 of the fourth memory array segment can be disposed in the memory mat 510c and the memory section #4 of the second memory array segment can be disposed in the memory mat 510b. Here, the data input command may contain the memory mat selection information (e.g., the MSB+1 bit or MSB+2 bit of the input data address), which can be decoded by the row address decoders 512a, 512b, and 512c. In this example, the row address decoders 512b and 512c will select memory mats 510b and 510c for the data write operation, and the memory mat 510a will be unselected.
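The mat selection from the upper address bits can be illustrated with a short sketch. The disclosure only mentions that the MSB+1 / MSB+2 bits carry mat selection information; the field position and the selector-to-mats encoding below are assumptions made for the example.

```python
# Hypothetical illustration of mat selection from the upper address bits.
# The exact field position and encoding are assumptions for this sketch.

def mats_from_upper_bits(address, width=8):
    """Extract a 2-bit mat-pair selector just below the MSB of the address."""
    selector = (address >> (width - 3)) & 0b11
    # Assumed encoding: selector value -> the two memory mats to access.
    table = {0: ("510a", "510b"), 1: ("510a", "510c"),
             2: ("510b", "510c"), 3: ("510b", "510c")}
    return table[selector]

print(mats_from_upper_bits(0b10100000))  # ('510a', '510c')
```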
In some embodiments, the mat decoder 520 can be configured to determine, from the received address, two or more memory array segments (e.g., the first memory array segment and the third memory array segment) that the data is stored in, and further the memory section within the segments that the data is stored in. The memory array segment information and memory section information can then be transmitted to corresponding row decoders (e.g., the row address decoder 512a corresponding to the first memory array segment, and the row address decoder 512c corresponding to the third memory array segment). Based on the transmitted information, the row decoders select memory rows and word lines for the corresponding memory mat. In this example, input data can be transmitted from the multiplexers 516a and 516b to corresponding memory sections in memory mats 510a and 510c, respectively.
In some embodiments, some of the row decoders will decode even though the corresponding memory mat does not contain the memory sections indicated by the mat decoder 520. In this case, the data sense amplifier coupled to that memory mat (e.g., data sense amplifier 514b connected to memory mat 510b) will not be enabled, and the multiplexers 516a and 516b will transmit data to the other memory mats (e.g., the memory mats 510a and 510c).
In the present technology, the mat decoder 520 can be configured to decode, based on a received command and address, the pair of memory array segments for the data input/output operations. Further, the mat decoder 520 can determine the memory sections of the above-noted memory array segments. In accordance with the decoded memory array segment and memory section information, the mat decoder 520 can further control (e.g., enable or disable) the data sense amplifiers, multiplexers, and demultiplexers connected to the memory array 510. Individual row decoders coupled to each one of the memory mats can decode the address transmitted from the mat decoder 520 and fire the corresponding word lines for data read or write operations.
In addition, the method 600 may include decoding, by the mat decoder, the received data read command to identify two or more memory array segments and memory sections within the identified memory array segments that data is stored in, at step 604. For example, upon receiving the memory read command, the mat decoder can decode the command and embedded address information. Moreover, the mat decoder can identify the memory array segment information and the memory sections included in the identified memory array segments, based on the received memory read command.
Further, the method 600 may include transmitting the decoded memory array segment information and memory section information to each one of a plurality of row address decoders, the plurality of row address decoders being respectively coupled to a plurality of memory mats included in the semiconductor device, at step 606. For example, the mat decoder can transmit the decoded memory array segment information and memory section information to multiple row decoders, each of which is associated with a memory mat.
The method 600 may also include enabling, by the mat decoder and based on the decoded memory array segment information and memory section information, one or more data sense amplifiers (DSAs) of a plurality of DSAs that are respectively connected to the plurality of memory mats, at step 608. For example, based on the decoded memory array segment and memory section information, the mat decoder can enable one or more of multiple DSAs, each of which is associated with a memory mat, for the memory read operation.
Moreover, the method 600 may include controlling, by the mat decoder and based on the decoded memory array segment information and memory section information, one or more multiplexers that are connected to corresponding DSAs, at step 610. For example, based on the decoded memory array segment and memory section information, the mat decoder can also control each of multiple multiplexers that receive the DSA outputs as inputs, and that couple to an output bus (e.g., DQ or a portion thereof), for the memory read operation.
The method 600 may also include activating WLs of identified memory sections of the memory array segments, at step 612. For example, WLs connected to the identified memory sections of corresponding memory array segments can be activated.
Lastly, the method 600 may include outputting data signals from identified memory sections to one or more data buses, the output data signals transmitting through enabled DSAs and one or more enabled multiplexers, at step 614. For example, the activated WLs can transfer data stored in identified memory sections out of the memory array. The memory array data output will also go through the enabled DSAs and multiplexers.
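The steps of method 600 can be traced end to end with a small executable sketch. The decode rules and data layout below are assumptions made for illustration (segment names and the toy array model are invented); only the step ordering follows the method.

```python
# Minimal executable trace of the read flow of method 600 (steps 604-614),
# under an assumed decode rule and toy data layout.

def read_flow(address, array):
    trace = []
    section = address % 8                     # step 604: decode section number
    pair = ("seg0", "seg2") if (address // 8) % 2 == 0 else ("seg1", "seg3")
    trace.append(("decode", pair, section))   # step 606: info sent to row decoders
    trace.append(("enable_dsa", pair))        # step 608: enable the involved DSAs
    trace.append(("set_mux", pair))           # step 610: route DSA outputs to bus
    trace.append(("activate_wl", section))    # step 612: activate word lines
    data = tuple(array[seg][section] for seg in pair)   # step 614: output data
    return data, trace

# Toy array contents: array[segment][section]
array = {s: {n: f"{s}:{n}" for n in range(8)} for s in ("seg0", "seg1", "seg2", "seg3")}
data, trace = read_flow(22, array)
print(data)  # ('seg0:6', 'seg2:6')
```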
In addition, the method 700 may include decoding, by the mat decoder, the received data write command to identify two or more memory array segments and memory sections within the identified memory array segments, at step 704. For example, upon receiving the memory write command, the mat decoder can decode the command and embedded address information. Moreover, the mat decoder can identify the memory array segment information and the memory sections included in the identified memory array segments, based on the received memory write command.
Further, the method 700 may include transmitting the decoded memory array segment information and memory section information to each one of the plurality of row address decoders, at step 706. For example, the mat decoder 520 can transmit the decoded memory array segment information and memory section information to multiple row decoders, each of which is associated with a memory mat.
The method 700 may also include controlling, by the mat decoder and based on the decoded memory array segment information and memory section information, one or more demultiplexers, at step 708. For example, based on the decoded memory array segment and memory section information, the mat decoder can also control each of multiple demultiplexers that receive write data (e.g., from a DQ or a portion thereof) as inputs and that couple to the multiple memory mats, for the memory write operation.
In addition, the method 700 may include activating WLs of identified memory sections of the memory array segments, at step 710. For example, WLs connected to the identified memory sections of corresponding memory array segments can be activated.
Lastly, the method 700 may include inputting data from one or more data buses to the identified memory sections, the input data transmitting through the enabled DSAs and one or more controlled demultiplexers, at step 712. The data input will go through the enabled DSAs and the controlled demultiplexers.
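A matching executable trace of the write flow of method 700 is sketched below, under the same assumed decode rule as the read sketch: the write data is split between the two paired segments. Segment names and the toy array model are invented for illustration.

```python
# Minimal executable trace of the write flow of method 700 (steps 704-712),
# under an assumed decode rule: write data is split across the paired segments.

def write_flow(address, data_halves, array):
    section = address % 8                                 # step 704: decode
    pair = ("seg0", "seg2") if (address // 8) % 2 == 0 else ("seg1", "seg3")
    # steps 706-710: info reaches the row decoders, demultiplexers are set,
    # and the word lines of both paired sections are fired.
    for seg, half in zip(pair, data_halves):
        array[seg][section] = half                        # step 712: store data
    return pair, section

# Toy array, initially empty: array[segment][section]
array = {s: {n: None for n in range(8)} for s in ("seg0", "seg1", "seg2", "seg3")}
pair, section = write_flow(12, ("lo", "hi"), array)
print(pair, section, array["seg1"][4], array["seg3"][4])
```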
Any one of the semiconductor structures described above with reference to
Specific details of several embodiments of semiconductor devices, and associated systems and methods, are described below. A person skilled in the relevant art will recognize that suitable stages of the methods described herein can be performed at the wafer level or at the die level. Therefore, depending upon the context in which it is used, the term “substrate” can refer to a wafer-level substrate or to a singulated, die-level substrate. Furthermore, unless the context indicates otherwise, structures disclosed herein can be formed using conventional semiconductor-manufacturing techniques. Materials can be deposited, for example, using chemical vapor deposition, physical vapor deposition, atomic layer deposition, plating, electroless plating, spin coating, and/or other suitable techniques. Similarly, materials can be removed, for example, using plasma etching, wet etching, chemical-mechanical planarization, or other suitable techniques.
In accordance with one aspect of the present disclosure, the semiconductor devices illustrated above could be memory dice, such as dynamic random access memory (DRAM) dice, NOT-AND (NAND) memory dice, NOT-OR (NOR) memory dice, magnetic random access memory (MRAM) dice, phase change memory (PCM) dice, ferroelectric random access memory (FeRAM) dice, static random access memory (SRAM) dice, or the like. In an embodiment in which multiple dice are provided in a single assembly, the semiconductor devices could be memory dice of a same kind (e.g., both NAND, both DRAM, etc.) or memory dice of different kinds (e.g., one DRAM and one NAND, etc.). In accordance with another aspect of the present disclosure, the semiconductor dice of the assemblies illustrated and described above could be logic dice (e.g., controller dice, processor dice, etc.), or a mix of logic and memory dice (e.g., a memory controller die and a memory die controlled thereby).
The devices discussed herein, including a memory device, may be formed on a semiconductor substrate or die, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. Other examples and implementations are within the scope of the disclosure and appended claims. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
As used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”
As used herein, the terms “top,” “bottom,” “over,” “under,” “above,” and “below” can refer to relative directions or positions of features in the semiconductor devices in view of the orientation shown in the Figures. These terms, however, should be construed broadly to include semiconductor devices having other orientations, such as inverted or inclined orientations where top/bottom, over/under, above/below, up/down, and left/right can be interchanged depending on the orientation.
It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, embodiments from two or more of the methods may be combined.
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Rather, in the foregoing description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations often associated with memory systems and devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.
The present application claims priority to U.S. Provisional Patent Application No. 63/600,558, filed Nov. 17, 2023, the disclosure of which is incorporated herein by reference in its entirety.