The present invention relates to a semiconductor device and specifically relates to a technology of a semiconductor device including a non-volatile memory device.
Recently, a phase-change memory using a chalcogenide material as a recording material has been researched actively as a non-volatile memory device. The phase-change memory is a kind of resistive random access memory that stores information by using different resistive states of a recording material placed between electrodes.
In the phase-change memory, information is stored by utilizing the fact that the resistance value of a phase-change material, such as Ge2Sb2Te5, differs between an amorphous state and a crystalline state. Resistance is high in the amorphous state (high resistive state) and low in the crystalline state (low resistive state). Thus, reading of information from the phase-change memory is realized by applying a potential difference across an element, measuring the current flowing in the element, and determining whether the element is in the high resistive state or the low resistive state.
In the phase-change memory, data is rewritten by changing the electric resistance of a phase-change film including a phase-change material into a different state by Joule heat generated by a current.
In this phase-change memory, when the resistive element structure is made smaller, the current necessary to change the state of the phase-change film decreases. Thus, the phase-change memory is suitable for downsizing in principle and is being researched actively. In PTL 1 and PTL 2, a non-volatile memory having a three-dimensional structure is disclosed.
In PTL 1, a configuration is disclosed in which memory cells, each of which includes a variable resistive element and a transistor connected thereto in parallel, are connected in series in a lamination direction. Also, in PTL 2, a configuration is disclosed in which memory cells, each of which includes a variable resistive element and a diode connected thereto in series, are connected in series in a lamination direction with a leading line therebetween. In this configuration, for example, by application of a potential difference to a leading line between two memory cells and two leading lines on an outer side of the two memory cells, a batch writing operation is performed with respect to the two memory cells.
Also, PTL 3 discloses reading data to verify whether writing is successful when the data is written into a phase-change memory. When the read data is different from the write data, the data is written again. A writing method of repeating this operation until writing is performed successfully is disclosed in PTL 3.
PTL 1: WO2011/074545A
PTL 2: Japanese Patent Application Laid-Open No. 2011-142186
PTL 3: Japanese Patent Application Laid-Open No. 2008-084518
Before submission of the present application, the inventors performed verification of a control method of a non-volatile resistive random access memory. As illustrated in
This means that, in the phase-change memory, the reset operation can be performed at high speed but the setting operation is slow by comparison. Also, there is a possibility that Joule heat generated when a writing operation is performed on a memory cell influences the crystalline state of a memory cell in the periphery thereof, the resistance value of the peripheral memory cell varies, and data disappears. Specifically, in a setting operation on a memory cell, that is, an operation of changing a state into the low resistive crystalline state, a current large enough to keep the phase-change material at a crystallization temperature is applied for a long period. Thus, there may be a large influence on the crystalline state of a memory cell in the periphery.
The present invention has been provided in view of the foregoing. A first purpose of the present invention is to provide a semiconductor device that can increase the number of memory cells brought into a set state per unit time (that is, increase the data erasing rate). A second purpose of the present invention is to provide a semiconductor device that can suppress a decrease in reliability due to heat disturbance in a setting operation, that is, to provide a semiconductor device including a highly reliable non-volatile memory.
The above purposes, other purposes, and new characteristics of the present invention will become apparent from the description in the present specification and the attached drawings.
Representative embodiments of the invention disclosed in the present application are described briefly as follows.
That is, a semiconductor device includes a non-volatile memory unit including a plurality of memory cells, and a control circuit configured to assign a physical address to a logical address input from the outside and to access the non-volatile memory unit according to the assigned physical address. Here, the non-volatile memory unit includes a plurality of first signal lines, a plurality of second signal lines that intersect with the plurality of first signal lines, and a plurality of memory cell groups arranged at intersection points of the plurality of first signal lines and the plurality of second signal lines. Moreover, each of the memory cell groups includes first to Nth (N is an integer equal to or larger than 2) memory cells and memory-cell selection lines that respectively select the first to Nth memory cells. The control circuit divides the plurality of memory cell groups included in the non-volatile memory unit into a first area including a plurality of memory cell groups arranged adjacent to each other and a second area arranged adjacent to one side of an outer periphery of the first area. The control circuit simultaneously writes a first logical level into each of the plurality of memory cell groups included in the first area but does not write the first logical level into the memory cell groups included in the second area.
In one embodiment, the first logical level is a set state of a memory cell.
Accordingly, since it is possible to simultaneously perform a setting operation (erasing operation) with respect to adjacent memory cell groups, it becomes possible to improve a throughput of the setting operation, that is, a data erasing rate. Also, in a case of performing a batch setting operation, the second area can function as a heat-shielding area and prevent an influence of heat disturbance on a different memory cell group and disappearance of data in the different memory cell group.
An effect acquired by representative embodiments of the invention disclosed in the present application is described briefly as follows.
That is, it is possible to provide a semiconductor device including a highly reliable non-volatile memory.
In the following embodiments, each embodiment will be divided into a plurality of sections or embodiments in the description when necessary for convenience. Except where otherwise specified, these are related to each other: one is a modification example, an application example, a detailed description, a supplemental description, or the like of a part or a whole of the other. Also, in the following embodiments, in a case of referring to the number of elements (including number, value, amount, range, and the like), the specific number is not a limitation and the number may be equal to, more than, or less than the specific number, except where otherwise specified and except where the specific number is obviously a limitation in principle.
Moreover, in the following embodiments, a configuration element (including an element step and the like) is not necessarily essential, except where otherwise specified and except where the element is obviously essential in principle. Similarly, in the following embodiments, in a case of referring to a shape, a positional relationship, and the like of a configuration element or the like, what is substantially approximate or similar to the shape and the like is included, except where otherwise specified or where this is obviously not so in principle. This also applies to the above number and the like (including number, value, amount, range, and the like).
In the following, embodiments of the present invention will be described in detail with reference to the drawings. Note that in all of the drawings for describing the embodiments, the same or related sign is assigned to members with the same function and a repetitious description thereof is omitted. Also, in the following embodiments, a description of the same or similar parts is not repeated in principle except for a case where the description is necessary.
Although it is not specifically limited, a circuit element included in each block in the embodiment is formed on one semiconductor substrate such as single-crystal silicon by an integrated-circuit technology such as a known complementary MOS transistor (CMOS). Also, as a memory cell described in the embodiments, a resistive storage element such as a phase-change memory or a resistive random access memory (ReRAM) is used.
As a signal system that connects the information processing device CPU_CP and the memory module (semiconductor device) NVMMD0, there are a serial interface signal system, a parallel interface signal system, an optical interface signal system, and the like, and obviously any of these systems can be used. As a clock system that operates the information processing device CPU_CP and the memory module NVMMD0, there are a common clock system and a source synchronous clock system using a reference clock signal REF_CLK, an embedded clock system in which clock information is embedded into a data signal, and the like, and obviously any of these clock systems can be used. In the present embodiment, it is assumed as an example that the serial interface signal system and the embedded clock system are used, and an operation will be described in the following.
A reading request (RQ) or a writing request (WQ) into which clock information is embedded and which is converted into serial data is input into the memory module NVMMD0 by the information processing device CPU_CP through the interface signal HDH_IF. The reading request (RQ) includes a logical address (LAD), a data-reading instruction (RD), a sector count (SEC), and the like. The writing request (WQ) includes a logical address (LAD), a data writing instruction (WRT), a sector count (SEC), write data (WDATA), and the like.
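For illustration, the request format described above might be represented in a controller's firmware roughly as follows (a minimal sketch; the field widths and the struct layout are assumptions, not the actual serial frame format):

```c
#include <stdint.h>

/* Hypothetical in-memory form of a deserialized request; the field names
 * follow the text (LAD, SEC, WDATA), but widths and layout are assumptions. */
enum req_kind { REQ_READ /* RD */, REQ_WRITE /* WRT */ };

struct nvm_request {
    enum req_kind  kind;    /* data-reading or data-writing instruction      */
    uint32_t       lad;     /* logical address (LAD)                         */
    uint32_t       sec;     /* sector count (SEC), in 512-byte sectors       */
    const uint8_t *wdata;   /* write data (WDATA); NULL for a reading request */
};
```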
The memory module (semiconductor device) NVMMD0 includes non-volatile memory devices NVM10 to NVM17, a random access memory RAM, and a control circuit MDLCT0 that controls these non-volatile memory devices and the random access memory. The non-volatile memory devices NVM10 to NVM17 have, for example, the same configuration and performance. Each of the non-volatile memory devices NVM10 to NVM17 stores data, an OS, an application program, and SSD configuration information (SDCFG), and further stores a boot program or the like of the information processing device CPU_CP. Although it is not specifically limited, the random access memory RAM is, for example, a DRAM.
Immediately after power activation, the memory module NVMMD0 performs an operation of initializing the non-volatile memory devices NVM10 to NVM17, the random access memory RAM, and the control circuit MDLCT0 in an inner part thereof (that is, power on reset). Moreover, the memory module NVMMD0 performs initialization of the non-volatile memory devices NVM10 to NVM17, the random access memory RAM, and the control circuit MDLCT0 in the inner part thereof when a reset signal RSTSIG is received from the information processing device CPU_CP.
The buffers BUF0 to BUF3 temporarily store write data or read data for the non-volatile memory devices NVM10 to NVM17. The address buffer ADDBUF temporarily stores a logical address LAD that is input into the control circuit MDLCT0 by the information processing device (processor) CPU_CP.
A detail of the write physical address table NXPTBL will be described later with reference to
Each of the memory banks BK0 to BK3 includes a memory array ARYx (x=0 to m), a reading/writing control block SWBx (x=0 to m) provided in a manner corresponding to each memory array, and various peripheral circuits to control these. The various peripheral circuits include a row address latch RADLT, a column address latch CADLT, a row decoder ROWDEC, a column decoder COLDEC, a chain selection address latch CHLT, a chain decoder CHDEC, a data selection circuit DSW1, and data buffers DBUF0 and DBUF1.
Each memory array ARYx (x=0 to m) includes a plurality of chain memory arrays CY arranged in intersection points of a plurality of word lines WL0 to WLk and a plurality of bit lines BL0_x to BLi_x, and a bit line selection circuit BSWx that selects one of the plurality of bit lines BL0_x to BLi_x and connects the selected line to a data line DTx. Each reading/writing control block SWBx (x=0 to m) includes a sense amplifier SAx and a writing driver WDRx connected to the data line DTx, and a write data verification circuit WVx that performs verification of data by using these during a writing operation.
As illustrated in
In an example of
Next, an operation of the non-volatile memory device in
The row decoder ROWDEC receives an output from the row address latch RADLT and selects one of the word lines WL0 to WLk. The column decoder COLDEC receives an output from the column address latch CADLT and selects one of the bit lines BL0 to BLi. Also, the chain decoder CHDEC receives an output from the chain selection address latch CHLT and selects one of the chain control lines CH. When a reading instruction is input by the control signal CTL, data is read through bit line selection circuits BSW0 to BSWm from the chain memory array CY selected by a combination of the word line, the bit line, and the chain control line. The read data is amplified by sense amplifiers SA0 to SAm and is transmitted to the data buffer DBUF0 (or DBUF1) through the data selection circuit DSW1. Then, the data on the buffer DBUF0 (or DBUF1) is serially transmitted to the input/output signal IO through the data control circuit DATCTL and the IO buffer IOBUF.
On the other hand, when a writing instruction is input by the control signal CTL, a data signal is transmitted to the input/output signal IO after the address signal. The data signal is input into the data buffer DBUF0 (or DBUF1) through the data control circuit DATCTL. The data signal on the data buffer DBUF0 (or DBUF1) is written into the chain memory array CY selected by a combination of the word line, the bit line, and the chain control line through the data selection circuit DSW1, the writing drivers WDR0 to WDRm, and the bit line selection circuits BSW0 to BSWm. Here, the write data verification circuits WV0 to WVm read the written data as needed through the sense amplifiers SA0 to SAm, verify whether the write level reaches an adequate level, and repeatedly perform the writing operation with the writing drivers WDR0 to WDRm until the write level reaches the adequate level.
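The verify-and-rewrite behavior described above can be summarized by the following sketch (the simulated cell array and the retry limit are assumptions for illustration; the actual circuits WVx, WDRx, and SAx operate on analog write levels rather than stored bytes):

```c
#include <stdbool.h>
#include <stdint.h>

#define NCELLS 64
static uint8_t cell_level[NCELLS];   /* simulated stored levels of NCELLS cells */

/* Stand-ins for the writing driver WDRx and the sense amplifier SAx. */
static void drive_write(int cell, uint8_t level) { cell_level[cell] = level; }
static uint8_t sense_read(int cell)              { return cell_level[cell]; }

/* Repeat the writing operation until the written level reaches the adequate
 * level, in the way the write data verification circuit WVx does. */
static bool write_with_verify(int cell, uint8_t level, int max_retry)
{
    for (int i = 0; i < max_retry; i++) {
        drive_write(cell, level);            /* apply the write pulse          */
        if (sense_read(cell) == level)       /* read back via sense amplifier  */
            return true;                     /* adequate level reached         */
    }
    return false;                            /* give up after max_retry tries  */
}
```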
Then, when the word line WL0 becomes High and the bit line BL0 becomes Low, current I0 flows from the word line WL0 to the bit line BL0 through the diode D0, the variable resistive-type storage element R0, the memory-cell selection transistors Tcl1 to Tcln, and the chain selection transistor Tch1. When the current I0 is controlled to a shape of a Reset current pulse illustrated in
Note that in a case of reading the data recorded in the variable resistive-type storage element R0, a current small enough not to change the resistance value of the variable resistive-type storage element R0 is applied through a path similar to that used for data writing. In this case, a voltage value corresponding to the resistance value of the variable resistive-type storage element R0 is detected by the sense amplifier (SA0 in
Each of
Next, with reference to
Next, with reference to
As described above, it is possible to simultaneously make memory cells in a plurality of chain memory arrays low resistive when necessary and to improve a data erasing rate.
Here, an operation system of the chain memory array, which is one of the major characteristics of the present embodiment, will be described.
That is, for example, a writing operation of (n+1) bits that is performed with respect to the (n+1)-bit phase-change memory cells included in the chain memory array according to one writing instruction from the side of the host (CPU_CP in
Here, a case where the writing operation is performed, with the memory-cell selection line LY0 as an object, with respect to the chain memory arrays CYk000 and CYk010 will be described as an example. It is assumed that the same physical address [1] is assigned to the chain memory arrays CYk000 and CYk010 in
First, a writing instruction [1] whose object is the physical address [1] is input. When the instruction is input, first, “1” (set state) is once written (initial writing/block erasure) into all phase-change memory cells in each of the chain memory arrays CYk000, CYk001, CYk010, and CYk011 in the writing operation illustrated in
Then, predetermined data associated with the writing instruction [1] is written into all phase-change memory cells in each of the chain memory arrays CYk000, CYk001, CYk010, and CYk011.
In this example, the data associated with the writing instruction [1] is the (n+1)-bit data “0 . . . 00” for the chain memory array CYk000 and the (n+1)-bit data “0 . . . 10” for the chain memory array CYk010. Here, the data of all phase-change memory cells in each of the chain memory arrays CYk000 and CYk010 is previously set to “1” in the initial writing (erasure). Thus, in a phase-change memory cell corresponding to a bit of the data associated with the writing instruction [1] being “1” (here, the phase-change memory cell corresponding to LY1 in CYk010), the writing operation is not specifically performed, and “0” (reset state) is written into the other phase-change memory cells. More specifically, for example, while a deactivated memory-cell selection line is serially shifted from LY0 to LY1 . . . and to LYn, it is selected, at each time, whether to apply a Reset current pulse in
Then, when a writing instruction [2] an object of which is the physical address [1] is input again, the initial writing (erasure) is performed first similarly to the case of the writing instruction [1]. Then, “0” (reset state) is arbitrarily written based on each piece of (n+1)-bit data for the chain memory arrays CYk000 and CYk010 which data is associated with the writing instruction [2]. Note that here, “0” (reset state) is written while a deactivated memory-cell selection line is serially shifted. However, in some cases, it is possible to perform writing simultaneously without shifting the memory-cell selection line. That is, for example, the Reset current pulse may be applied between the word line WLk and the bit line BL0_0 in a state in which all of the memory-cell selection lines LY0 to LYn are deactivated and the Reset current pulse may be applied between the word line WLk and the bit line BL0_1 in a state in which the memory-cell selection lines LY0 to LYn except for the memory-cell selection line LY1 are deactivated.
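The two-phase write described above (batch initial writing of the set state followed by selective writing of the reset state while the deactivated memory-cell selection line is shifted) can be sketched as follows; the helper functions and the in-memory model of a chain are assumptions made for illustration:

```c
#include <stdbool.h>

#define CHAIN_CELLS 8                       /* (n+1) cells per chain, n = 7 here      */
static bool cell_is_set[CHAIN_CELLS];       /* true = "1" (set), false = "0" (reset)  */

/* Assumed pulse helpers; real selection is done through LY0..LYn and the
 * word/bit lines, which are abstracted away in this sketch. */
static void apply_set_pulse_all(void)       /* initial writing / block erasure        */
{
    for (int i = 0; i < CHAIN_CELLS; i++)
        cell_is_set[i] = true;              /* every cell becomes "1" (set)           */
}
static void apply_reset_pulse(int layer)    /* reset one selected cell                */
{
    cell_is_set[layer] = false;             /* the cell becomes "0" (reset)           */
}

/* Two-phase write of (n+1)-bit data into one chain memory array:
 * 1) write "1" into all cells at once, 2) write "0" only where needed,
 * shifting the deactivated memory-cell selection line from LY0 to LYn. */
static void chain_write(const bool data[CHAIN_CELLS])
{
    apply_set_pulse_all();
    for (int layer = 0; layer < CHAIN_CELLS; layer++)
        if (!data[layer])                   /* bit is "0": apply the Reset pulse      */
            apply_reset_pulse(layer);
        /* bit is "1": nothing to do, the cell is already in the set state */
}
```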
By utilizing the operation system of the memory array described above with reference to
(1) It is possible to make memory cells in a plurality of chain memory arrays low resistive simultaneously and to improve a data erasing rate.
(2) Writing speed is increased since only data “0” is written into a memory cell after erasure in a chain memory array.
(3) A stable writing operation can be realized, since a system is used in which, after one of the set state and the reset state is once written simultaneously into all memory cells in a chain memory array (that is, after erasure), the other state is written into a specific memory cell. That is, it is possible to keep the states (resistance values) of the memory cells in the chain memory array substantially uniform by writing one state simultaneously. When the other state is subsequently written into a specific memory cell, each memory cell arranged in the periphery of the specific memory cell receives a similar influence, in a similar initial state, due to the heat generated by the writing. As a result, it is possible to decrease the variation among the resistance values of the memory cells in the chain memory array. Accordingly, it becomes possible to realize a stable writing operation.
Specifically, the chain memory array illustrated in
Also, here, the set state is used in the initial writing (erasure) and the reset state is used in the subsequent writing into a specific memory cell. Accordingly, a more stable writing operation can be realized. For example, in a phase-change memory cell, the set state is usually more stable than the reset state. Also, as illustrated in
First, the initial sequence illustrated in
In the reset period of T2 (RST), an internal state of each of the information processing device CPU_CP, the control circuit MDLCT0, the non-volatile memory devices NVM10 to NVM17, and the random access memory RAM is initialized. Here, the control circuit MDLCT0 initializes an address map range (ADMAP) and various tables stored in the random access memory RAM. The various tables include an address conversion table (LPTBL), physical segment tables (PSEGTBL1 and PSEGTBL2), a physical address table (PADTBL), and a write physical address table (NXPADTBL).
Note that details of the address map range (ADMAP) and the various tables will be described later, but brief descriptions thereof are as follows. The address map range (ADMAP) indicates a division between an address area used in the first operation mode and an address area used in the second operation mode. The address conversion table (LPTBL) indicates a correspondence relationship between a current logical address and physical address. The physical segment tables (PSEGTBL1 and PSEGTBL2) manage the number of times of erasure of each physical address in a segment unit and are used in wear leveling and the like. The physical address table (PADTBL) manages the current state of each physical address in detail. The write physical address table (NXPADTBL) is a table in which a physical address that is to be subsequently assigned to a logical address is determined based on wear leveling. Here, a part or a whole of the information in the write physical address table (NXPADTBL) is copied to the write physical address tables NXPTBL1 and NXPTBL2 illustrated in
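A rough sketch of how entries of these tables might be laid out is shown below; the field names follow the description above, while the widths and types are assumptions:

```c
#include <stdint.h>

/* Entry formats for the management tables held in the random access memory RAM.
 * Field names follow the text; widths and types are assumptions for illustration. */

struct lptbl_entry {            /* address conversion table (LPTBL)             */
    uint32_t pad;               /* physical address currently assigned          */
    uint8_t  cpvld;             /* validity flag CPVLD (1 = valid, 0 = invalid) */
    uint8_t  lyc;               /* layer number LYC                             */
};

struct padtbl_entry {           /* physical address table (PADTBL)              */
    uint8_t  pvld;              /* validity flag PVLD                           */
    uint32_t perc;              /* number of times of erasure PERC              */
    uint8_t  lym;               /* layer mode number LYM                        */
    uint8_t  lyc;               /* layer number LYC                             */
};

struct nxpadtbl_entry {         /* write physical address table (NXPADTBL)      */
    uint32_t enum_no;           /* entry number ENUM (writing priority)         */
    uint32_t nxpad;             /* write physical address NXPAD                 */
    uint8_t  nxpvld;            /* validity flag NXPVLD                         */
    uint32_t nxperc;            /* number of times of erasure NXPERC            */
    uint8_t  nxlym;             /* layer mode number NXLYM                      */
    uint8_t  nxlyc;             /* write layer number NXLYC                     */
};
```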
In a period of T3 (MAP) after the period of T2 is over, the control circuit MDLCT0 reads the SSD configuration information (SDCFG) stored in the non-volatile memories NVM10 to 17 and transfers the read information to the map register MAPREG in
Moreover, two logical address areas (LRNG1 and LRNG2) are set in the SSD configuration information (SDCFG) in the map register MAPREG and the control circuit MDLCT0 constructs a write physical address table (NXPADTBL) corresponding thereto. More specifically, for example, the write physical address table (NXPADTBL) is divided into a write physical address table (NXPADTBL1) for the logical address area (LRNG1) and a write physical address table (NXPADTBL2) for the logical address area (LRNG2). For example, the logical address area (LRNG1) corresponds to the area for the first operation mode and the logical address area (LRNG2) corresponds to the area for the second operation mode.
Although it is not specifically limited, N/2 entries from the zeroth entry to the ((N/2)−1)th entry can be set as the write physical address table (NXPADTBL1) when the write physical address table (NXPADTBL) includes N entries from the zeroth entry to the (N−1)th entry. Then, the remaining N/2 entries from the (N/2)th entry to the (N−1)th entry can be set as the write physical address table (NXPADTBL2).
In a period of T4 (SetUp) after the period of T3 is over, the information processing device CPU_CP reads a boot program stored in the non-volatile memory device NVM0 in the memory module NVMMD0 and sets up the information processing device CPU_CP. In and after a period of T5 (Idle) after the period of T4 is over, the memory module NVMMD0 becomes an idle state and waits for a request from the information processing device CPU_CP.
Next, the initial sequence illustrated in
In such an initial sequence, when the SSD configuration information (SDCFG) is previously stored in the memory module NVMMD0 (non-volatile memory device NVM10 to 17) as illustrated in
The number of times of erasure PERC indicates the number of times the initial writing (erasure) is performed. Here, for example, when a physical address PAD in which a value of the validity flag PVLD is 0 and the number of times of the initial writing (erasure) is small is preferentially assigned to a logical address, it is possible to perform leveling (wear leveling) of values of the number of times of erasure PERC. Also, in the example in
Also, when the layer mode number LYM is “0,” it is indicated that writing is performed on all phase-change memory cells CL0 to CLn in the chain memory array CY (that is, it is indicated that mode is second operation mode). Also, when the layer mode number LYM is “1,” it is indicated that writing is performed on one phase-change memory cell in the chain memory array CY (that is, it is indicated that mode is first operation mode).
Also, a value x of a layer number LYC corresponds to a memory-cell selection line LYx in the chain memory array CY illustrated in
Each of
First,
Next,
Each of
Here, the write physical address table NXPADTBL has a configuration that can register a plurality (N) of physical addresses. The write physical address tables NXPADTBL (NXPADTBL1 and NXPADTBL2) determine a physical address to be an actual object of writing. The period from reception of a logical address until a physical address is determined by using the table influences the writing speed. Thus, the information in the write physical address tables NXPADTBL (NXPADTBL1 and NXPADTBL2) is held in the write physical address tables NXPTBL1 and NXPTBL2 in the control circuit MDLCT0 in
The write physical address table NXPADTBL includes an entry number ENUM, a write physical address NXPAD, and a validity flag NXPVLD, a number of times of erasure NXPERC, a layer mode number NXLYM, and a write layer number NXLYC corresponding to the write physical address NXPAD. When two logical address areas (LRNG1 and LRNG2) are determined in the SSD configuration information (SDCFG), the control circuit MDLCT0 in
The entry number ENUM takes N values (zeroth to (N−1)th) for the plurality of (N) pairs of write physical addresses NXPAD. The N values indicate writing priority (the number of registrations). Entries with smaller values in the write physical address table NXPADTBL1 are used preferentially, in ascending order, in response to a writing request to the logical address area (LRNG1). Entries with smaller values in the write physical address table NXPADTBL2 are used preferentially, in ascending order, in response to a writing request to the logical address area (LRNG2). Also, in a case where a value of the validity flag NXPVLD is 0, it is indicated that the physical address to be an object is invalid. In a case where the value is 1, it is indicated that the physical address to be an object is valid. For example, when the zeroth entry number ENUM is used, the value of the zeroth validity flag NXPVLD becomes 1. Thus, it is possible to determine that the zeroth entry number ENUM has been used and the first entry is to be used at the next reference to the table.
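Under the reading above, in which the validity flag NXPVLD of an entry is raised once the entry has been consumed, selection of the next entry can be sketched as follows (the struct layout and the return convention are assumptions):

```c
#include <stdint.h>
#include <stddef.h>

struct nxpad_entry {
    uint32_t nxpad;     /* write physical address NXPAD                   */
    uint8_t  nxpvld;    /* 0: not yet used here, 1: already used          */
};

/* Return the index of the entry to use next: the smallest entry number ENUM
 * whose flag has not yet been raised, and mark it used.  A return value of
 * -1 means every registered entry has already been used. */
static int next_write_entry(struct nxpad_entry *tbl, size_t n)
{
    for (size_t e = 0; e < n; e++) {
        if (tbl[e].nxpvld == 0) {
            tbl[e].nxpvld = 1;          /* entry e is now used                  */
            return (int)e;              /* entries are used in ascending order  */
        }
    }
    return -1;                          /* table exhausted, must be rebuilt     */
}
```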
Here, with reference to
Also, a physical address area (PRNG1) is set according to the logical address area (LRNG1) and serial write physical addresses NXPAD from an address “00000000” to an address “0000000F” in the physical address area (PRNG1) are respectively registered to entry numbers ENUM=0 to ((32/2)−1). Also, the layer mode number NXLYM is set to “1” and the write layer number NXLYC is set to “0.” Similarly to the layer mode number LYM and the layer number LYC described with reference to
Then, in the state illustrated in
Moreover, a case where a writing request (WQ) with a sector count (SEC) value being 1 (512 byte) is input into the logical address area (LRNG2) of the memory module NVMMD0 for (N/2) times by the information processing device CPU_CP through the interface signal HDH_IF is considered. In this case, data included in each writing request (WQ) is written into places corresponding to serial addresses from an address “02800000” to an address “0280000F” in the physical address PAD (NXPAD) in the non-volatile memory device based on
Also, a different operation example is as follows. A case where a writing request (WQ) with a sector count (SEC) value being 16 (8 KB) is input into the logical address area (LRNG1) of the memory module NVMMD0 once by the information processing device CPU_CP through the interface signal HDH_IF is considered. In this case, data included in this writing request (WQ) is decomposed into 16 physical addresses PAD having 512 bytes each and is written into serial addresses from an address “00000000” to an address “0000000F” in the physical address PAD in the non-volatile memory device.
Also, a case where a writing request (WQ) with a sector count (SEC) value being 16 (8 KB) is input into the logical address area (LRNG2) of the memory module NVMMD0 once by the information processing device CPU_CP through the interface signal HDH_IF is considered. In this case, data included in this writing request (WQ) is decomposed into 16 physical addresses PAD having 512 bytes each and is written into serial addresses from an address “02800000” to an address “0280000F” in the physical address PAD in the non-volatile memory device.
Along with progress in such a writing operation, the write physical address table NXPADTBL is arbitrarily updated. As a result, as illustrated in
The address conversion table LPTBL illustrated in
Also, in each drawing, CHNCELL indicates the number of memory cells, into which data is to be written, in all phase-change memory cells CL0 to CLn in the chain memory array CY illustrated in
Also, in each drawing, in a case where NVMMODE is “0,” it is indicated that it is possible to perform a writing operation while making the minimum erasure data size and the minimum program data size identical when data is written into the non-volatile memory device NVM. In a case where NVMMODE is “1,” it is indicated that a writing operation can be performed on the assumption that the minimum erasure data size and the minimum program data size are different from each other. In each drawing, ERSSIZE indicates the minimum erasure data size [byte] and PRGSIZE indicates the minimum program data size [byte]. In this embodiment, each of the minimum erasure data size and the minimum program data size is expressed in a byte unit.
As illustrated in
Also, as indicated by LRNG2 in
In such a manner, with the SSD configuration information, it is possible to change a specification of a used non-volatile memory device and to flexibly correspond to various specifications. Moreover, since it is possible to reduce the number of dummy chain memory arrays DCY (described later) arranged in X and Y directions of an erasure area by increasing the block erasure size, it is possible to realize large capacity.
In
A value in the middle (middle in the drawing) of the dummy chain memory array designation information XYDMC indicates the number of dummy chain memory arrays DCY arranged in the X direction of the write area. Also, a value on a right side (right side in the drawing) of the dummy chain memory array designation information XYDMC indicates the number of dummy chain memory arrays DCY arranged in the Y direction of the write area.
An example of the dummy chain memory array designation information XYDMC is as follows. That is, when the dummy chain memory array designation information XYDMC is “1_1_1,” it is indicated that one dummy chain memory array DCY is arranged in the X and Y directions on the outer side of the write area (=erasure area). When the dummy chain memory array designation information XYDMC is “0_1_1,” it is indicated that one dummy chain memory array DCY is arranged in the X and Y directions on the inner side of the write area. When the dummy chain memory array designation information XYDMC is “1_2_2,” it is indicated that two dummy chain memory arrays DCY are arranged in the X and Y directions on the outer side of the write area. Also, when the dummy chain memory array designation information XYDMC is “0_2_2,” it is indicated that two dummy chain memory arrays DCY are arranged in the X and Y directions on the inner side of the write area.
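Assuming the textual “a_b_c” form suggested by these examples (left value: outer/inner side, middle value: count in the X direction, right value: count in the Y direction), the designation information could be parsed as in the following sketch:

```c
#include <stdio.h>
#include <stdbool.h>

struct xydmc {
    bool outside;   /* left value: 1 = dummy arrays on the outer side, 0 = inner side */
    int  x_count;   /* middle value: number of dummy chain memory arrays DCY in X     */
    int  y_count;   /* right value: number of dummy chain memory arrays DCY in Y      */
};

/* Parse a designation such as "1_1_1" or "0_2_2" into its three fields.
 * The textual "a_b_c" encoding is an assumption based on the examples above. */
static bool parse_xydmc(const char *s, struct xydmc *out)
{
    int side, x, y;
    if (sscanf(s, "%d_%d_%d", &side, &x, &y) != 3)
        return false;
    out->outside = (side == 1);
    out->x_count = x;
    out->y_count = y;
    return true;
}

int main(void)
{
    struct xydmc d;
    if (parse_xydmc("1_2_2", &d))       /* two DCYs on the outer side in X and Y */
        printf("outside=%d x=%d y=%d\n", d.outside, d.x_count, d.y_count);
    return 0;
}
```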
As it will be described later, in each of
Since the dummy chain memory array designation information XYDMC is “1_1_1,” one dummy chain memory array is arranged in each of the X direction and the Y direction on the outer side of the write area (=erasure area) in each of
For example, with reference to
Also, as it will be described later, in
In a plan view, these are recognized as one dummy chain memory array DCY row and one dummy chain memory array DCY column that are arranged on the inner side of the write area. Also, data of all memory cells included in all chain memory arrays CY in the erasure area including the plurality of chain memory arrays CY arranged in the matrix becomes “1” (Set state). That is, batch-erasure is performed. Then, only data of “0” (Reset state) is written into each physical address PAD.
For example, in a case of performing an operation of batch-erasure on such a write area, an erasing operation is not performed on the dummy chain memory array DCY arranged on the outer side or the inner side of the write area.
Also, for example, in a case of performing an operation of batch-erasure on such an erasure area, an erasing operation is not performed with respect to the dummy chain memory array DCY arranged on the outer side of the erasure area.
In a case where one memory cell in the X and/or Y direction in the periphery of the erasure area is influenced by a decrease in reliability due to heat disturbance generated when the erasing operation is performed on a batch-erasure area in the memory array ARY, the dummy chain memory array designation information XYDMC is set to “1_1_1” or “0_1_1.” Accordingly, one chain memory array CY is arranged as the dummy chain memory array DCY in the periphery of the batch-erasure area. Since the dummy chain memory array DCY is not an object of the erasing operation, it is possible to prevent a decrease in reliability due to heat disturbance.
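A minimal sketch of a batch-erasure routine that leaves such a guard ring of dummy chain memory arrays untouched is shown below; the grid model, the guard placement on the inner side of the designated area (the “0_1_1”-style arrangement), and the helper layout are assumptions:

```c
#include <stdbool.h>

#define AX 8                      /* chain memory arrays in the X direction      */
#define AY 8                      /* chain memory arrays in the Y direction      */

static bool array_is_set[AY][AX]; /* true once a chain memory array is batch-set */

/* Batch "erase" (write the set state) over a rectangular area while skipping
 * a guard ring of dummy chain memory arrays DCY of width `guard` along its
 * periphery.  The caller must keep the area within the AX-by-AY grid. */
static void batch_erase_with_guard(int x0, int y0, int w, int h, int guard)
{
    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0; x < x0 + w; x++) {
            bool in_guard = (x < x0 + guard) || (x >= x0 + w - guard) ||
                            (y < y0 + guard) || (y >= y0 + h - guard);
            if (in_guard)
                continue;                  /* DCY: never an object of erasure    */
            array_is_set[y][x] = true;     /* simultaneous set of the inner area */
        }
    }
}
```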
As it will be described later, in
In a case where two memory cells in the X and Y directions in the periphery of the erasure area are influenced by a decrease in reliability due to heat disturbance generated when the erasing operation is performed on the batch-erasure area in the memory array ARY, the dummy chain memory array designation information XYDMC is set to “1_2_2” or “0_2_2.” In such a manner, it is possible to arrange two chain memory arrays CY as the dummy chain memory arrays DCY in the periphery of the batch-erasure area and to prevent a decrease in reliability due to the heat disturbance.
In each of
In a case where one memory cell in the X direction (Y direction) in the periphery of an erasure area is influenced by a decrease in reliability due to heat disturbance generated when an erasing operation is performed on a batch-erasure area in the memory array ARY, the dummy chain memory array designation information XYDMC is set to “1_1_0” or “0_1_0” (“1_0_1” or “0_0_1”). Accordingly, it is possible to prevent a decrease in reliability due to heat disturbance by arranging one chain memory array CY as the dummy chain memory array DCY in the periphery of the batch-erasure area.
In such a manner, it is possible to flexibly change an arrangement of the dummy chain memory array DCY according to a degree of an influence of heat disturbance on a peripheral memory cell in a case where the erasing operation is performed on a memory cell and to realize high reliability of the memory module (semiconductor device) NVMMD0.
Since there are various kinds of storage devices such as a hard disk, an SSD, a cache memory, and a main memory, a unit of reading or writing data is different. For example, in storage such as a hard disk or an SSD, reading or writing is performed in a data unit equal to or larger than 512 bytes. Also, a cache memory reads/writes data from/to a main memory in a line size unit (such as 32 byte or 64 byte). Even when a data unit is different in such a manner, it is possible to perform ECC in a different data unit according to ECCFLG and to flexibly correspond to a request with respect to the memory module (semiconductor device) NVMMD0.
Also, in
When WRTFLG is 1, the writing method is as follows. That is, the number of pieces of bit data “0” and the number of pieces of bit data “1” in the write data WDATA and the data of the ECC code ECC generated from the write data WDATA are counted and compared with each other. When the number of pieces of bit data “0” is larger than the number of pieces of bit data “1,” the information processing circuit MNGER inverts each bit of the write data WDATA and writes the data into the non-volatile memory. On the other hand, when the number of pieces of bit data “0” is not larger than the number of pieces of bit data “1,” the information processing circuit MNGER writes the write data (WDATA) into the non-volatile memory without inverting each bit of the data. Accordingly, the proportion of bit data “0” in the written data constantly becomes equal to or smaller than ½. Thus, it is possible to reduce the amount of written bit data “0” by half and to perform writing with low power at high speed.
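The bit-inversion method for WRTFLG=1 amounts to the following sketch (treating the write data and its ECC code as a plain byte buffer is an assumption made for illustration):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Writing method for WRTFLG=1: count the "0" bits and the "1" bits in the
 * write data plus its ECC code, and invert every bit when "0" predominates,
 * so that at most half of the written bits are "0". */
static bool invert_if_zero_heavy(uint8_t *buf, size_t len)
{
    size_t ones = 0, total = 8 * len;

    for (size_t i = 0; i < len; i++)
        for (uint8_t b = buf[i]; b; b >>= 1)
            ones += b & 1u;                    /* count the "1" bits          */

    if (total - ones > ones) {                 /* more "0" bits than "1" bits */
        for (size_t i = 0; i < len; i++)
            buf[i] = (uint8_t)~buf[i];         /* store the inverted data     */
        return true;                           /* caller records INVFLG = 1   */
    }
    return false;                              /* caller records INVFLG = 0   */
}
```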
When the writing method selection information WRTFLG is 2, the writing method is as follows. That is, compressed data CompDATA is generated by compressing the write data WDATA and the ECC code ECC generated from the write data WDATA, and the compressed data CompDATA is written into the non-volatile memory. By the compression, the write size of the compressed data CompDATA becomes smaller than the sum of the write size of the write data WDATA and the write size of the ECC code ECC generated from the write data WDATA. Thus, it is possible to effectively increase the capacity of the memory module (semiconductor device) NVMMD0.
As a compressing method of generating the compressed data, there are a run-length code, an LZ code, and the like. A compressing method is selected according to a kind of used data.
A writing method in a case where the writing method selection information WRTFLG is 3 will be described in the following. The writing method in this case is a method of converting the write data WDATA into write data RdcDATA, in which the maximum number of pieces of bit data “0” is limited, and of writing the data into the non-volatile memory. Next, an example of a writing method in a case of writing write data in 32 bits while limiting the maximum number of pieces of written bit data “0” to 8 bits in the write data will be described.
The total number of possible combinations T and the total number of pieces of written “0” R in a case where the maximum number of pieces of written bit data “0” is limited to r bits in write data of t bits can be expressed by an expression (1) and an expression (2), where C(t, k) denotes the number of combinations of k items chosen from t items: T = C(t, 0) + C(t, 1) + . . . + C(t, r) (1), and R = 0·C(t, 0) + 1·C(t, 1) + . . . + r·C(t, r) (2).
When t=32 and r=8 are assigned to the expressions (1) and (2), T=15033173 and R=114311168. Also, the average number of times of writing a “0” bit becomes Ravg=R/T=7.60. Here, when the number of bits necessary in a case where T is expressed by a binary number is I, I=log2(T)=log2(15033173)=23.84.
That is, even in a case where the maximum number of pieces of written bit data “0” is limited to 8 bits, which is ¼ of 32 bits, of write data of 32 bits, it is possible to distinguish data in 15033173 combinations.
In such a manner, it is possible to reduce the number of bits, to which “0” is written, and to realize writing at high speed by a writing method of limiting the maximum number of pieces of bit data “0.”
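The figures quoted above for t=32 and r=8 can be reproduced with a short computation such as the following sketch:

```c
#include <stdio.h>
#include <math.h>

/* Reproduce the figures quoted above for t = 32, r = 8:
 *   T = C(t,0) + C(t,1) + ... + C(t,r)            (expression (1))
 *   R = 0*C(t,0) + 1*C(t,1) + ... + r*C(t,r)       (expression (2))
 * giving T = 15033173, R = 114311168, Ravg = R/T = 7.60, log2(T) = 23.84. */
static unsigned long long binom(int n, int k)
{
    unsigned long long c = 1;
    for (int i = 1; i <= k; i++)
        c = c * (unsigned long long)(n - k + i) / (unsigned long long)i;
    return c;
}

int main(void)
{
    const int t = 32, r = 8;
    unsigned long long T = 0, R = 0;

    for (int k = 0; k <= r; k++) {
        unsigned long long c = binom(t, k);
        T += c;                          /* expression (1) */
        R += (unsigned long long)k * c;  /* expression (2) */
    }
    printf("T=%llu R=%llu Ravg=%.2f I=%.2f bits\n",
           T, R, (double)R / (double)T, log2((double)T));
    return 0;
}
```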
The writing methods of when the writing method selection information WRTFLG is 1 to 3 have been described. It is possible to set a writing method by combination of these methods. In each of
When the writing method selection information WRTFLG is “2_1,” data is generated first by a method set by “2” of the writing method selection information WRTFLG. Then, with respect to the data generated first, data is generated and written into the non-volatile memory by a method set by “1” of the writing method selection information WRTFLG.
As an example of detailed processing, a case where the writing method selection information WRTFLG is “2_1” will be described in the following. First, compressed data CompDATA is generated by compressing the write data WDATA input into the memory module (semiconductor device) NVMMD0 and the ECC code ECC generated from the write data WDATA. Then, the number of pieces of bit data “0” and the number of pieces of bit data “1” in the compressed data CompDATA are counted and compared with each other. When the number of pieces of bit data “0” is larger than the number of pieces of bit data “1,” the information processing circuit MNGER inverts each bit of the compressed data CompDATA and writes the data into the non-volatile memory. On the other hand, when the number of pieces of bit data “0” is not larger than the number of pieces of bit data “1,” the compressed data CompDATA is written into the non-volatile memory without inversion of each bit of the data.
Also, for example, when the writing method selection information WRTFLG is “3_2,” data is generated first by a method set by “3” of the writing method selection information WRTFLG. Then, with respect to the data generated first, data is generated and written into the non-volatile memory by a method set by “2” of the writing method selection information WRTFLG.
In this case, specifically, when the write data WDATA that is input into the memory module (semiconductor device) NVMMD0 is 512 bytes, it is converted into data RdsData in which the maximum amount of written bit data “0” is limited to 128 bytes. Then, compressed data CompRsdDATA is generated by compressing the data RdsData and an ECC code ECC generated from the data RdsData, and is written into the non-volatile memory.
In such a manner, the SSD configuration information (SDCFG) can be programmed arbitrarily. Thus, it is possible to flexibly correspond to levels of a function, performance, and reliability requested to the memory module (semiconductor device) NVMMD0.
The data inversion flag INVFLG indicates whether the main data MDATA written by the control circuit MDLCT0 into the non-volatile memory devices NVM10 to NVM17 is data that is generated by inversion of each bit of original write data. When 0 is written into the data inversion flag INVFLG, it is indicated that data is written without inversion of each bit of the original main data. When 1 is written, it is indicated that data generated by inversion of each bit of the original main data is written.
The writing flag WTFLG indicates a writing method executed in a case where the control circuit MDLCT0 writes the main data MDATA into the non-volatile memory devices NVM10 to NVM17. That is, the writing flag WTFLG corresponds to the writing method selection information WRTFLG described with reference to
The ECC flag ECCFLG indicates a size of the main data MDATA to which an ECC code is generated when the control circuit MDLCT0 writes the main data MDATA into the non-volatile memory devices NVM10 to NVM17. Although it is not specifically limited, it is indicated that a code is generated with respect to a data size of 512 bytes when 0 is written into ECCFLG and it is indicated that a code is generated with respect to a data size of 1024 bytes when 1 is written into ECCFLG. When 2 is written into ECCFLG, it is indicated that a code is generated with respect to a data size of 2048 bytes and it is indicated that a code is generated with respect to a data size of 32 bytes when 3 is written into ECCFLG.
The ECC code ECC is data necessary for detecting and correcting an error of the main data MDATA. ECC is generated by the control circuit MDLCT0 according to the main data MDATA and written into the redundant data RDATA when the control circuit MDLCT0 writes the main data MDATA into the non-volatile memory devices NVM10 to NVM17. The state information STATE indicates whether the main data MDATA written into the non-volatile memory devices NVM10 to NVM17 is in a valid state, an invalid state, or an erased state. Although it is not specifically limited, when 0 is written into the state information STATE, it is indicated that the main data MDATA is in the invalid state. Also, it is indicated that the main data MDATA is in the valid state when 1 is written into the state information STATE and it is indicated that the main data MDATA is in the erased state when 3 is written into the state information STATE.
The area information AREA is information indicating whether the main data MDATA is written into the first physical address area PRNG1 or the second physical address area PRNG2 in the address map range (ADMAP) illustrated in
Also, in
The data write layer information LYN includes 8 bits of LYN [7:0]. LYN [7] to LYN [0] respectively correspond to the phase-change memory cells CL7 to CL0. For example, when valid data is written into the phase-change memory cell CL0, “1” is written into LYN [0] and “0” is written into the others. Also, for example, when valid data is written into the phase-change memory cell CL1, “1” is written into LYN [1] and “0” is written into the others. Relationships between the phase-change memory cells CL2 to CL7 and LYN [2] to LYN [7] are in a similar manner.
In an example of
In
First, a writing request (WQ01) including a logical address value (such as LAD=0), a data writing instruction (WRT), a sector count value (such as SEC=1), and 512-byte write data (WDATA0) is input into the control circuit MDLCT0 by the information processing device CPU_CP. The interface circuit HOST_IF in
Next, the information processing circuit MNGER decodes the logical address value (LAD=0), the data writing instruction (WRT), and the sector count value (SEC=1) and searches an address conversion table LPTBL (
Next, the information processing circuit MNGER uses the address map range (ADMAP) (
Here, in the information processing circuit MNGER, when the logical address value (LAD=0) is the logical address value in the logical address area LRNG1, the write physical address table NXPADTBL1 in
Next, the information processing circuit MNGER determines whether the current physical address value (PAD=0) and a write physical address value to be a next object of writing (NXPAD=100) are identical (Step 4). When the two are identical, Step 5 is executed. When the two are different, Step 11 is executed. In Step 5, the information processing circuit MNGER writes various kinds of data into addresses corresponding to the physical address value (NXPAD=100) in the non-volatile memory devices NVM10 to NVM17. Here, write data (WDATA0) is written as the main data MDATA illustrated in
Here, for example, when a write layer number NXLYC read from the write physical address table NXPADTBL1 is “10,” the main data MDATA (write data (WDATA0)) and the redundant data RDATA are written into one phase-change memory cell CL0 in each chain memory array CY. Along with this, “0” is written into the data write layer information LYN [7:1] in the redundant data RDATA in
In
On the other hand, when the value of the validity flag CPVLD is 1 in Step 11, it is indicated that the physical address value (PAD=0) corresponding to the logical address value (LAD=0) is still valid. Thus, if the new physical address value (NXPAD=100) were assigned to the logical address value (LAD=0) as it is, two physical address values would overlap with respect to the logical address value (LAD=0). Thus, in Step 13, the information processing circuit MNGER changes the value of the validity flag CPVLD of the physical address value (PAD=0) corresponding to the logical address value (LAD=0) in the address conversion table LPTBL into 0 (invalid). In addition, the validity flag PVLD corresponding to the physical address value (PAD=0) in the physical address table PADTBL is changed to 0 (invalid). In such a manner, the information processing circuit MNGER executes Step 5 described above after making the physical address value (PAD=0) corresponding to the logical address value (LAD=0) invalid.
In Step 6 performed after Step 5, the information processing circuit MNGER and/or each of the non-volatile memory devices NVM10 to NVM17 checks whether the write data (WDATA0) is written correctly. When the data is written correctly, Step 7 is executed. When the data is not written correctly, Step 12 is executed. In Step 12, the information processing circuit MNGER and/or each of the non-volatile memory devices NVM10 to NVM17 checks whether the number of times of verify check (Nverify) to check whether the write data (WDATA0) is written correctly is equal to or smaller than the set number of times (Nvr). When the number of times of verify check (Nverify) is equal to or smaller than the set number of times (Nvr), Step 5 and Step 6 are executed again. When the number of times of verify check (Nverify) is larger than the set number of times (Nvr), it is determined that the write data (WDATA0) cannot be written into the write physical address value (NXPAD=100) read from the write physical address tables NXPADTBL1 and NXPADTBL2 (Step 14) and Step 3 is executed again. Note that such data verification processing is performed with the write data verification circuits WV0 to WVm in the non-volatile memory device illustrated in
In Step 7 performed after Step 6, the information processing circuit MNGER updates the address conversion table LPTBL. More specifically, for example, the new physical address value (NXPAD=100) is written into an address of the logical address value (LAD=0), a value of the validity flag CPVLD is set to 1, and the write layer number NXLYC is written into the layer number LYC. In next Step 8, the information processing circuit MNGER updates the physical address table PADTBL. More specifically, for example, a new value of the number of times of erasure is generated by addition of 1 to the value of the number of times of erasure (NXPERC) of the write physical address value (NXPAD=100) in the write physical address table. Then, the new value of the number of times of erasure is written into a corresponding place (number of times of erasure (PERC) of physical address value (NXPAD=100)) in the physical address table PADTBL. Also, the validity flag PVLD in the physical address table PADTBL is set to 1 and the write layer number NXLYC is written into the layer number LYC.
In Step 9, the information processing circuit MNGER determines whether writing into all write physical addresses NXPAD stored in the write physical address table NXPADTBL is completed. When the writing into all write physical addresses NXPAD stored in the write physical address table NXPADTBL is completed, Step 10 is performed. When the writing is not completed, a new writing request with respect to the memory module NVMMD0 from the information processing device CPU_CP is waited for.
In Step 10, for example, at a time point at which writing into all write physical addresses NXPAD stored in the write physical address table NXPADTBL is completed, the information processing circuit MNGER updates the physical segment table PSEGTBL (
In the update of the physical segment table PSEGTBL, the information processing circuit MNGER refers to a validity flag PVLD and the number of times of erasure PERC of a physical address in the physical address table PADTBL. Then, with a physical address, in which a validity flag PVLD is 0 (invalid), in the physical address table PADTBL as an object, the total number of invalid physical addresses TNIPA, the maximum number of times of erasure MXERC and an invalid physical offset address MXIPAD thereof, and the minimum number of times of erasure MNERC and an invalid physical offset address MNIPAD thereof are updated in each physical segment address SGAD. Also, with a physical address, in which a validity flag PVLD is 1 (valid), in the physical address table PADTBL as an object, the total number of valid physical addresses TNVPA, the maximum number of times of erasure MXERC and a valid physical offset address MXVPAD thereof, and the minimum number of times of erasure MNERC and a valid physical offset address MNVPAD thereof are updated in each physical segment address SGAD.
Also, the information processing circuit MNGER updates the write physical address table NXPADTBL. When the update of the write physical address table NXPADTBL is over, a writing request from the information processing device CPU_CP to the memory module NVMMD0 is waited for.
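A condensed, pseudocode-style sketch of Steps 1 to 14 of this writing flow is shown below; every helper is a stand-in for the table or device accesses described above and is declared but not implemented here, and the names, signatures, and retry limits are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the table and device accesses described in the text. */
uint32_t lptbl_lookup(uint32_t lad, bool *cpvld);      /* Step 2: search LPTBL        */
uint32_t nxpadtbl_next(uint32_t lad);                  /* Step 3: pick NXPAD to write */
bool nvm_write_and_verify(uint32_t nxpad,
                          const void *wdata, int nvr); /* Steps 5, 6, 12              */
void lptbl_invalidate(uint32_t lad);                   /* Step 13                     */
void lptbl_update(uint32_t lad, uint32_t nxpad);       /* Step 7                      */
void padtbl_update(uint32_t nxpad);                    /* Step 8: PERC + 1, PVLD = 1  */

static bool handle_write_request(uint32_t lad, const void *wdata)
{
    bool cpvld;
    uint32_t pad = lptbl_lookup(lad, &cpvld);          /* current assignment, if any  */

    for (int attempt = 0; attempt < 8; attempt++) {    /* bound the Step 14 retries   */
        uint32_t nxpad = nxpadtbl_next(lad);
        if (cpvld && pad != nxpad) {                   /* Steps 4, 11, 13             */
            lptbl_invalidate(lad);                     /* avoid duplicate assignments */
            cpvld = false;
        }
        if (nvm_write_and_verify(nxpad, wdata, 4)) {   /* verify up to Nvr times      */
            lptbl_update(lad, nxpad);                  /* Step 7: LAD -> NXPAD        */
            padtbl_update(nxpad);                      /* Step 8                      */
            return true;                               /* Steps 9, 10 follow          */
        }
        /* Step 14: this NXPAD could not be written; try another candidate. */
    }
    return false;
}
```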
In such a manner, the information processing circuit MNGER uses the write physical address table NXPADTBL when performing writing into the non-volatile memory devices NVM10 to NVM17. Thus, for example, it is possible to realize a writing operation at high speed compared to a case of searching the physical address table PADTBL for a physical address with the small number of times of erasure in each time of writing. Also, as illustrated in
Also, in an example of the address map range (ADMAP) in
The information processing circuit MNGER uses the write physical address table NXPADTBL1 with respect to the physical address PAD in the range of the first physical address area PRNG1 and updates this. Also, the information processing circuit MNGER uses the write physical address table NXPADTBL2 with respect to the physical address PAD in the second physical address area PRNG2 and updates this. To update the write physical address table NXPADTBL, a physical segment address is determined first and a physical offset address in the determined physical segment address is subsequently determined. As illustrated in
Thus, as illustrated in
Then, a physical segment address (SGADmn) having the minimum value (MNERCmn) and a physical offset address thereof (MNIPADmn) are determined as a first candidate to be registered into the write physical address table NXPADTBL (Step 24). Note that, to ensure that the physical segment address SGAD selected in Step 22 exists, the size of the physical address space is made larger than the size of the logical address space by at least the number of addresses that can be registered in the write physical address table NXPADTBL.
Then, the information processing circuit MNGER refers to the physical address table PADTBL (
On the other hand, when the value of the number of times of erasure PERC is larger than the threshold for the number of times of erasure ERCth, the information processing circuit MNGER temporarily removes the physical offset address PPAD, which is the current candidate, from the candidates and performs Step 32. In Step 32, the information processing circuit MNGER refers to the physical address table PADTBL and determines whether the number (Ninv) of physical offset addresses in the invalid state whose number of times of erasure is equal to or smaller than the threshold for the number of times of erasure ERCth in the physical segment address (SGADmn) is smaller than the number of addresses N that can be registered in the write physical address table NXPADTBL (Ninv<N). When the number is smaller, Step 33 is performed. When the number is larger, Step 34 is performed.
In Step 34, the information processing circuit MNGER performs calculation of the physical offset address PPAD that is the current candidate, generates a physical offset address PPAD to be a new candidate, and executes Step 25 again. In Step 34, a p value is added to the current physical offset address PPAD and a physical offset address PPAD to be a new candidate is calculated. The p value in Step 34 can be programmed and an optimal value is selected according to a minimum data size managed by the information processing circuit MNGER or a configuration of the non-volatile memory. In the present embodiment, for example, p=8 is used. In Step 33, the information processing circuit MNGER generates a new threshold for the number of times of erasure ERCth generated by addition of a certain value a to the threshold for the number of times of erasure ERCth and executes Step 25 again.
In Step 26, it is checked whether the physical offset address PPAD that becomes an object of registration in Step 25 is an address in the first physical address area PRNG1. When the physical offset address PPAD that becomes the object of registration is an address in the first physical address area PRNG1, Step 27 is executed. When the address is not an address in the first physical address area PRNG1 (that is, when the address is an address in the second physical address area PRNG2), Step 28 is executed.
In Step 27, the information processing circuit MNGER registers an address, in which the physical segment address (SGADmn) is combined with the physical offset address PPAD that becomes the object of registration, as a write physical address NXPAD into the write physical address table NXPADTBL1. In addition, a value of the validity flag NXPVLD (here, it is 0) of the write physical address NXPAD is registered and a value of the number of times of erasure (PERC) of the write physical address NXPAD is registered as the number of times of erasure NXPERC. Also, a value generated by addition of 1 to a current layer number LYC of the write physical address NXPAD is registered as a new layer number NXLYC. Although it is not specifically limited, N/2 pairs can be registered into the write physical address table NXPADTBL1 in ascending order of the entry number ENUM.
As illustrated in
In Step 28, the information processing circuit MNGER registers an address, in which the physical segment address (SGADmn) is combined with the physical offset address PPAD that is the object of registration, as the write physical address NXPAD into the write physical address table NXPADTBL2. In addition, a value of the validity flag NXPVLD (here, it is 0) of the write physical address NXPAD is registered and the number of times of erasure (PERC) and a current layer number LYC of the write physical address NXPAD are registered as the number of times of erasure NXPERC and a layer number NXLYC. Although it is not specifically limited, N/2 pairs can be registered into the write physical address table NXPADTBL2 in ascending order of the entry number ENUM. Note that the number of registered pairs in the write physical address tables NXPADTBL1 and NXPADTBL2 can be set arbitrarily by the information processing circuit MNGER and is set in such a manner that writing speed with respect to the non-volatile memory devices NVM10 to NVM17 becomes the highest.
In next Step 29, the information processing circuit MNGER checks whether registration is completed with respect to all pairs (all entry numbers) in the write physical address table NXPADTBL1. When the registration of all pairs is not completed, Step 32 is executed. When the registration of all pairs is completed, Step 30 is executed. In next Step 30, the information processing circuit MNGER checks whether registration of all pairs in the write physical address table NXPADTBL2 is completed. When the registration of all pairs is not completed, Step 32 is executed. When registration of all pairs is completed, the update of the write physical address table NXPADTBL is completed (Step 31).
When such an update flow is used, roughly, a physical address segment having a physical address with the minimum number of times of erasure is determined (Step 21 to Step 24) and physical addresses with the number of times of erasure equal to or smaller than a predetermined threshold are serially extracted with the smallest physical address as an origin in the physical address segment (Step 25, and Step 32 to Step 34). Here, when the number of extracted addresses is smaller than a predetermined number of registrations (Step 32), a threshold for the number of times of erasure is gradually increased (Step 33) and physical addresses are serially extracted in a similar manner (Step 25 and Step 34) until the number of extracted addresses satisfies the predetermined number of registrations (Step 32, Step 29, and Step 30). Accordingly, wear leveling (dynamic wear leveling) to perform leveling of the number of times of erasure of physical addresses in the invalid state (that is, physical addresses that are not currently assigned to logical addresses) can be realized.
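The flow of Step 21 to Step 34 can be summarized by the rough sketch in Python below. The dictionary-based physical address table, the PhysEntry record, and the simplifications noted in the comments (the Step-34 stride p is folded into a sorted scan) are assumptions made only for illustration; they are not the actual implementation of the information processing circuit MNGER.

```python
# A minimal sketch of the NXPADTBL update flow (Steps 21 to 34), under the
# assumptions stated above.
from dataclasses import dataclass

@dataclass
class PhysEntry:
    valid: bool        # corresponds to the validity flag PVLD
    erase_count: int   # corresponds to the number of times of erasure PERC

def update_nxpadtbl(padtbl, n_entries, ercth, a=1):
    """padtbl: {segment address: {offset address: PhysEntry}}.
    Returns n_entries (segment, offset) pairs for the next writes."""
    # Steps 21 to 24: choose the physical segment whose invalid physical
    # offset address has the minimum number of times of erasure.
    seg = min(((s, e.erase_count)
               for s, offs in padtbl.items()
               for e in offs.values() if not e.valid),
              key=lambda t: t[1])[0]
    invalid = {off: e for off, e in padtbl[seg].items() if not e.valid}
    assert len(invalid) >= n_entries   # guaranteed by sizing the address space
    # Steps 25 and 32 to 34 (simplified): collect invalid offsets whose erase
    # count is at or below the threshold ERCth, raising the threshold by a
    # (Step 33) until n_entries candidates exist.
    while True:
        picks = sorted(off for off, e in invalid.items()
                       if e.erase_count <= ercth)
        if len(picks) >= n_entries:
            return [(seg, off) for off in picks[:n_entries]]
        ercth += a

# Small illustrative run: one segment with sixteen invalid offsets.
tbl = {0: {i: PhysEntry(valid=False, erase_count=i % 3) for i in range(16)}}
print(update_nxpadtbl(tbl, n_entries=4, ercth=0))   # -> [(0, 0), (0, 3), (0, 6), (0, 9)]
```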
In each of
Although it is not specifically limited, the following is assumed. That is, there are eight chips of the non-volatile memories NVM10 to NVM17. In one chip of the non-volatile memory device, there are two chain memory array selection lines SL. In one chain memory array CY, there are eight memory cells and eight memory-cell selection lines LY. Also, in one memory bank BK, there are 528 memory arrays ARY. One chain memory array CY is selected in one memory array ARY. That is, 528 chain memory arrays CY are simultaneously selected in the one memory bank BK. There are four memory banks. In the first physical address area PRNG1 in
Assignment of an address in each of
The layer number LYC [2:0] corresponds to a column address COL [2:0]. The column address COL [2:0] corresponds to a memory-cell selection line LY [2:0]. A value of the layer number LYC [2:0] becomes a value of the memory-cell selection line LY [2:0] and data is written into a memory cell designated by the layer number LYC [2:0]. Also, data is read from a memory cell designated by the layer number LYC [2:0].
A physical address CPAD [0] corresponds to a column address COL [3]. The column address COL [3] corresponds to the chain memory array selection line SL [0]. A physical address CPAD [2:1] corresponds to a column address COL [5:4] and the column address COL [5:4] corresponds to a bit line BL [1:0]. The physical address PAD [c+0:0] corresponds to a column address COL [c+6:6] and the column address COL [c+6:6] corresponds to a bit line BL [c:2]. A physical address PAD [d+c+1:c+1] corresponds to a row address ROW [d+c+7:7] and the row address ROW [d+c+7:7] corresponds to the word line WL [d:0].
A physical address PAD [d+c+3:d+c+2] corresponds to a bank address BK [d+c+9:d+c+8] and the bank address BK [d+c+9:d+c+8] corresponds to a bank address BK [1:0]. A physical address PAD [d+c+6:d+c+4] corresponds to a chip address CHIPA [d+c+12:d+c+10] and the chip address CHIPA [d+c+12:d+c+10] corresponds to a chip address CHIPA [2:0].
Here, for example, a case of writing 512-byte main data and 16-byte redundant data is assumed.
It is assumed that a physical address PAD [d+c+6:d+c+4] is 3, a physical address PAD [d+c+3:d+c+2] is 2, a physical address PAD [d+c+1:c+1] is 8, a physical address PAD [c+0:0] is 0, a physical address CPAD [2:1] is 0, a physical address CPAD [0] is 0, and the layer number LYC [2:0] is 0. In this case, the information processing circuit MNGER in
That is, in a case of this example, in
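As a rough illustration of the PRNG1 address conversion described above, the following Python sketch assembles the chip address CHIPA, the bank address BK, the row address ROW, and the column address COL from the layer number LYC, the physical offset address CPAD, and the physical address PAD. The field widths c and d, the helper names, and the example values at the end are assumptions chosen only so that the worked example above (CHIPA=3, BK=2, ROW=8, COL=0) can be reproduced.

```python
# Minimal sketch of the PRNG1 address conversion; field widths are assumed.
def bits(value, hi, lo):
    """Extract value[hi:lo] as an integer."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

def prng1_to_nvm(pad, cpad, lyc, c, d):
    col = lyc & 0x7                       # COL[2:0]   <- LYC[2:0]  (memory-cell line LY)
    col |= bits(cpad, 0, 0) << 3          # COL[3]     <- CPAD[0]   (chain select line SL)
    col |= bits(cpad, 2, 1) << 4          # COL[5:4]   <- CPAD[2:1] (bit lines BL[1:0])
    col |= bits(pad, c, 0) << 6           # COL[c+6:6] <- PAD[c:0]  (bit lines BL[c:2])
    row = bits(pad, d + c + 1, c + 1)     # ROW        <- PAD[d+c+1:c+1] (word lines WL[d:0])
    bank = bits(pad, d + c + 3, d + c + 2)
    chip = bits(pad, d + c + 6, d + c + 4)
    return chip, bank, row, col

# Worked example from the text, assuming c = 3 and d = 5 as illustrative widths.
pad = (3 << 12) | (2 << 10) | (8 << 4)
print(prng1_to_nvm(pad, cpad=0, lyc=0, c=3, d=5))   # -> (3, 2, 8, 0)
```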
On the other hand, in
The physical address CPAD [2:0] corresponds to a column address COL [2:0] and the column address COL [2:0] corresponds to a memory-cell selection line LY [2:0]. A value of the physical address CPAD [2:0] becomes a value of the memory-cell selection line LY [2:0] and data is written into a memory cell designated by the physical address CPAD [2:0]. Also, data is read from the memory cell designated by the physical address CPAD [2:0].
A physical address PAD [0] corresponds to a column address COL [3] and the column address COL [3] corresponds to a chain memory array selection line SL [0]. A physical address PAD [a+1:1] corresponds to a column address COL [a+1:1]. The column address COL [a+1:1] corresponds to a bit line BL [a:0]. A physical address PAD [b+a+2:a+2] corresponds to a row address ROW [b+a+2:2] and the row address ROW [b+a+2:2] corresponds to a word line WL [b:0].
A physical address PAD [b+a+4:b+a+3] corresponds to a bank address BK [b+a+4:b+a+3] and the bank address BK [b+a+4:b+a+3] corresponds to a bank address BK [1:0]. A physical address PAD [b+a+7:b+a+5] corresponds to a chip address CHIPA [b+a+7:b+a+5] and the chip address CHIPA [b+a+7:b+a+5] corresponds to the chip address CHIPA [2:0].
Here, for example, a case of writing 512-byte main data and 16-byte redundant data is assumed. It is assumed that a physical address PAD [b+a+7:b+a+5] is 3, a physical address PAD [b+a+4:b+a+3] is 2, a physical address PAD [b+a+2:a+2] is 8, a physical address PAD [a+1:1] is 0, a physical address PAD [0] is 0, and a physical address CPAD [2:0] is 0.
In this case, the information processing circuit MNGER in
That is, in a case of this example, in
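A companion sketch for the second physical address area PRNG2 is given below; here the memory-cell selection comes from CPAD [2:0] instead of a separate layer number. The field widths a and b, and the exact column-bit positions assigned to the bit lines, are assumptions made for illustration.

```python
# Minimal sketch of the PRNG2 address conversion, under the assumptions above.
def bits(value, hi, lo):
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

def prng2_to_nvm(pad, cpad, a, b):
    col = cpad & 0x7                      # COL[2:0] <- CPAD[2:0] (memory-cell line LY)
    col |= (pad & 0x1) << 3               # COL[3]   <- PAD[0]    (chain select line SL)
    col |= bits(pad, a + 1, 1) << 4       # bit lines BL[a:0] (column placement assumed)
    row = bits(pad, b + a + 2, a + 2)     # word lines WL[b:0]
    bank = bits(pad, b + a + 4, b + a + 3)
    chip = bits(pad, b + a + 7, b + a + 5)
    return chip, bank, row, col

# Worked example from the text, assuming a = 3 and b = 5 as illustrative widths.
pad = (3 << 13) | (2 << 11) | (8 << 5)
print(prng2_to_nvm(pad, cpad=0, a=3, b=5))   # -> (3, 2, 8, 0)
```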
Next, the information processing circuit MNGER checks whether a value of the variable q is equal to or larger than n (Step 45). When the value of the variable q is smaller than n, a new physical address CPAD generated by addition of 1 to a physical address CPAD is calculated (Step 47) and Step 43 is executed again. Then, Step 44 is executed. When the value of the variable q is equal to or larger than n, the sector count SEC is reduced by one and the value of the variable q is set to 0 (Step 46). Then, Step 51 is executed. In Step 51, it is checked whether a value of the sector count SEC is equal to or smaller than 0. When the value of the sector count SEC is larger than 0, a new physical address PAD generated by addition of 1 to a physical address PAD is calculated (Step 52). Then, the processing returns to Step 42 and is continued. When the value of the sector count SEC is equal to or smaller than 0, writing or reading of data is completed (Step 53).
In a case where 1 is added to the physical address CPAD in Step 47, a chain memory array selection line SL or a bit line BL (that is, position of chain memory array CY) is changed as it is understood from
In Step 48, the information processing circuit MNGER performs address conversion illustrated in
When 1 is added to the physical address CPAD in Step 47, a memory-cell selection line LY (that is, position of memory cell in chain memory array CY) is changed as it is understood from
Note that an n value in Step 45 or an r value in Step 50 can be programmed. An optimal value is selected according to a minimum data size managed by the information processing circuit MNGER or a configuration of the non-volatile memory device. In the present embodiment, for example, n=r=7 is used.
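The loop of Step 41 to Step 53 can be pictured with the following sketch. The access_one_unit callback, the initialization of the variable q, and the exact point at which q is incremented are assumptions, since the text does not spell them out, so the number of units accessed per sector in this sketch should be read as illustrative only.

```python
# Minimal sketch of the access loop of Steps 41 to 53, under the assumptions above.
def transfer(pad, cpad, sec, n, access_one_unit):
    q = 0                                   # assumed initialization (Step 41)
    while True:
        access_one_unit(pad, cpad)          # Steps 42 to 44: address conversion and access
        q += 1
        if q < n:                           # Step 45
            cpad += 1                       # Step 47: next chain memory array / memory cell
            continue
        sec -= 1                            # Step 46
        q = 0
        if sec <= 0:                        # Step 51
            return                          # Step 53: writing or reading completed
        pad += 1                            # Step 52: next physical address

# Example: one sector, n = 7, printing each accessed (PAD, CPAD) pair.
transfer(pad=0, cpad=0, sec=1, n=7,
         access_one_unit=lambda p, c: print(f"PAD={p} CPAD={c}"))
```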
Each of
The address conversion table LPTBL includes a physical address PAD corresponding to a logical address LAD, and a validity flag CPVLD and a layer number LYC of the physical address. Also, the address conversion table LPTBL is stored into the random access memory RAM. The non-volatile memory device stores data DATA, a logical address LAD, a data validity flag DVF, and a layer number LYC that correspond to the physical address PAD.
In
The writing request WQ0 includes a logical address value (LAD=0), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA0). The writing request WQ1 includes a logical address value (LAD=1), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA1). The writing request WQ2 includes a logical address value (LAD=2), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA2). The writing request WQ3 includes a logical address value (LAD=3), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA3). When the writing requests WQ0, WQ1, WQ2, and WQ3 are input into the control circuit MDLCT0, an interface circuit HOST_IF transfers these writing requests to the buffer BUF0.
Then, the information processing circuit MNGER serially reads the writing requests WQ0, WQ1, WQ2, and WQ3 stored in the buffer BUF0. Since the logical address values (LAD) of the writing requests WQ0, WQ1, WQ2, and WQ3 are respectively 0, 1, 2, and 3, the information processing circuit MNGER reads information corresponding to these from the address conversion table LPTBL, which is stored in the random access memory RAM, through a memory control circuit RAMC. That is, a value of a physical address (PAD), a value of a validity flag (CPVLD), and a layer number LYC are read from each of an address 0, an address 1, an address 2, and an address 3 of the logical address LAD in the address conversion table LPTBL.
As illustrated in
Then, the information processing circuit MNGER generates ECC codes ECC0, 1, 2, and 3 respectively corresponding to write data DATA0, 1, 2, and 3 of the writing requests WQ0, 1, 2, and 3 and generates, according to a data format illustrated in
The information processing circuit MNGER respectively writes the write data WDATA0, 1, 2, and 3 into four physical addresses in the non-volatile memory device. The redundant data RDATA0, 1, 2, and 3 respectively include the ECC codes ECC0, 1, 2, and 3. In addition, a data inversion flag value (INVFLG=0), a writing flag value (WTFLG=0), an ECC flag value (ECCFLG=0), a state information value (STATE=1), an area information value (AREA=1), a data write layer information value (LYN=1), a bad block information value (BADBLK=0), and a preliminary area value (RSV=0) are included in common.
Note that in a case where a writing request is for the logical address area LRNG1, the area information value (AREA) becomes 1. In a case where a writing request is for the logical address area LRNG2, the area information value (AREA) becomes 2. Also, when a layer number NXLYC value read from the write physical address table NXPADTBL1 is 0 (actually, "10"), LYN [n:1] becomes 0 and LYN [0] becomes 1 in the data write layer information LYN [n:0]. This indicates that data is written into the phase-change memory cell CL0 in the chain memory array CY.
In addition, according to decimal numbers 0, 1, 2, and 3 of the write physical address values (NXPAD), the information processing circuit MNGER performs writing on the non-volatile memory devices NVM10 to NVM17 through the arbitration circuit ARB and the memory control circuits NVCT10 to NVCT17. That is, to the address 0 of the physical address PAD of the non-volatile memory device NVM, the write data WDATA0, a logical address value (LAD=0), and a layer number (LYC=0) corresponding to the writing request WQ0 are written and 1 is written as a value of a data validity flag (DVF). To the address 1 of the physical address PAD of the non-volatile memory device NVM, the write data WDATA1, a logical address value (LAD=1), and a layer number (LYC=0) corresponding to the writing request WQ1 are written and 1 is written as a value of a data validity flag (DVF). Similarly, to the address 2 of the physical address PAD, the write data WDATA2, a logical address value (LAD=2), a data validity flag (DVF=1), and a layer number (LYC=0) are written. To the address 3 of the physical address PAD, the write data WDATA3, a logical address value (LAD=3), a data validity flag (DVF=1), and a layer number (LYC=0) are written.
Finally, the information processing circuit MNGER updates the address conversion table LPTBL, which is stored in the random access memory RAM, through the memory control circuit RAMC. That is, to the address 0 of the logical address LAD, a physical address (PAD=0), a validity flag (CPVLD=1), and a layer number (LYC=0) after the assignment are written. To the address 1 of the logical address LAD, a physical address (PAD=1), a validity flag (CPVLD=1), and a layer number (LYC=0) after the assignment are written. To the address 2 of the logical address LAD, a physical address (PAD=2), a validity flag (CPVLD=1), and a layer number (LYC=0) after the assignment are written. To the address 3 of the logical address LAD, a physical address (PAD=3), a validity flag (CPVLD=1), and a layer number (LYC=0) after the assignment are written.
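The handling of one writing request, as walked through above for the writing requests WQ0 to WQ3, can be sketched as follows. The dictionary-based tables, the nvm model, and the helper names are assumptions made for illustration only.

```python
# Minimal sketch of servicing one writing request with LPTBL and NXPADTBL,
# under the assumptions stated above.
def service_write(lptbl, nxpadtbl, nvm, lad, data):
    # Look up the current assignment of the logical address LAD.
    entry = lptbl.get(lad)
    if entry is not None and entry["CPVLD"] == 1:
        # The logical address was already assigned: invalidate the old data.
        nvm[entry["PAD"]]["DVF"] = 0
    # Take the next pre-selected write physical address and layer number.
    nxpad, nxlyc = nxpadtbl.pop(0)
    # Write main data, the logical address, the layer number, and DVF = 1.
    nvm[nxpad] = {"DATA": data, "LAD": lad, "LYC": nxlyc, "DVF": 1}
    # Finally, update the address conversion table.
    lptbl[lad] = {"PAD": nxpad, "CPVLD": 1, "LYC": nxlyc}

# Example corresponding to the writing requests WQ0 to WQ3 above.
lptbl, nvm = {}, {}
nxpadtbl = [(0, 0), (1, 0), (2, 0), (3, 0)]     # (NXPAD, NXLYC) pairs
for lad, data in enumerate(["DATA0", "DATA1", "DATA2", "DATA3"]):
    service_write(lptbl, nxpadtbl, nvm, lad, data)
print(lptbl[1])   # -> {'PAD': 1, 'CPVLD': 1, 'LYC': 0}
```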
In
The writing request WQ4 includes a logical address value (LAD=0), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA4). The writing request WQ5 includes a logical address value (LAD=1), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA5). The writing request WQ6 includes a logical address value (LAD=4), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA6). The writing request WQ7 includes a logical address value (LAD=5), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA7). The writing request WQ8 includes a logical address value (LAD=2), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA8). The writing request WQ9 includes a logical address value (LAD=3), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA9). When the writing requests WQ4, WQ5, WQ6, WQ7, WQ8, and WQ9 are input into the control circuit MDLCT0, the interface circuit HOST_IF transfers these writing requests to the buffer BUF0.
Next, the information processing circuit MNGER serially reads the writing requests WQ4, WQ5, WQ6, WQ7, WQ8, and WQ9 stored in the buffer BUF0. Subsequently, according to the data format illustrated in
The redundant data RDATA4, 5, 6, 7, 8, and 9 respectively include ECC codes ECC4, 5, 6, 7, 8, and 9 generated by the information processing circuit MNGER with utilization of the write data DATA4, 5, 6, 7, 8, and 9. In addition, a data inversion flag value (INVFLG=0), a writing flag value (WTFLG=0), an ECC flag value (ECCFLG=0), a state information value (STATE=1), an area information value (AREA=1), a bad block information value (BADBLK=0), and a preliminary area value (RSV=0) are included in common.
The information processing circuit MNGER respectively writes the write data WDATA4, 5, 6, 7, 8, and 9 into six physical addresses in the non-volatile memory device. Here, since the logical address values (LAD) of the writing requests WQ4, 5, 6, 7, 8, and 9 are respectively 0, 1, 4, 5, 2, and 3, the information processing circuit MNGER reads information corresponding to these from the address conversion table LPTBL, which is stored in the random access memory RAM, through the memory control circuit RAMC. That is, a physical address value (PAD), a validity flag value (CPVLD), and a layer number LYC are read from each of the address 0, the address 1, the address 4, the address 5, the address 2, and the address 3 of the logical address LAD in the address conversion table LPTBL.
In the address conversion table LPTBL in
Also, in the address conversion table LPTBL in
On the other hand, in the address conversion table LPTBL in
Next, the information processing circuit MNGER reads write physical address values (NXPAD) and layer numbers NXLYC stored in 4 to 9 of the entry number ENUM in the write physical address table NXPADTBL1 and respectively assigns these to the address 0, the address 1, the address 4, the address 5, the address 2, and the address 3 of the logical address LAD. In this example, the write physical address values (NXPAD) stored in 4 to 9 of the entry number ENUM are respectively 4, 5, 6, 7, 8, and 9 and the layer numbers NXLYC are respectively 1, 1, 1, 1, 1, and 1.
Then, the information processing circuit MNGER performs writing into the non-volatile memory devices NVM10 to NVM17 through the arbitration circuit ARB and the memory control circuits NVCT10 to NVCT17 according to the write physical address values (NXPAD) 4, 5, 6, 7, 8, and 9. That is, to the address 4 of the physical address PAD of the non-volatile memory device NVM, write data WDATA4, a logical address value (LAD=0), and a layer number (LYC=1) corresponding to the writing request WQ4 are written and 1 is written as a value of a data validity flag (DVF). To the address 5 of the physical address PAD of the non-volatile memory device NVM, write data WDATA5, a logical address value (LAD=1), and a layer number (LYC=1) corresponding to the writing request WQ5 are written and 1 is written as a value of a data validity flag (DVF).
Also, to the address 6 of the physical address PAD of the non-volatile memory device NVM, the information processing circuit MNGER writes write data WDATA6, a logical address value (LAD=4), and a layer number (LYC=1) corresponding to the writing request WQ6 and writes 1 as a value of a data validity flag (DVF). Similarly, to the address 7 of the physical address PAD of the non-volatile memory device NVM, write data WDATA7, a logical address value (LAD=5), and a layer number (LYC=1) corresponding to the writing request WQ7 are written and 1 is written as a value of a data validity flag (DVF).
Moreover, to the address 8 of the physical address PAD of the non-volatile memory device NVM, the information processing circuit MNGER writes write data WDATA8, a logical address value (LAD=2), and a layer number (LYC=1) corresponding to the writing request WQ8 and writes 1 as a value of a data validity flag (DVF). Similarly, to the address 9 of the physical address PAD of the non-volatile memory device NVM, write data WDATA9, a logical address value (LAD=3), and a layer number (LYC=1) corresponding to the writing request WQ9 are written and 1 is written as a value of a data validity flag (DVF).
Each of
In
The writing request WQ0 includes a logical address value (LAD=“800000”) in a hexadecimal number, a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA0). The writing request WQ1 includes a logical address value (LAD=“800001”) in a hexadecimal number, a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA1). The writing request WQ2 includes a logical address value (LAD=“800002”) in a hexadecimal number, a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA2). The writing request WQ3 includes a logical address value (LAD=“800003”) in a hexadecimal number, a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA3).
When the writing requests WQ0, WQ1, WQ2, and WQ3 are input into the control circuit MDLCT0, the interface circuit HOST_IF transfers these writing requests to the buffer BUF0. Then, the information processing circuit MNGER serially reads the writing requests WQ0, WQ1, WQ2, and WQ3 stored in the buffer BUF0. Here, the information processing circuit MNGER refers to the address conversion table LPTBL, which is stored in the random access memory RAM, through the memory control circuit RAMC and reads various kinds of information corresponding to the writing requests WQ0, 1, 2, and 3. More specifically, a physical address value (PAD) and a validity flag CPVLD are read from each of an address "800000," an address "800001," an address "800002," and an address "800003" of the logical address LAD in the address conversion table LPTBL.
It is understood that no physical address PAD is assigned to each of the address “800000,” the address “800001,” the address “800002,” and the address “800003” of the logical address LAD at first since all of the read validity flags CPVLD are 0 as illustrated in
The redundant data RDATA0, 1, 2, and 3 respectively include ECC codes ECC0, 1, 2, and 3 generated by the information processing circuit MNGER with utilization of the write data DATA0, 1, 2, and 3. In addition, a data inversion flag value (INVFLG=0), a writing flag value (WTFLG=0), an ECC flag value (ECCFLG=0), a state information value (STATE=1), an area information value (AREA=1), a bad block information value (BADBLK=0), and a preliminary area value (RSV=0) are included in common.
The information processing circuit MNGER respectively writes the write data WDATA0, 1, 2, and 3 into four physical addresses of the non-volatile memory device. Here, for example, the information processing circuit MNGER reads write physical addresses NXPAD stored in 16 to 19 of the entry number ENUM in the write physical address table NXPADTBL2 and assigns these addresses to logical addresses according to the writing requests WQ0 to WQ3. Here, it is assumed that the write physical address values (NXPAD) are “2800000,” “2800001,” “2800002,” and “2800003.” The information processing circuit MNGER respectively assigns these to the address “800000,” the address “800001,” the address “800002,” and the address “800003” of the logical address LAD.
According to the write physical address values (NXPAD), the information processing circuit MNGER performs writing into the non-volatile memory devices NVM10 to NVM17 through the arbitration circuit ARB and the memory control circuits NVCT10 to NVCT17. More specifically, to the address “2800000” of the physical address PAD of the non-volatile memory device, write data WDATA0 and a logical address value (LAD=“800000”) corresponding to the writing request WQ0 are written and 1 is written as a data validity flag DVF. To the address “2800001” of the physical address PAD of the non-volatile memory device, write data WDATA1 and a logical address value (LAD=“800001”) corresponding to the writing request WQ1 are written and 1 is written as a data validity flag DVF.
Also, to the address “2800002” of the physical address PAD of the non-volatile memory device, the information processing circuit MNGER writes write data WDATA2 and a logical address value (LAD=“800002”) corresponding to the writing request WQ2 and writes 1 as a data validity flag DVF. Similarly, to the address “2800003” of the physical address PAD of the non-volatile memory device, write data WDATA3 and a logical address value (LAD=“800003”) corresponding to the writing request WQ3 are written and 1 is written as a data validity flag DVF.
Finally, the information processing circuit MNGER updates the address conversion table LPTBL, which is stored in the random access memory RAM, through the memory control circuit RAMC. More specifically, to the address “800000” of the logical address LAD in the address conversion table LPTBL, a physical address value (PAD=“2800000”) and a validity flag value (CPVLD=1) are written. Also, to the address “800001” of the logical address LAD, a physical address value (PAD=“2800001”) and a validity flag value (CPVLD=1) are written. Similarly, to the address “800002” of the logical address LAD, a physical address value (PAD=“2800002”) and a validity flag value (CPVLD=1) are written. To the address “800003” of the logical address LAD, a physical address value (PAD=“2800003”) and a validity flag value (CPVLD=1) are written.
In
The writing request WQ4 includes a logical address value (LAD=“800000”), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA4). The writing request WQ5 includes a logical address value (LAD=“800001”), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA5). The writing request WQ6 includes a logical address value (LAD=“800004”), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA6). The writing request WQ7 includes a logical address value (LAD=“800005”), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA7). The writing request WQ8 includes a logical address value (LAD=“800002”), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA8). The writing request WQ9 includes a logical address value (LAD=“800003”), a data writing instruction (WRT), a sector count value (SEC=1), and write data (DATA9).
When the writing requests WQ4, WQ5, WQ6, WQ7, WQ8, and WQ9 are input into the control circuit MDLCT0, the interface circuit HOST_IF transfers these writing requests to the buffer BUF0. Then, the information processing circuit MNGER serially reads the writing requests WQ4, WQ5, WQ6, WQ7, WQ8, and WQ9 stored in the buffer BUF0. Then, according to the data format illustrated in
The write data WDATA4 includes main data MDATA4, which includes the write data DATA4, and redundant data RDATA4 thereof. The write data WDATA5 includes main data MDATA5, which includes the write data DATA5, and redundant data RDATA5 thereof. The write data WDATA6 includes main data MDATA6, which includes the write data DATA6, and redundant data RDATA6 thereof. The write data WDATA7 includes main data MDATA7, which includes the write data DATA7, and redundant data RDATA7 thereof. The write data WDATA8 includes main data MDATA8, which includes the write data DATA8, and redundant data RDATA8 thereof. The write data WDATA9 includes main data MDATA9, which includes the write data DATA9, and redundant data RDATA9.
The redundant data RDATA4, 5, 6, 7, 8, and 9 respectively include ECC codes ECC 4, 5, 6, 7, 8, and 9 generated by the information processing circuit MNGER with utilization of the write data DATA4, 5, 6, 7, 8, and 9. In addition, a data inversion flag value (INVFLG=0), a writing flag value (WTFLG=0), an ECC flag value (ECCFLG=0), a state information value (STATE=1), an area information value (AREA=1), a bad block information value (BADBLK=0), and a preliminary area value (RSV=0) are included in common.
The information processing circuit MNGER respectively writes the write data WDATA4, 5, 6, 7, 8, and 9 into six physical addresses of the non-volatile memory device. Here, the information processing circuit MNGER refers to the address conversion table LPTBL, which is stored in the random access memory RAM, through a memory control circuit RAMC and reads various kinds of information corresponding to the writing requests WQ4, 5, 6, 7, 8, and 9. More specifically, a physical address PAD and a validity flag CPVLD are read from each of the address “800000,” the address “800001,” the address “800004,” the address “800005,” the address “800002,” and the address “800003” of the logical address LAD in the address conversion table LPTBL.
In the address conversion table LPTBL in
On the other hand, in the address conversion table LPTBL in
Also, in the address conversion table LPTBL in
Then, according to the writing requests WQ4 to WQ9, the information processing circuit MNGER reads write physical addresses NXPAD stored in 20 to 25 of the entry number ENUM in the write physical address table NXPADTBL2 and assigns these to logical addresses. Here, it is assumed that the write physical address values (NXPAD) are “2800004,” “2800005,” “2800006,” “2800007,” “2800008,” and “2800009.” Then, these values are respectively assigned to the address “800000,” the address “800001,” the address “800004,” the address “800005,” the address “800002,” and the address “800003” of the logical address LAD.
Then, according to the assignment of these physical addresses, the information processing circuit MNGER performs writing on the non-volatile memory devices NVM10 to NVM17 through the arbitration circuit ARB and the memory control circuits NVCT10 to NVCT17. More specifically, to the address “2800004” of the physical address PAD of the non-volatile memory device NVM, write data WDATA4 and a logical address value (LAD=“800000”) corresponding to the writing request WQ4 are written and 1 is written into a data validity flag DVF. To the address “2800005” of the physical address PAD, write data WDATA5 and a logical address value (LAD=“800001”) corresponding to the writing request WQ5 are written and 1 is written into a data validity flag DVF.
Similarly, to the address “2800006” of the physical address PAD, write data WDATA6 and a logical address value (LAD=“800004”) corresponding to the writing request WQ6 are written and 1 is written into a data validity flag DVF. To the address “2800007” of the physical address PAD, write data WDATA7 and a logical address value (LAD=“800005”) corresponding to the writing request WQ7 are written and 1 is written into a data validity flag DVF. To the address “2800008” of the physical address PAD, write data WDATA8 and a logical address value (LAD=“800002”) corresponding to the writing request WQ8 are written and 1 is written into a data validity flag DVF. To the address “2800009” of the physical address PAD, write data WDATA9 and a logical address value (LAD=“800003”) corresponding to the writing request WQ9 are written and 1 is written as a data validity flag DVF. Finally, the information processing circuit MNGER updates the address conversion table LPTBL, which is stored in the random access memory RAM, into a state illustrated in
Then, the information processing circuit MNGER decodes the logical address value (LAD=0), the data-reading instruction (RD), and the sector count value (SEC=1), refers to the address conversion table LPTBL stored in the random access memory RAM, and reads various kinds of information. More specifically, in the address conversion table LPTBL, a physical address value PAD (such as PAD=0) stored at an address 0 of the logical address LAD, and a validity flag CPVLD and a layer number LYC corresponding to the physical address PAD are read (Step 62). Then, it is checked whether the read validity flag CPVLD is 1 (Step 63).
When the validity flag CPVLD is 0, the information processing circuit MNGER recognizes that no physical address PAD is assigned to the logical address value (LAD=0). In this case, it is not possible to read data from the non-volatile memory device NVM. Thus, through the interface circuit HOST_IF, the information processing circuit MNGER informs the information processing device CPU_CP of generation of an error (Step 75).
The memory module NVMMD0 of this embodiment includes a normal mode, an erasure priority mode, and a reading priority mode. Although it is not specifically limited, these modes are set into the memory module NVMMD0 by the information processing device CPU_CP. In the flow in
When it is determined in Step 63 that the read validity flag CPVLD is 1, batch-erasure in the erasure area is performed in Step 64. When the batch-erasure in the erasure area is completed in Step 64, Step 65 is executed next. Note that the erasure area that is erased in Step 64 is arbitrated by the arbitration circuit ARB in such a manner as to be an area different from an area where reading is performed. Also, the erasure area the batch-erasure of which is performed here is an area excluding the dummy chain memory array DCY designated by the dummy chain memory array designation information XYDMC. That is, an erasing operation is not performed with respect to the dummy chain memory array DCY provided in a manner physically adjacent to the erasure area where the batch-erasure is performed.
When the information processing circuit MNGER determines that the logical address value (LAD=0) corresponds to the physical address value PAD (PAD=0), Step 65 is executed after the erasure operation in Step 64 is completed. When the physical address value PAD (PAD=0) corresponding to the logical address value (LAD=0) is an address in the first physical address area PRNG1, the physical address value PAD (PAD=0), a physical address value CPAD (CPAD=0), and a layer number LYC are converted into the chip address CHIPA, the bank address BK, the row address ROW, and the column address COL of the non-volatile memory device NVM illustrated in
Then, the information processing circuit MNGER reads a logical address area LRNG in SSD configuration information (SDCFG) stored in the non-volatile memory NVM. Then, it is checked to which logical address area LRNG the logical address value (LAD=0) belongs. Moreover, a value of the writing flag WTFLG included in the read redundant data RDATA0 is checked in Step 66. That is, as described with reference to
When the value of the writing flag WTFLG is 0 as a result of the checking, Step 72 is executed next. When the value is 1, Step 67 is executed next. When the value is 2, Step 68 is executed next. When the value is 3, Step 70 is executed next. Similarly, when the value of the writing flag WTFLG is 2_1, Step 69 is executed next. When the value is 3_2, Step 71 is executed next.
When the writing flag WTFLG is 0, data is written into the non-volatile memory device NVM without processing. Thus, in Step 72, the read data (main data MDATA0) is sent to Step 73. When the writing flag WTFLG is 1, data is inverted when being written. Thus, in Step 67, the read data (main data MDATA0) is inverted and sent to Step 73. Also, when the writing flag WTFLG is 2, data is compressed and written. Thus, in Step 68, the read data (main data MDATA0) is decompressed (Decomp) and sent to Step 73. When the writing flag WTFLG is 3, data is coded (code) and written. Thus, in Step 70, the read data (main data MDATA0) is decoded (Decode) and sent to Step 73.
When the writing flag WTFLG is 2_1, the data is compressed, inverted, and written. Thus, in Step 69, the read data (main data MDATA0) is inverted, decompressed (Decomp), and sent to Step 73. When the writing flag WTFLG is 3_2, the read data is coded and compressed. Thus, in Step 71, the read data (main data MDATA0) is decompressed (Decomp), decoded (Decode), and sent to Step 73.
In such a manner, processing corresponding to a value of the read writing flag WTFLG is executed in Step 67 to Step 72 and main data (MDATA0) and an ECC code (ECC0) to which the processing corresponding to the writing method is applied are acquired. In Step 73, the information processing circuit MNGER checks whether there is an error in the main data (MDATA0) by using the ECC code (ECC0). When there is an error, the error is corrected. When there is no error or when the error is corrected, data without an error is transferred to the information processing device CPU_CP through the interface circuit HOST_IF (Step 74).
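The reverse processing of Step 66 to Step 73 can be sketched as follows. The invert, decompress, and decode helpers (zlib is used here merely as a stand-in compressor), as well as the textual writing-flag values, are assumptions; the actual compression and coding methods of the module are not specified here.

```python
# Minimal sketch of restoring read data according to the writing flag WTFLG,
# under the assumptions stated above.
import zlib

def invert(b):        return bytes(x ^ 0xFF for x in b)
def decompress(b):    return zlib.decompress(b)
def decode(b):        return b           # placeholder for the coding method

def restore_read_data(mdata, wtflg):
    if wtflg == "0":   return mdata                      # Step 72: stored as-is
    if wtflg == "1":   return invert(mdata)              # Step 67: stored inverted
    if wtflg == "2":   return decompress(mdata)          # Step 68: stored compressed
    if wtflg == "3":   return decode(mdata)              # Step 70: stored coded
    if wtflg == "2_1": return decompress(invert(mdata))  # Step 69: inverted after compression
    if wtflg == "3_2": return decode(decompress(mdata))  # Step 71: compressed after coding
    raise ValueError("unknown writing flag")

# Example: what a WTFLG=2_1 write would store, and its restoration.
stored = invert(zlib.compress(b"DATA0"))
print(restore_read_data(stored, "2_1"))   # -> b'DATA0'
```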
Although it is not specifically limited, in a case of performing a reading operation, the reading operation is not executed on a dummy chain memory array DCY arranged in a periphery (physically adjacent area) of an area to be an object of the reading operation (erasure area). Accordingly, it becomes possible to reduce the number of cells selected in the reading operation and to increase speed of the reading operation.
In
In
After the main data MDATA1 and the redundant data RDATA1 are read, the erasing operation that is temporarily stopped is resumed in Step 87. Also, when it is determined in Step 84 that the erasing operation is not executed, the main data (MDATA1) and the redundant data (RDATA1) are read in Step 97 similarly to Step 86.
The main data (MDATA1) and the redundant data (RDATA1) read in Step 86 or Step 97 are sent to Step 88 and processing similar to the processing described in
In this embodiment, since the erasing operation is temporarily stopped, it becomes possible to reduce response time of the reading operation. Also, in this embodiment, in a case of performing a reading operation, the reading operation is not executed on a dummy chain memory array DCY arranged in a periphery (physically adjacent area) of an area to be an object of the reading operation (erasure area). Accordingly, it becomes possible to reduce the number of cells selected in the reading operation and to increase speed of the reading operation.
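A minimal sketch of the reading priority mode is given below. The controller object and its method names are assumptions; only the ordering (suspend an in-progress erasing operation, read, then resume) follows the flow described above.

```python
# Minimal sketch of a read in the reading priority mode, under the
# assumptions stated above.
def priority_read(controller, pad):
    erase_was_running = controller.erase_in_progress(pad)   # Step 84
    if erase_was_running:
        controller.suspend_erase(pad)                        # temporarily stop the erasure
    mdata, rdata = controller.read(pad)                      # Step 86 or Step 97
    if erase_was_running:
        controller.resume_erase(pad)                         # Step 87: resume the erasure
    return mdata, rdata
```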
Also, an example of setting the memory module NVMMD0 to the reading priority mode has been described. However, alternatively, a reading priority command may be prepared as a command to be supplied to the memory module NVMMD0. In this case, the memory module NVMMD0 is configured in such a manner that the flow in
First, a writing request (WQ01) including a logical address value (LAD), a data writing instruction (WRT), a sector count value (SEC=1), and 512-byte write data (DATA0) is input into the information processing circuit MNGER by the information processing device CPU_CP through the interface circuit HOST_IF and stored into the buffer BUF0 (Step 101). The information processing circuit MNGER uses the address map range (ADMAP) stored in the random access memory RAM and determines whether the logical address value (LAD) is a logical address value in the logical address area LRNG1 or a logical address value in the logical address area LRNG2. Also, writing method selection information WRTFLG is read. Moreover, a write physical address NXPAD corresponding to a logical address is read from the write physical address table NXPADTBL (Step 102).
According to the read writing method selection information WRTFLG, the information processing circuit MNGER selects a writing method in Step 103. That is, according to contents of the writing method selection information WRTFLG, one of Step 104 to Step 109 is selected. When the writing method selection information WRTFLG is 0, Step 109 is selected as a method of writing. In this case, write data is prepared as write data wdata without being processed and ECC data that is based on the write data wdata is generated in Step 115. Also, in Step 115, a value of the writing flag WTFLG is generated as 0 (WTFLG 0). When the writing method selection information WRTFLG is 1, Step 104 is selected as a writing method. In this case, in Step 104, write data is inverted. The inverted data is prepared as write data wdata in Step 110 and ECC data that is based on the write data wdata is generated. Also, in Step 110, a value of the writing flag WTFLG is generated as 1 (WTFLG 1).
When the writing method selection information WRTFLG is 2, Step 105 is selected. In this case, write data is compressed (Comp) in Step 105. In Step 111, the compressed write data is set as write data wdata and ECC data that is based on the write data wdata is generated. Moreover, in Step 111, a value of the writing flag WTFLG is generated as 2 (WTFLG 2). When the writing method selection information WRTFLG is 3, Step 107 is selected. In this case, write data is coded (Code). The coded write data is set as write data wdata in Step 113 and ECC data that is based on the write data wdata is generated. Also, in Step 113, a value of the writing flag WTFLG is generated as 3 (WTFLG 3).
When the writing method selection information WRTFLG is 2_1, Step 106 is selected. In this case, write data is compressed and inverted in Step 106. The compressed and inverted write data is set as write data wdata in Step 112 and ECC data that is based on the write data wdata is generated. Also, in Step 112, a value of the writing flag WTFLG is generated as 2_1 (WTFLG 2_1). When the writing method selection information WRTFLG is 3_2, Step 108 is selected. In this case, write data is coded and compressed in Step 108. The coded and compressed write data is set as write data wdata in Step 114 and ECC data that is based on the write data wdata is generated. Also, in Step 114, a value of the writing flag WTFLG is generated as 3_2 (WTFLG 3_2).
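The writing-method selection of Step 103 to Step 115 mirrors the read-side processing and can be sketched as follows. The transform helpers and the ECC placeholder are assumptions (zlib again stands in for the actual compression method, and make_ecc is only a placeholder).

```python
# Minimal sketch of preparing write data according to WRTFLG, under the
# assumptions stated above.
import zlib

def invert(b):   return bytes(x ^ 0xFF for x in b)
def compress(b): return zlib.compress(b)
def code(b):     return b                           # placeholder for the coding method
def make_ecc(b): return len(b).to_bytes(4, "big")   # placeholder ECC generation

def prepare_write_data(data, wrtflg):
    transforms = {
        "0":   lambda d: d,                      # Steps 109/115
        "1":   lambda d: invert(d),              # Steps 104/110
        "2":   lambda d: compress(d),            # Steps 105/111
        "3":   lambda d: code(d),                # Steps 107/113
        "2_1": lambda d: invert(compress(d)),    # Steps 106/112
        "3_2": lambda d: compress(code(d)),      # Steps 108/114
    }
    wdata = transforms[wrtflg](data)
    return wdata, make_ecc(wdata), wrtflg        # write data wdata, ECC data, WTFLG

# Example: prepare data for a WRTFLG=2_1 write.
wdata, ecc_data, wtflg = prepare_write_data(b"DATA0", "2_1")
```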
Since each writing method has been described with reference to
In a normal writing command or in an erasure priority mode, priority is given to a batch-erasing operation on an erasure area. Thus, after each of Step 110 to Step 115, it is determined in Step 116 whether batch-erasure in the erasure area is completed. In Step 116, when the batch-erasure in the erasure area is not completed, writing of data (write data wdata, ECC data, and writing flag WTFLG) generated in each of Step 110 to Step 115 is waited for. That is, in Step 117 after the batch-erasure in the erasure area is completed, the write data wdata, the ECC data, and the writing flag WTFLG are written into the write physical address NXPAD. The write data wdata is included as main data MDATA. The ECC data and the writing flag WTFLG are included in redundant data RDATA. The main data MDATA and the redundant data RDATA are respectively written into a main data area DArea and a redundant data area RArea in the physical address NXPAD.
The writing operation is performed, for example, on an erasure area where batch-erasure is performed. In this case, writing is not performed on a dummy chain memory array DCY arranged in a periphery (physically adjacent area) of the erasure area. Thus, it becomes possible to increase speed of the writing operation.
In
In
In this embodiment, an erasing operation is temporarily stopped. Thus, it becomes possible to reduce response time of the writing operation. Also, in this embodiment, in a case of performing a writing operation, the writing operation is not executed on a dummy chain memory array DCY arranged in a periphery (physically adjacent area) of an area to be an object of the writing operation (erasure area). Accordingly, it becomes possible to reduce the number of cells selected in the writing operation and to increase speed of the writing operation.
Also, an example of setting the memory module NVMMD0 to the writing priority mode has been described. However, alternatively, a writing priority command may be prepared as a command supplied to the memory module NVMMD0. In this case, the memory module NVMMD0 is configured in such a manner that the flow in
However, since the dynamic wear leveling is performed on physical addresses in the invalid state, there is a case where a difference between the number of times of erasure of the physical addresses in the invalid state and the number of times of erasure of physical addresses in a valid state is gradually increased as a whole. For example, when writing is performed at a certain logical address (physical address corresponding thereto) and the physical address becomes a valid state, in a case where a writing instruction is not generated with respect to the logical address (physical address corresponding thereto) for a long period after that, the physical address is excluded from an object of the wear leveling for a long period. Thus, as illustrated in
The information processing circuit MNGER performs the static leveling method of the number of times of erasure illustrated in
In Step 52, the information processing circuit MNGER sets a threshold DERCth for a difference between the number of times of erasure of the physical addresses in the invalid state and the number of times of erasure of the physical addresses in the valid state and compares the threshold DERCth with the difference in the number of times of erasure DIFF. When the difference in the number of times of erasure DIFF is larger than the threshold DERCth, the information processing circuit MNGER performs Step 53 for leveling of the number of times of erasure. When the difference is smaller, Step 58 is performed. In Step 58, the information processing circuit MNGER determines whether the physical segment table PSEGTBL1 or PSEGTBL2 is updated. When the update is performed, the difference in the number of times of erasure DIFF is calculated again in Step 51. When neither of the physical segment tables is updated, Step 58 is performed again.
In Step 53, the information processing circuit MNGER selects m physical addresses SPAD1 to SPADm in ascending order from the smallest number of times of erasure in the minimum number of times of erasure MNERC in the physical segment table PSEGTBL2 related to the valid physical address. In Step 54, the information processing circuit MNGER selects, as candidates, m physical addresses DPAD1 to DPADm in descending order from the largest number of times of erasure in the maximum number of times of erasure MXERC in the physical segment table PSEGTBL1 related to the invalid physical address.
In Step 55, the information processing circuit MNGER checks whether the physical addresses DPAD1 to DPADm, which are selected as the candidates, are registered in the write physical address table NXPADTBL. When any of the physical addresses DPAD1 to DPADm, which are selected as the candidates, is registered in the write physical address table NXPADTBL, the registered one of the physical addresses DPAD1 to DPADm is excluded from the candidates in Step 59 and supplementation of the candidate is performed in Step 54. When the selected physical addresses DPAD1 to DPADm are not registered in the write physical address table NXPADTBL, Step 56 is performed.
In Step 56, the information processing circuit MNGER moves data at the physical addresses SPAD1 to SPADm in the non-volatile memory device to the physical addresses DPAD1 to DPADm. In Step 57, the information processing circuit MNGER updates all tables to be updated due to movement of the data at the physical addresses SPAD1 to SPADm into the physical addresses DPAD1 to DPADm.
By utilization of such static wear leveling along with the dynamic wear leveling illustrated in
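The static wear leveling of Step 51 to Step 59 can be sketched as follows. The table structures, the definition of the difference DIFF (taken here as the most-erased invalid address minus the least-erased valid address), and the move_data callback are assumptions made for illustration.

```python
# Minimal sketch of the static wear leveling (Steps 51 to 59), under the
# assumptions stated above.
def static_wear_level(nvm_erase_cnt, valid, nxpadtbl, m, dercth, move_data):
    """nvm_erase_cnt: {pad: erase count}; valid: set of PADs in the valid state;
    nxpadtbl: PADs already registered for upcoming writes."""
    invalid = [p for p in nvm_erase_cnt if p not in valid]
    # Step 51: difference DIFF between invalid-side and valid-side erase counts.
    diff = (max(nvm_erase_cnt[p] for p in invalid)
            - min(nvm_erase_cnt[p] for p in valid))
    if diff <= dercth:                       # Step 52: leveling not needed yet
        return
    # Step 53: m valid addresses with the smallest erase counts (SPAD1..SPADm).
    spads = sorted(valid, key=lambda p: nvm_erase_cnt[p])[:m]
    # Steps 54, 55, 59: m invalid addresses with the largest erase counts
    # (DPAD1..DPADm), skipping any address already registered in NXPADTBL.
    dpads = sorted((p for p in invalid if p not in nxpadtbl),
                   key=lambda p: nvm_erase_cnt[p], reverse=True)[:m]
    # Steps 56, 57: move the long-lived data onto the heavily erased addresses
    # and update the related tables (delegated to move_data here).
    for spad, dpad in zip(spads, dpads):
        move_data(spad, dpad)
```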
In buffer transfer operations WTBUF0, 1, 2, and 3 illustrated in
As illustrated in
In the interface circuit HOST_IF, N writing requests (WQ [1] to WQ [N]) generated in a period from time T0 to T2 are first transferred to the buffer BUF0 (WTBUF0). When it becomes impossible to store write data into the buffer BUF0, N writing requests (WQ [N+1] to WQ [2N]) generated in a period from time T2 to T4 are transferred to the buffer BUF1 (WTBUF1). When it becomes impossible to store write data into the buffer BUF1, N writing requests (WQ [2N+1] to WQ [3N]) generated in a period from time T4 to T6 are transferred to the buffer BUF2 (WTBUF2). When it becomes impossible to store write data into the buffer BUF2, N writing requests (WQ [3N+1] to WQ [4N]) generated in a period from time T6 to T8 are transferred to the buffer BUF3 (WTBUF3).
In the period from time T1 to T3, the information processing circuit MNGER performs previous preparation (PREOP0) to write the write data stored in the buffer BUF0 into the non-volatile memory device NVM. Main operation contents of the previous preparation operation PREOP0 performed by the information processing circuit MNGER will be described in the following. Note that the other previous preparation operations PREOP1, 2, and 3 are operations similar to the previous preparation operation PREOP0.
(1) By utilization of a value of a logical address LAD included in the writing requests (WQ [1] to WQ [N]), a physical address PAD is read from the address conversion table LPTBL. When necessary, values of validity flags (CPVLD, PVLD, and DVF) of the physical address PAD are set to 0 and data is invalidated.
(2) The address conversion table LPTBL is updated.
(3) A write physical address NXPAD stored in the write physical address table NXPADTBL is read and the logical address LAD included in the writing requests (WQ [1] to WQ [N]) is assigned to the write physical address NXPAD.
(4) The physical segment table PSEGTBL is updated.
(5) The physical address table PADTBL is updated.
(6) The write physical address table NXPADTBL is updated for preparation for next writing.
Then, the information processing circuit MNGER writes the write data stored in the buffer BUF0 into the non-volatile memory device NVM in a period from time T3 to T5 (WTNVM0). In this case, a physical address of the non-volatile memory device NVM to which the data is written is identical to the value of the write physical address NXPAD in (3). The other data-writing operations WTNVM1, 2, and 3 are operations similar to the data-writing operation WTNVM0.
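The pipelining of the buffers BUF0 to BUF3 can be pictured with the following sketch. In the actual module the previous preparation PREOP and the data-writing operation WTNVM proceed in parallel; the sketch only shows their ordering, and the controller methods stand in for the steps (1) to (6) above.

```python
# Minimal sketch of the buffer pipeline (PREOP followed by WTNVM for each
# buffer), under the assumptions stated above.
def prepare(controller, requests):
    # Previous preparation PREOP: steps (1) to (6) above, delegated to the
    # controller here (invalidate old assignments, update LPTBL, assign NXPAD,
    # update PSEGTBL and PADTBL, refill NXPADTBL for the next writes).
    return controller.assign_physical_addresses(requests)

def run_pipeline(controller, buffers):
    # buffers: list of request lists, one per buffer BUF0 to BUF3.
    assignments = prepare(controller, buffers[0])               # PREOP0
    for i in range(len(buffers)):
        next_assignments = (prepare(controller, buffers[i + 1])
                            if i + 1 < len(buffers) else None)  # PREOP(i+1)
        controller.write_to_nvm(assignments)                    # WTNVM(i)
        assignments = next_assignments
```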
In
The plurality of chain memory arrays CY is two-dimensionally arranged in a matrix. In each row and each column in the matrix, a word line and a bit line are arranged and corresponding word line and bit line are connected to a chain memory array CY. In the drawing, a ∘-shape filled with dots (hereinafter, referred to as •-shape) indicates a chain memory array CY that does not store data (dummy chain memory array DCY) and a chain memory array CY of a ∘-shape indicates a chain memory array CY that stores data.
The plurality of chain memory arrays CY arranged in the matrix is divided into a plurality of areas during an erasing operation, a writing operation, and a reading operation based on dummy chain memory array designation information (XYDMC) stored in SSD configuration information (SDCFG). That is, when accessing the non-volatile memory devices NVM10 to NVM17 and performing the erasing operation, the writing operation, and the reading operation, the information processing circuit MNGER accesses these as a plurality of divided areas based on the dummy chain memory array designation information (XYDMC). An arrangement of •-shaped and ∘-shaped chain memory arrays CY in
Accordingly, a plurality of areas WT-AREA each of which includes 8 rows×64 columns of ∘-shaped chain memory arrays CY and 8 rows×2 columns of chain memory arrays CY is configured on a memory array. In this embodiment, the area is an area to be erased in the erasing operation. Thus, the area can be seen as an erasure area. As described with reference to
For example, when it is assumed that one chain memory array CY stores 1-byte data, data of 8×66=528 bytes can be written into one write area (erasure area). In this case, main data MDATA having 8×64=512 bytes is written into the main data area DArea and redundant data RDATA having 8×2=16 bytes is written into the redundant data area RArea. In this embodiment, the information processing circuit MNGER does not access a chain memory array CY that is set as a dummy chain memory array DCY for the writing operation and the reading operation.
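The capacity of one write area described above can be checked with the following small sketch; the parameter names are illustrative assumptions.

```python
# Minimal sketch of the write-area capacity calculation described above.
def write_area_capacity(rows=8, main_cols=64, redundant_cols=2,
                        bytes_per_chain=1):
    # One write area: rows x main_cols chain memory arrays for the main data
    # area DArea plus rows x redundant_cols for the redundant data area RArea.
    main = rows * main_cols * bytes_per_chain
    redundant = rows * redundant_cols * bytes_per_chain
    return main, redundant, main + redundant

print(write_area_capacity())   # -> (512, 16, 528)
```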
Next, a writing operation on the non-volatile memory device NVM10 in a case where writing requests WQ00, WQ01, WQ02, and WQ03 are serially input into the information processing circuit MNGER in
Here, it is assumed that the writing request WQ00 includes a logical address value LAD0, a writing instruction WRT, a sector count value SEC1, and 512-byte write data WDATA0. Also, it is assumed that the writing request WQ01 includes a logical address value LAD1, a writing instruction WRT, a sector count value SEC1, and 512-byte write data WDATA1. Similarly, it is assumed that the writing request WQ02 includes a logical address value LAD2, a writing instruction WRT, a sector count value SEC1, and 512-byte write data WDATA2 and the writing request WQ03 includes a logical address value LAD3, a writing instruction WRT, a sector count value SEC1, and 512-byte write data WDATA3.
First, the information processing circuit MNGER refers to the write physical address table NXPADTBL1 and determines physical addresses PAD0, 1, 2, and 3 respectively corresponding to logical addresses LAD0, 1, 2, and 3 and the non-volatile memory device NVM10 into which data is written. Then, the information processing circuit MNGER generates redundant data RDATA0, 1, 2, and 3 respectively corresponding to the write data WDATA0, 1, 2, and 3. Subsequently, the information processing circuit MNGER serially issues, to the non-volatile memory device NVM10, an erasure instruction ERS0, a writing instruction WT0, an erasure instruction ERS1, a writing instruction WT1, an erasure instruction ERS2, a writing instruction WT2, an erasure instruction ERS3, and a writing instruction WT3 through the arbitration circuit ARB and the memory control circuit NVCT10.
The erasure instruction ERS0 includes a physical address PAD0, an erasure instruction ERS, and a sector count value SEC. The writing instruction WT0 includes a physical address PAD0, a writing instruction WT, a sector count value SEC1, 512-byte write data WDATA0, and redundant data RDATA0. The erasure instruction ERS1 includes a physical address PAD1, an erasure instruction ERS, and a sector count value SEC1. The writing instruction WT1 includes a physical address PAD1, a writing instruction WT, a sector count value SEC1, 512-byte write data WDATA1, and redundant data RDATA1. The erasure instruction ERS2 includes a physical address PAD2, an erasure instruction ERS, and a sector count value SEC1. The writing instruction WT2 includes a physical address PAD2, a writing instruction WT, a sector count value SEC1, 512-byte write data WDATA2, and redundant data RDATA2. The erasure instruction ERS3 includes a physical address PAD3, an erasure instruction ERS, and a sector count value SEC1. Similarly, the writing instruction WT3 includes a physical address PAD3, a writing instruction WT, a sector count value SEC1, 512-byte write data WDATA3, and redundant data RDATA3.
A write area WT-AREA0 of the memory device NVM10 is selected by the physical address PAD0 of the erasure instruction ERS0. By the erasure instruction ERS, data of all memory cells included in all chain memory arrays CY in the write area WT-AREA0 becomes "1" (Set state). That is, batch-erasure is performed. Then, by the physical address PAD0 and the writing instruction WT in the writing instruction WT0, the write area WT-AREA0 of the memory device NVM10 is selected. Only data of "0" (Reset state) in the 512-byte write data WDATA0 is written into a memory cell in a chain memory array CY in the main data area DArea and only data of "0" (Reset state) in the 16-byte redundant data RDATA0 is written into a memory cell in a chain memory array CY in the redundant data area RArea.
By the physical address PAD1 of the erasure instruction ERS1, the write area WT-AREA1 of the memory device NVM10 is selected. By the erasure instruction ERS, data of all memory cells included in all chain memory arrays CY in the write area WT-AREA1 becomes "1" (Set state) (batch-erasure). The write area WT-AREA1 of the memory device NVM10 is selected by the physical address PAD1 and the writing instruction WT of the writing instruction WT1. Only data of "0" (Reset state) in the 512-byte write data WDATA1 is written into a memory cell in the chain memory array CY in the main data area DArea and only data of "0" (Reset state) in the 16-byte redundant data RDATA1 is written into a memory cell in the chain memory array CY in the redundant data area RArea.
A dummy chain memory array DCY is arranged between each pair of adjacent write areas: between WT-AREA0 and 1, between WT-AREA1 and 2, and between WT-AREA2 and 3. Thus, for example, when batch-erasure is performed in the write area WT-AREA1, the dummy chain memory array DCY serves as a buffer area against the heat disturbance and can reduce its influence on the data in the write area WT-AREA0 or the write area WT-AREA2. In such a manner, since there is a dummy chain memory array DCY between the write areas WT-AREA, the influence of the heat disturbance can be reduced. Thus, it is possible to write and hold data reliably in the write areas WT-AREA and to provide a highly reliable memory module.
Although it is not specifically limited, when the dummy chain memory array designation information (XYDMC) is set to 1_1_1, for example, on a right side (in
Note that in this embodiment, since the 512-byte main data MDATA and the 16-byte redundant data RDATA are written into the non-volatile memory devices (NVM10 to NVM17), 8 rows×66 columns of chain memory arrays CY are arranged in each write area WT-AREA in such a manner that 528-byte data can be stored.
For example, in a case where the information processing device CPU_CP writes write data into the memory module NVMMD0 with a minimum unit of 64 bytes, the information processing circuit MNGER may arrange 8 rows×9 columns of chain memory arrays CY in each write area WT-AREA in such a manner that 72-byte data can be stored, in order to write 64-byte main data MDATA and 8-byte redundant data RDATA into the non-volatile memory devices (NVM10 to NVM17).
In such a manner, the arrangement of chain memory arrays CY in a write area WT-AREA can be set according to the minimum unit of write data intended by the information processing device CPU_CP, and system requirements can be flexibly accommodated.
Moreover, by arranging the chain memory arrays CY in a write area WT-AREA according to 64 bytes, which is the minimum unit of write data of the information processing device CPU_CP, and by using a plurality of write areas WT-AREA, it is possible to store data (of, for example, 512 bytes) whose size is an integer multiple of the minimum unit.
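The relation between the minimum write unit and the write-area dimensions quoted above (528 bytes in 8 rows×66 columns, 72 bytes in 8 rows×9 columns) can be checked with a short sketch. It assumes, for illustration only, that one chain memory array CY holds one byte and that the row count is fixed at 8, as in these examples; the helper name is hypothetical.

```python
# Hypothetical helper: derive the write-area shape from the sizes of the main
# data MDATA and the redundant data RDATA, assuming 1 byte per chain memory
# array CY and a fixed number of rows.

def write_area_shape(main_bytes: int, redundant_bytes: int, rows: int = 8) -> tuple:
    total = main_bytes + redundant_bytes
    if total % rows != 0:
        raise ValueError("the write unit must fill whole columns")
    return rows, total // rows

print(write_area_shape(512, 16))  # (8, 66) -> 528-byte unit
print(write_area_shape(64, 8))    # (8, 9)  -> 72-byte unit
```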
When write data corresponding to one physical address is 512 bytes, in this embodiment, one physical address corresponds to one write area WT-AREA. However, this is obviously not a limitation. In an embodiment described later, data corresponding to a plurality of physical addresses is written into one write area WT-AREA. That is, an example of a write area having a data capacity larger than the capacity of data corresponding to one physical address is illustrated.
Also, each of the write areas WT-AREA and WT-AREA0 to 7 includes a main data area DArea where main data MDATA is written and a redundant data area RArea where redundant data RDATA is written. Also, the above-described main data area DArea is arranged in 8 rows×8 columns on an upper side of a matrix of chain memory arrays CY included in each write area and the above-described redundant data area RArea is arranged in 1 row×8 columns on a lower side thereof.
That is, the information processing circuit MNGER performs a reading operation, a writing operation, and a batch-erasing operation on each of the write areas WT-AREA without performing these operations on the one row and one column of chain memory arrays CY adjacent to the outer side of each write area WT-AREA.
In
In such a manner, even in a case where batch-erasure is performed in any of the write areas WT-AREA, an influence of heat disturbance can be further reduced.
Also, which chain memory array CY is arranged as a dummy chain memory array DCY can be determined by programming it into the SSD configuration (SDCFG) in an initial setting area of a non-volatile memory device. After power activation, the information processing circuit MNGER reads this initial setting area and determines the arrangement of the dummy chain memory arrays DCY.
As described above, it is possible to flexibly accommodate the levels of function, performance, and reliability required of the memory module NVMMD0.
In this embodiment, each of the write areas WT-AREA (WT-AREA and WT-AREA0 to 7) includes chain memory arrays CY which are arranged in 9 rows×65 columns and which include •-shapes and ∘-shapes. That is, each write area WT-AREA includes an erasure area ERS-AREA (not illustrated) and a set chain memory array DSCY. Here, the erasure area ERS-AREA includes 8 rows×64 columns of ∘-shaped chain memory arrays CY (in the drawing, a main data area DArea where main data MDATA is written and a redundant data area RArea where redundant data RDATA is written). Since the dummy chain memory array designation information (XYDMC) is set to 0_1_1, one dummy chain memory array DCY is set in each of the row direction and the column direction on the inner side of each write area WT-AREA. In other words, one row (one column) of dummy chain memory arrays DCY is arranged in the row direction (column direction) adjacent to the outer side (outer periphery) of the erasure area ERS-AREA.
Also, in this embodiment, in each of the write areas WT-AREA and WT-AREA0 to 7, a main data area DArea where main data MDATA is written is arranged in 7 rows×64 columns (on upper side in the drawing) of a matrix of chain memory arrays CY included in each write area and a redundant data area RArea where redundant data RDATA is written is arranged in 1 row×64 columns (on right side in the drawing).
Also, in this embodiment, each of the write areas WT-AREA and WT-AREA0 to 7 includes a plurality of physical addresses PAD. For example, the write area WT-AREA0 includes physical addresses PAD0 to PADm.
In the chain memory arrays CY in the erasure area ERS-AREA, batch-erasure is performed. The erasure area ERS-AREA is an area into which data "0" (Reset state) is written at an appropriate time after the batch-erasure. The area includes ∘-shaped chain memory arrays CY in 8 rows×64 columns. The write area WT-AREA includes chain memory arrays CY in 9 rows×65 columns and the erasure area ERS-AREA includes chain memory arrays CY in 8 rows×64 columns.
Here, when it is assumed that the write area WT-AREA includes chain memory arrays in 9 rows×65 columns and the erasure area ERS-AREA includes chain memory arrays in 8 rows×64 columns, the ratio of the erasure area ERS-AREA to the write area WT-AREA is (512/585)×100=87.5%. That is, when the amount of bit data "0" in data written into the memory array ARY is equal to or smaller than 87.5%, the bit data "0" can be written into the erasure area ERS-AREA.
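As a quick arithmetic check of this ratio, a minimal sketch follows; the cell counts are taken directly from the text, and the helper name is illustrative.

```python
# Ratio of the erasure area ERS-AREA to the write area WT-AREA,
# counting chain memory arrays CY.

def erasable_ratio(write_rows: int, write_cols: int,
                   ers_rows: int, ers_cols: int) -> float:
    return ers_rows * ers_cols / (write_rows * write_cols) * 100

print(round(erasable_ratio(9, 65, 8, 64), 1))  # 87.5 (= 512/585 * 100)
```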
A writing operation on the non-volatile memory device NVM10 in a case where a writing request WQ00 is input into the information processing circuit MNGER in
Although it is not specifically limited, the information processing circuit MNGER associates one physical address with each unit of 512-byte main data MDATA and 16-byte redundant data RDATA and performs writing into the non-volatile memory devices NVM10 to NVM17. The writing request WQ00 includes a logical address value LAD0, a writing instruction WRT, a sector count value SEC1, and 512-byte write data WDATA0.
First, when the writing request WQ00 is input into the information processing circuit MNGER in
In such a manner, each bit of the write data (DATA0) is inverted or left unchanged according to the number of pieces of bit data "0" in the 512-byte (512×8-bit) write data (DATA0). Thus, the number of pieces of bit data "0" constantly becomes equal to or smaller than 2048 bits (=4096/2) in 512 bytes (512×8 bits=4096 bits). That is, the proportion of bit data "0" in the write data is constantly equal to or smaller than ½. Accordingly, the amount of bit data "0" to be written can be reduced to at most half.
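The inversion rule described above can be illustrated as follows. This is a minimal sketch: the function name is hypothetical, and recording the fact that inversion was applied (for example, as a flag in the redundant data) is omitted here.

```python
# Invert the 512-byte write data when it contains more "0" bits than "1" bits,
# so that at most half of the 4096 bits need to be written as "0" (Reset state).

def maybe_invert(data: bytes):
    total_bits = len(data) * 8
    ones = sum(bin(b).count("1") for b in data)
    zeros = total_bits - ones
    if zeros > total_bits // 2:
        return bytes(b ^ 0xFF for b in data), True   # inverted
    return data, False                               # unchanged

inverted_data, inverted = maybe_invert(bytes(512))   # all-zero data -> inverted
print(inverted, inverted_data[:4])                   # True b'\xff\xff\xff\xff'
```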
Then, the information processing circuit MNGER refers to a write physical address table NXPADTBL1 and determines a physical address PAD0 and an erasure block address ERSAD0, which correspond to the logical address LAD0, and a non-volatile memory device NVM10 into which data is written (Step 306). The information processing circuit MNGER serially issues an erasure instruction ERS0 and a writing instruction WT0 with respect to the non-volatile memory device NVM10 through an arbitration circuit ARB and a memory control circuit NVCT0. The erasure instruction ERS0 includes the erasure block address ERSAD0 and an erasure instruction ERS. By the erasure block address ERSAD0 in the erasure instruction ERS0, an erasure area ERS-AREA in the write area WT-AREA0 of the memory device NVM10 is selected. By the erasure instruction ERS, data in all memory cells included in all chain memory arrays CY in the erasure area ERS-AREA becomes "1" (Set state due to batch-erasure) (Step 307). That is, by this erasing instruction, data in all memory cells in the chain memory arrays CY assigned to the plurality of physical addresses PAD0 to PADm in the erasure area ERS-AREA becomes "1" (Set state due to batch-erasure).
By the physical address PAD0 and the writing instruction WT in the writing instruction WT0, in the write area WT-AREA0 of the memory device NVM10, only data of “0” (Reset state) in the 512-byte write data WDATA0 is written into memory cells in chain memory arrays CY in a main data area DArea in an erasure area ERS-AREA assigned to the physical address PAD0 (Step 308).
Similarly, by the physical address PAD0 and the writing instruction WT in the writing instruction WT0, in the write area WT-AREA0 of the memory device NVM10, only data of “0” (Reset state) in the redundant data RDATA0 is written into memory cells in chain memory arrays CY in a redundant data area RArea in the erasure area ERS-AREA assigned to the physical address PAD0 (Step 308).
In such a manner, the proportion of bit data "0" in the write data DATA0 is constantly equal to or smaller than ½. Thus, as illustrated in
Also, by setting the dummy chain memory array that is set on the inner side of the write area WT-AREA0 as a set chain memory array DSCY, the set chain memory array DSCY can be treated as an area storing the bit data "1" of the write data DATA0. That is, it is assumed that the bit data "1" is stored at the physical address of the set chain memory array DSCY, so that no operation of writing data into or reading data from the set chain memory array DSCY is required.
Accordingly, it becomes possible to configure the write area WT-AREA0 with an erasure area ERS-AREA0, into which bit data "0" can be written, and the set chain memory array DSCY, and to realize the arrangement illustrated in
In
The set chain memory array DSCY can function both as a buffer area that absorbs the influence of heat disturbance between write areas WT-AREA adjacent to each other and as a chain memory array CY that records the data "1." Thus, it is possible to eliminate the storage-capacity penalty of the non-volatile memory, to write and hold data in the write areas WT-AREA reliably without the influence of the heat disturbance, and to provide a highly reliable memory module.
Moreover, the proportion of bit data "0" in the write data is constantly equal to or smaller than ½. Thus, it is possible to reduce the amount of written bit data "0" to at most half and to realize a high-speed, low-power SSD.
Note that in
On the inner side of a write area WT-AREA, two rows and two columns of dummy chain memory arrays DCY are set, and these dummy chain memory arrays DCY are used as set chain memory arrays DSCY. In order to set such dummy chain memory arrays DCY, the dummy chain memory array designation information XYDMC in the SSD configuration (SDCFG) is set to 0_2_2.
In each of the write areas WT-AREA and WT-AREA0 to 7, the above-described write data is written into a main data area DArea (6 rows×64 columns). Also, in this embodiment, the above-described redundant data RDATA is arranged in 1 row×64 columns (on the lower side in the drawing) of the matrix of chain memory arrays CY included in each write area.
In this embodiment, a matrix of chain memory arrays CY that stores the redundant data RDATA includes a plurality of chain memory arrays CY arranged in one row.
In
Although it is not specifically limited, the write area WT-AREA includes chain memory arrays CY which are arranged in 9 rows×66 columns and which include •-shapes and ∘-shapes. The area includes an erasure area ERS-AREA and a set chain memory array DSCY.
Here, the erasure area ERS-AREA is an area into which data "0" (Reset state) can be written after batch-erasure. The area includes ∘-shaped chain memory arrays CY in 7 rows×64 columns. Since the write area WT-AREA includes chain memory arrays CY in 9 rows×66 columns and the erasure area ERS-AREA includes chain memory arrays CY in 7 rows×64 columns, the ratio of the erasure area ERS-AREA to the write area WT-AREA is (448/594)×100=75.4%.
In the writing method described with reference to
As described above, it is possible to arrange dummy chain memory arrays DCY flexibly according to the levels of function, performance, and reliability required of the memory module NVMMD0.
Although it is not specifically limited, each of write areas WT-AREA0 to WT-AREAn includes chain memory arrays CY which are arranged in 9 rows×4096 columns and which include •-shapes and ∘-shapes. Each of the areas includes an erasure area ERS-AREA and a set chain memory array DSCY. Also, to each of the write areas WT-AREA0 to WT-AREAn, a plurality of physical addresses (PAD0 to m) can be assigned. That is, it is possible to write a plurality of pieces of write data, each of which corresponds to a physical address, to one write area. For example, it is possible to associate 9 rows×512 columns to one physical address and to write a plurality of pieces of write data in a unit of 9 rows×512 columns.
Also, the erasure area ERS-AREA is an area where data “0” (Reset state) can be written after batch-erasure. The area includes ∘-shaped chain memory arrays CY in 8 rows×4096 columns.
In this embodiment, dummy chain memory array designation information XYDMC in SSD configuration (SDCFG) is set to 0_1_0. Accordingly, on an inner side of each write area WT-AREA (in other words, in outer periphery of erasure area ERS-AREA), one row of dummy chain memory arrays DCY is set and the set dummy chain memory arrays DCY are used as the set chain memory array DSCY.
Also, each of the write areas WT-AREA0 to n includes the above-described main data area DArea (chain memory arrays CY in 7 rows×4096 columns) and redundant data area RArea (chain memory arrays CY in 1 row×4096 columns).
In the arrangement example of this memory array ARY, each write area WT-AREA has 36864 chains (=9 rows×4096 columns). Among these, the area of the set chain memory array DSCY is 4096 chains (1 row). Thus, the ratio of the area where data "0" (Reset state) can be written becomes 88.8% (=((36864−4096)/36864)×100). In the writing method described in
Also, in this embodiment, it is possible to perform a writing operation a plurality of times after batch-erasure is performed in the erasure area ERS-AREA in each write area WT-AREA. For example, it is possible to associate an erasure area ERS-AREA having 8 rows×512 columns with one physical address and to perform writing a plurality of times in a unit of 8 rows×512 columns. When writing is performed a plurality of times after the batch-erasure, the influence of heat disturbance due to the batch-erasure can be reduced by the set chain memory array DSCY. Also, since no dummy chain memory array DCY is set in the column direction, it is possible to perform downsizing and to reduce the unit price of a memory cell (bit cost).
In
In
Also, to each of the write areas WT-AREA0 to WT-AREAn, a plurality of physical addresses (PAD0 to 3) can be assigned. That is, it is possible to write pieces of write data respectively corresponding to the physical addresses into one write area. For example, it is possible to associate 66 rows×8 columns to one physical address and to write a plurality of pieces of write data. Also, the erasure area ERS-AREA is an area where data “0” (Reset state) can be written after batch-erasure. The area includes ∘-shaped chain memory arrays CY in 63 rows×32 columns. In this embodiment, dummy chain memory array designation information XYDMC in an SSD configuration (SDCFG) is set to 0_3_0. Accordingly, on an inner side of each write area WT-AREA (in other words, in outer periphery of erasure area ERS-AREA), three rows of dummy chain memory arrays DCY are set. The set dummy chain memory arrays DCY are used as set chain memory arrays DSCY.
Also, each of the write areas WT-AREA0 to n includes the above-described main data area DArea (chain memory arrays CY in 61 rows×32 columns) and redundant data area RArea (chain memory arrays CY in 2 rows×32 columns).
Although it is not specifically limited, in
In this embodiment, it is also possible to assign a plurality of physical addresses to each write area WT-AREA. The erasure area ERS-AREA is an area into which data "0" (Reset state) can be written after batch-erasure. The area includes ∘-shaped chain memory arrays CY in 63 rows×32 columns. Also, the dummy chain memory array designation information XYDMC in the SSD configuration (SDCFG) is set to 0_3_0. On the inner side of each write area WT-AREA, three rows of dummy chain memory arrays DCY are arranged, and these three rows of dummy chain memory arrays DCY are used as set chain memory arrays DSCY.
A writing operation on a non-volatile memory device NVM10 in a case where writing requests WQ00, WQ01, WQ02, and WQ03 are serially input into the information processing circuit MNGER in
Logical address values LAD0, 1, 2, and 3 included in the writing requests WQ00, WQ01, WQ02, and WQ03 from the information processing device CPU_CP are stored into an address buffer ADDBUF and the write data WDATA0, 1, 2, and 3 is stored into the buffers BUF0 to BUF3 (
Then, the information processing circuit MNGER reads the write data WDATA0 from the buffer BUF0 (Step 401 in
Here, in the write area WT-AREA illustrated in
Next, when the data compression rate crate is equal to or lower than the allowable compression rate CpRate, the information processing circuit MNGER generates, for the physical address PAD0 of the non-volatile memory device NVM10 which corresponds to the logical address LAD0, redundant data RDATA0 including ECC data based on compressed data CWDATA0, which is the compressed write data WDATA0 (Step 404 in
Then, the information processing circuit MNGER reads write data WDATA1 corresponding to the physical address PAD1 from the buffer BUF1 and compresses the data in a similar procedure. Then, the information processing circuit MNGER writes only data of “0” (Reset state) in compressed data CWDATA1 into a memory cell of a chain memory array CY corresponding to the physical address PAD1 of the memory device NVM10.
Then, the information processing circuit MNGER reads the write data WDATA2 from the buffer BUF2 (Step 401), performs compression (Step 402), and creates compressed data CWDATA2. Here, for example, when it is determined that the data compression rate crate of the compressed data CWDATA2 is higher than the allowable data compression rate CpRate=0.95 (Step 403), an allowable compression rate CpRate corresponding to two physical addresses is newly calculated (Step 406 in
Then, the information processing circuit MNGER reads write data WDATA3 from the buffer BUF3 (Step 401), performs compression (Step 402), and creates compressed data CWDATA3. A data compression rate crate for a combination of the compressed data CWDATA2 and the compressed data CWDATA3 is determined in Step 403. In this case, it is determined that the rate is equal to or lower than the allowable compression rate CpRate=0.95 (Step 403), and redundant data RDATA2 and RDATA3 including ECC data is generated based on the compressed data CWDATA2 and CWDATA3 of the physical addresses PAD2 and PAD3 (Step 404 in
Then, only data of “0” (Reset state) in the compressed data CWDATA2 and CWDATA3 is written into a memory cell in a chain memory array CY in a main data area DArea corresponding to the physical addresses PAD2 and PAD3 of the memory device NVM10 and the redundant data RDATA2 and RDATA3 is written into a memory cell in a chain memory array CY of the redundant area RArea (Step 405 in
In such a manner, when the data compression rate crate of data corresponding to one physical address is equal to or lower than the allowable compression rate CpRate for that physical address, the compressed data of the one physical address is written. When the data compression rate crate of data corresponding to one physical address is higher than the allowable compression rate CpRate for that physical address, the data can be compressed so that the data compression rate becomes equal to or lower than the allowable compression rate CpRate by collectively compressing data corresponding to a plurality of physical addresses. Even when compressed data of a plurality of physical addresses is collectively written, the influence of heat disturbance is constantly prevented; that is, the area of the set chain memory arrays DSCY can be secured and the data can be written. Thus, it is possible to write and hold data in the non-volatile memory NVM reliably while reducing the influence of the heat disturbance and to provide a highly reliable memory module.
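The decision flow of Steps 401 to 406 can be sketched as follows. This is only an illustrative outline: zlib stands in for the unspecified compressor, CpRate=0.95 is taken from the example above, and how the allowable rate is recalculated for a plurality of physical addresses is simplified to reusing the same threshold.

```python
# Sketch of the compression decision: compress the data for one physical
# address; if its compression rate crate exceeds CpRate, hold the data and
# compress it together with the data of the following physical address(es)
# before writing.
import zlib

CP_RATE = 0.95  # allowable compression rate (example value from the text)

def compression_rate(data: bytes) -> float:
    return len(zlib.compress(data)) / len(data)

def plan_writes(wdata_list):
    """Yield compressed chunks whose compression rate is at or below CpRate."""
    pending = b""
    for wdata in wdata_list:
        pending += wdata
        if compression_rate(pending) <= CP_RATE:
            yield zlib.compress(pending)
            pending = b""
    if pending:
        yield zlib.compress(pending)  # remainder; handling not detailed in the text
```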
The writing flow illustrated in the drawing is similar to the writing flow illustrated in
In Step 507, each bit of the compressed write data is inverted. The inverted data is written into a write area WT-AREA designated by a physical address.
Also, when Step 508 is executed after the execution of Step 507, in Step 508, the data inverted in Step 507 is written into a main data area DArea in the write area WT-AREA designated by the physical address PAD. Also, the ECC data generated in Step 504 and a writing flag (WTFLG=2_1) indicating compression and inversion are written into a redundant area (RArea) corresponding to the write area WT-AREA designated by the physical address PAD. In a case where Step 508 is executed without the execution of Step 507, in Step 508, the compressed data that has not been inverted is written into the main data area DArea in the write area WT-AREA designated by the physical address PAD, and the ECC data generated in Step 504 and a writing flag (WTFLG=2) indicating compression are written into the redundant area (RArea) corresponding to the write area WT-AREA designated by the physical address PAD.
In such a manner, the number of bits to which “0” is written (which is reset) can be reduced and speed of writing can be increased.
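As a minimal illustration of how the writing flag could be selected, a sketch follows; the string values mirror the WTFLG values named above, the function name is hypothetical, and the actual encoding in the redundant area RArea is not specified here.

```python
# Choose the writing flag WTFLG stored in the redundant area RArea,
# depending on whether the data was compressed and/or inverted.

def writing_flag(compressed: bool, inverted: bool) -> str:
    if compressed and inverted:
        return "2_1"   # compressed and inverted (Step 507 executed)
    if compressed:
        return "2"     # compressed only
    return "0"         # assumed default for uncompressed, uninverted data

print(writing_flag(compressed=True, inverted=True))   # 2_1
print(writing_flag(compressed=True, inverted=False))  # 2
```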
Also, Step 505 to Step 507 may be applied to the ECC data generated in Step 504. In
In this embodiment, each write area WT-AREA includes chain memory arrays (∘-shape) which are arranged in 8 rows×8 columns and which form a main data area DArea to store main data MDATA, and chain memory arrays (∘-shape) which are arranged in 1 row×8 columns and which form a redundant data area RArea to store redundant data RDATA. Thus, the write area (erasure area) includes chain memory arrays in 9 rows×8 columns.
In this embodiment, a column and a row of the dummy chain memory arrays DCY are arranged on a right side and an upper side of each write area WT-AREA (in
Also, the write area WT-AREA includes chain memory arrays (∘-shape) which are arranged in 8 rows×64 columns and which form a main data area DArea to store main data MDATA, and a redundant data area RArea which has 1 row×64 columns and which stores redundant data RDATA.
After batch-erasure (setting) of all chain memory arrays CY in a write area WT-AREA (such as WT-AREA0) is performed, random writing is performed on the chain memory arrays CY in the write area WT-AREA (main data area DArea+redundant data area RArea=(8×64)+(1×64)=576 byte). Alternatively, writing is sequentially performed in a unit less than 512 bytes (such as 72=(64+8) byte). In this embodiment, a unit (576 byte) that is larger than the unit of sequential writing is set as an erasure area. Thus, it is possible to reduce the number of dummy chain memory arrays surrounding the erasure area and to realize downsizing and reduction of a bit cost.
Note that the sequential writing is performed, for example, with chain memory arrays CY arranged in 9 rows×8 columns as one unit. In the drawing, this unit is surrounded by a narrow line. The sequential writing is performed in this unit from the left side to the right side in the drawing.
After batch-erasure (setting) of all chain memory arrays CY in a write area WT-AREA (such as WT-AREA0) is performed, writing is performed on the chain memory arrays CY in the write area WT-AREA (main data area DArea+redundant data area RArea=(8×512)+(1×512)=4608 byte).
In a case where the write area WT-AREA includes a plurality of physical addresses PAD0 to PADm, with 576 bytes (=512+64), which is a unit smaller than 4608 bytes, as one physical address PAD, a control circuit MDLCT0 sequentially assigns the physical addresses PAD0 to PADm in the write area WT-AREA to logical addresses LAD (for example, one physical address has a data size of 512 bytes) that are randomly input into the control circuit MDLCT0 by an information processing device CPU_CP. Then, the control circuit MDLCT0 performs writing. For example, the writing is performed from the left side to the right side in the drawing.
Also, in a case where one write area WT-AREA corresponds to one physical address, the control circuit MDLCT0 assigns a physical address PAD in the write area WT-AREA to a logical address LAD (for example, one physical address has data size of 4096 byte) randomly input into the control circuit MDLCT0 by the information processing device CPU_CP. The control circuit MDLCT0 sequentially performs writing of chain memory arrays CY at the physical address PAD in a unit of 576 (=512+64) bytes. For example, the writing is performed from a left side to a right side in the drawing.
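The address assignment described in the two paragraphs above can be sketched as follows. This is an illustrative model only: the class and its parameters are hypothetical, and it simply maps each newly seen logical address LAD to the next physically adjacent physical address PAD within the current write area WT-AREA.

```python
# Sequentially assign physical addresses PAD to randomly arriving
# logical addresses LAD, filling one write area WT-AREA before the next.

class SequentialAllocator:
    def __init__(self, pads_per_area: int):
        self.pads_per_area = pads_per_area
        self.next_pad = 0
        self.lad_to_pad = {}

    def assign(self, lad: int) -> tuple:
        """Return (write_area_index, pad_within_area) for a logical address."""
        if lad not in self.lad_to_pad:
            self.lad_to_pad[lad] = self.next_pad
            self.next_pad += 1
        pad = self.lad_to_pad[lad]
        return pad // self.pads_per_area, pad % self.pads_per_area

alloc = SequentialAllocator(pads_per_area=8)   # e.g. 8 x 576-byte PADs per 4608-byte area
print(alloc.assign(42), alloc.assign(7))       # (0, 0) (0, 1)
```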
In this embodiment, a unit (such as 4608 byte) that is larger than a unit of sequential writing is set as an erasure area. Thus, it becomes possible to reduce the number of dummy chain memory arrays surrounding the erasure area and to realize downsizing and reduction of a bit cost.
After batch-erasure (setting) of all chain memory arrays CY in a write area WT-AREA (such as WT-AREA0) is performed, writing is performed on the chain memory arrays CY in the write area WT-AREA (main data area DArea+redundant data area RArea=(8×4096)+(1×4096)=36864 byte).
Also, when the write area WT-AREA includes a plurality of physical addresses PAD0 to PADm with 576 (=512+64) or 4608 (=8×512+1×512), which is a unit smaller than 36864 bytes, as one physical address PAD, a control circuit MDLCT0 sequentially assigns the physical addresses PAD0 to PADm in the write area WT-AREA to logical addresses LAD (one physical address has data size of 512 byte or 4096 byte) randomly input into the control circuit MDLCT0 by an information processing device CPU_CP. Then, the control circuit MDLCT0 performs writing. For example, the writing is performed from a left side to a right side in the drawing.
Also, when one write area WT-AREA corresponds to one physical address, the control circuit MDLCT0 assigns the physical address PAD in the write area WT-AREA to a logical address LAD (for example, one physical address has data size of 32768 byte) randomly input into the control circuit MDLCT0 by the information processing device CPU_CP. Then, the control circuit MDLCT0 performs sequential writing of chain memory arrays CY at the physical address PAD in a unit of 576 (=512+64) bytes. For example, the writing is performed from a left side to a right side in the drawing.
In this embodiment, a unit (36864 byte) that is larger than a unit of sequential writing is set as an erasure area. Thus, it is possible to reduce the number of dummy chain memory arrays surrounding the erasure area and to realize downsizing and reduction of a bit cost. Also, in the drawing, since no adjacent write area is arranged on each of right and left sides, it is not necessary to provide dummy chain memory arrays DCY in a column and it becomes possible to realize further downsizing.
After batch-erasure (setting) of all chain memory arrays CY in a write area WT-AREA (such as WT-AREA0) is performed, writing is performed on the chain memory arrays CY in the write area WT-AREA (main data area DArea+redundant data area RArea=(8×512)×8+(8×16)×8=4096×8+128×8=32768+1024=33792 byte).
Also, when each write area WT-AREA includes a plurality of physical addresses PAD0 to PADm, with 528 (=512+16) or 4224 (=8×512+8×16), which is a unit smaller than 33792 bytes, as one physical address PAD, a control circuit MDLCT0 sequentially assigns the physical addresses PAD0 to PADm in the write area WT-AREA to logical addresses LAD (one physical address has a data size of 512 bytes or 4096 bytes) randomly input into the control circuit MDLCT0 by an information processing device CPU_CP. Then, the control circuit MDLCT0 performs writing. For example, the writing is performed from the left side to the right side in the drawing.
Also, when one write area WT-AREA corresponds to one physical address, the control circuit MDLCT0 assigns a physical address PAD in the write area WT-AREA to a logical address LAD (for example, one physical address has data size of 32768 byte) randomly input into the control circuit MDLCT0 by the information processing device CPU_CP. Then, the control circuit MDLCT0 sequentially performs writing of chain memory arrays CY at the physical address PAD in a unit of 528 (=512+16) bytes. For example, the writing is performed from a left side to a right side in the drawing.
In this embodiment, a unit (33792 bytes) that is larger than a unit of sequential writing is set as an erasure area. Thus, it is possible to reduce the number of dummy chain memory arrays surrounding the erasure area and to realize downsizing and reduction of a bit cost. Also, since no adjacent write area is arranged on each of right and left sides, it is not necessary to provide dummy chain memory arrays DCY in a column and it becomes possible to realize further downsizing.
In the above-described embodiment, each of the dummy chain memory arrays DCY is preferably set to the set state in advance. This can be done, for example, at initial setting. However, instead of the setting at initial setting, the following may be performed. That is, until access to all physical addresses PAD in the non-volatile memory device is completed, for example, physically adjacent and continuous write areas WT-AREA and physically adjacent and continuous physical addresses PAD in the write areas WT-AREA are selected with respect to a logical address LAD input into the control circuit MDLCT0 by the information processing device CPU_CP. Then, when data is written into the physical addresses PAD for the first time, a writing operation may be performed in such a manner that an erasing operation is serially executed in the write areas WT-AREA including the dummy chain memory arrays DCY. In this case, in and after the second performance of the writing operation on the same physical addresses PAD, the erasing operation and the writing operation are performed in an area not including the dummy chain memory arrays DCY. In this case, the write area WT-AREA may be selected randomly.
A detailed example of the above writing method is illustrated in
First, in Step 601, an input of a writing request into the information processing circuit MNGER in
In Step 603, the write area WT-AREA [i] is selected. In Step 604, it is checked whether the number of times of erasure of the erasure area ERS-AREA in the write area WT-AREA [i] is equal to or smaller than 0. When the number of times of erasure is equal to or smaller than 0, Step 605 is performed. In other cases, Step 609 is performed.
In Step 605, all memory cells in the chain memory arrays CY in the erasure area ERS-AREA in the write area WT-AREA [i] and all memory cells in the dummy chain memory arrays DCY arranged on the outer side of the erasure area ERS-AREA are set to the erased state (Set state).
In Step 606, physical addresses PAD that are arranged in a physically adjacent manner in the write area WT-AREA [i] are serially selected. Write data (MDATA) is written into the main data area DArea included in the selected physical address PAD and redundant data RDATA is written into the redundant data area RArea.
In Step 607, it is checked whether writing has been performed on all physical addresses PAD included in the write area WT-AREA [i]. When the writing has been performed on all of the physical addresses PAD included in the write area WT-AREA [i], Step 608 is performed. In other cases, Step 601 is performed. In Step 608, the value of i is incremented by 1. Since the values of i are determined serially, write areas WT-AREA arranged in a physically adjacent manner are selected. After Step 608 is over, Step 601 is performed.
Step 609 is reached when writing access to all write areas WT-AREA (physical addresses PAD) of the non-volatile memory devices NVM10 to 17 of the memory module NVMMD0 has been completed once, and all memory cells of all dummy chain memory arrays DCY included in the non-volatile memory devices NVM10 to 17 are already in the erased state (Set state). Thus, as described above, in and after the second performance of the writing operation, the erasing operation and the writing operation are performed on an area not including the dummy chain memory arrays DCY. In this case, the write area WT-AREA can be selected randomly.
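The overall flow of Steps 601 to 609 can be sketched as follows. This is only an illustrative outline under simplifying assumptions: the WriteArea class and its methods are hypothetical stand-ins for the device operations, and the second-pass behavior is reduced to erasing and writing one randomly chosen area per request.

```python
# Sketch of the first-pass writing flow: adjacent write areas are filled in
# order, and each is batch-erased together with its dummy chain memory arrays
# DCY on first use; afterwards, areas may be chosen randomly and erased
# without the dummy arrays.
import random

class WriteArea:
    """Hypothetical stand-in for one write area WT-AREA[i]."""
    def __init__(self, n_pads: int):
        self.n_pads = n_pads
        self.erase_count = 0
        self.written = 0

    def erase(self, include_dummy: bool) -> None:
        # First pass (Step 605): the surrounding dummy chain memory arrays DCY
        # are also set to the Set state; later erasures skip them.
        self.erase_count += 1
        self.written = 0

    def write_next_pad(self, data: bytes) -> None:
        # Step 606: physical addresses PAD are filled in adjacent order.
        self.written += 1

    def full(self) -> bool:
        return self.written >= self.n_pads

def handle_requests(areas, requests):
    i = 0                                       # Step 602: start from WT-AREA[0]
    for data in requests:                       # Step 601: a writing request arrives
        if i < len(areas):                      # first pass: adjacent areas in order
            area = areas[i]                     # Step 603
            if area.erase_count <= 0:           # Step 604
                area.erase(include_dummy=True)  # Step 605: erase ERS-AREA and the DCY
            area.write_next_pad(data)           # Step 606
            if area.full():                     # Step 607
                i += 1                          # Step 608: select the adjacent area next
        else:                                   # Step 609: all DCY are already set;
            area = random.choice(areas)         #   areas may be chosen randomly and
            area.erase(include_dummy=False)     #   erased without the dummy arrays
            area.write_next_pad(data)
```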
In
In Step 603, the write area WT-AREA0 is selected. In Step 605, all memory cells in a chain memory array CY in an erasure area ERS-AREA in the write area WT-AREA0 and all memory cells in a dummy chain memory array DCY arranged in one row on an outer side of the erasure area ERS-AREA are set to the erased state (Set state). In a case of
Next, in Step 606, writing is serially performed from the physical address PAD0 to PAD7 in the write area WT-AREA0. For example, the writing is performed from a left side to a right side in the drawing.
When writing into the physical addresses PAD0 to 7 in the write area WT-AREA0 is completed, 1 is added to the value of i in Step 608, so that i=1 (=0+1). Accordingly, for the next writing, the write area WT-AREA1 that is adjacent to the write area WT-AREA0 is selected and a similar writing operation is repeated.
A case where the write area WT-AREA includes a plurality of physical addresses PAD has been described. However, it is obvious that a similar operation can be performed in a case where a write area WT-AREA includes only one physical address PAD.
By such a writing method, it is possible to bring the dummy chain memory arrays DCY into the set state during the writing of data, so that time for bringing the dummy chain memory arrays DCY into the set state is not needed at initial setting. Thus, it is possible to shorten the initial setting time and to use the memory module NVMMD0 immediately.
A major effect acquired by each of the above-described embodiments is as follows.
First, it is possible to simultaneously make memory cells in a plurality of chain memory arrays CY low resistive and to improve the data erasing rate. Second, since only data "0" is written into memory cells after erasure of a chain memory array CY, writing speed can be increased. Third, a stable writing operation can be realized since a system is used in which one of the set state and the reset state is first written simultaneously into all memory cells in a chain memory array CY (erasure) and the other state is then written into specific memory cells. Fourth, since there is a dummy chain memory array DCY between write areas WT-AREA, it is possible to write and hold data reliably in a write area WT-AREA without an influence of heat disturbance and to provide a highly reliable memory module. Fifth, it is possible to program, into an initial setting area in a non-volatile memory device, how the dummy chain memory arrays DCY are arranged. Thus, it is possible to flexibly accommodate the levels of function, performance, and reliability required of the memory module NVMMD0.
Sixth, when the number of pieces of bit data "0" is larger than the number of pieces of bit data "1" in the write data (DATA0), the proportion of bit data "0" constantly becomes equal to or smaller than ½ by inversion of each bit of the write data. Accordingly, it is possible to reduce the amount of written bit data "0" to at most half and to realize a low-power, high-speed memory module. Seventh, since a set chain memory array DSCY can function both as a buffer area that absorbs an influence of heat disturbance between write areas WT-AREA and as a memory array that stores data "1," it is possible to reduce the influence of heat disturbance and to write and hold data reliably in the write areas WT-AREA while preventing an increase in the storage-capacity penalty of a non-volatile memory. Thus, it is possible to provide a highly reliable memory module. Eighth, it is possible to program, into an initial setting area in a non-volatile memory device, how the set chain memory arrays DSCY are arranged. Thus, it is possible to flexibly accommodate the levels of function, performance, and reliability required of the memory module NVMMD0. Ninth, since it is possible to secure a set chain memory array DSCY by compression of data, it is possible to write and hold data reliably in a write area WT-AREA while reducing an influence of heat disturbance and to provide a highly reliable memory module. Tenth, as described with reference to
In the above, the invention made by the inventors has been described based on embodiments. However, the present invention is not limited to the above embodiments and can be modified in various manners within the spirit and the scope thereof. For example, the above embodiments are described in detail in order to make it easy to understand the present invention. The present invention is not necessarily limited to what includes all described configurations. Also, it is possible to replace a part of a configuration of an embodiment with a configuration of a different embodiment and to add a configuration of a different embodiment to a configuration of an embodiment. Moreover, addition/deletion/replacement of a different configuration can be performed with respect to a part of a configuration of each embodiment. Also, in each of the embodiments, a description is mainly made with a phase-change memory as a representative. However, it is possible to apply the present invention in a similar manner and to acquire a similar effect as long as a memory is a resistive random access memory including a ReRAM and the like.
Also, in the embodiments, a description is made with a memory, which has a three-dimensional structure and in which a plurality of memory cells is arranged in a manner serially laminated in a height direction with respect to a semiconductor substrate, as a representative. However, it is possible to apply the present invention in a similar manner and to acquire a similar effect in a memory which has a two-dimensional structure and in which one memory cell is arranged in a height direction with respect to a semiconductor substrate.
Filing Document: PCT/JP2013/078925; Filing Date: 10/25/2013; Country: WO; Kind: 00