This invention relates generally to computer memory and more particularly, to heterogeneous recovery in a redundant memory system.
Redundant array of independent memory (RAIM) systems have been developed to improve performance and/or to increase the availability of storage systems. RAIM distributes data across several independent memory channels (e.g., channels made up of memory modules, each containing one or more memory devices). Many different RAIM schemes have been developed, each having different characteristics and different tradeoffs. Performance, availability, and utilization/efficiency (the percentage of the storage that actually holds customer data) are perhaps the most important. The tradeoffs associated with the various schemes have to be carefully considered because improvements in one attribute can often result in reductions in another.
With the movement in high speed memory systems towards the use of differential drivers, the number of logical bus wires has been effectively cut in half. This makes the use of error correction code (ECC) protection across multiple channels of a memory more expensive, as the use of ECC causes an even further reduction in the number of bits of data that are transferred in each packet or frame across the channel. An alternative is the use of a cyclic redundancy check (CRC) on channel busses to detect errors. However, since a CRC allows errors to be detected but not corrected at the bus level, soft or hard errors detected on the busses require a retry of the failing operations at the bus level. Typically, this means retrying fetches and retrying stores to memory.
For stores, the buffers containing the store data merely have to hold the data until it is certain that the data has been stored. The store commands and data can be resent to the memory interface.
For fetches, the line of data can merely be refetched from memory. However, consideration has to be given to the various recovery scenarios. For instance, suppose a double line of data (e.g., 256 bytes) is required from memory but ECC only covers a quarter of a line (e.g., 64 bytes). If the error occurs on the first 64 bytes, the data can be refetched and the entire 256 byte line is simply delayed by the recovery time. However, if there is no error until the third quarter line is fetched, a decision has to be made about how to handle the first half of the line. For latency reasons, it may be advantageous to send the quarter lines as they are fetched. However, this means that any error on a quarter line will cause a gap while waiting for that quarter line. If the hardware does not have separate address/protocol tags for each quarter line, then there will be gaps on the fetch data, and the system may not be designed to handle gaps on the fetch data. One approach to avoiding the gaps is to delay the entire line until all the ECC is clean. A drawback to this approach is that the added latency would have to be incurred on all lines, not just those with errors.
Some recovery steps (e.g., data recalibration and clock recalibration) may take so long that they could hang system operations. For instance, the time it takes to spare a clock lane, recalibrate, and lock a phase locked loop (PLL) could be on the order of 10 ms. However, some systems can only tolerate hang periods of less than 1 ms.
Accordingly, and while existing techniques for dealing with recovery in a memory system may be suitable for their intended purpose, there remains a need in the art for error recovery schemes in a memory system that avoid introducing fetch gaps, avoid the additional latency caused by speculation in the recovery of errors, and avoid the system hangs introduced during long recovery sequences.
An embodiment is a memory system that includes a memory controller, a plurality of memory channels in communication with the memory controller, an error detection code mechanism configured for detecting a failing memory channel in the plurality of memory channels, and an error recovery mechanism. The error recovery mechanism is configured for receiving notification of the failing memory channel, for performing a recovery operation on the failing memory channel while the other memory channels are performing normal system operations, for bringing the recovered channel back into operational mode with the other memory channels for store operations, for continuing to mark the recovered channel to guard against stale data, for removing any stale data after the recovery operation is complete, and for removing the mark on the recovered channel to allow normal system operations with all of the memory channels, the mark being removed based on the removal of stale data being complete.
Another embodiment is a computer program product for performing recovery. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes receiving a notification that a memory channel has failed, the memory channel being one of a plurality of memory channels in a memory system. A recovery operation is performed on the failing memory channel while the other memory channels are performing normal system operations. Any stale data is removed in response to the recovery operation being completed, the removal occurring while the other memory channels are performing normal system operations. Normal system operations are continued with all of the memory channels based on the removal of any stale data being completed.
Referring now to the drawings wherein like elements are numbered alike in the several FIGURES:
An embodiment of the present invention provides a memory redundant array of independent memory (RAIM) tiered error correction code (ECC)/cyclic redundancy check (CRC) heterogeneous recovery system. An embodiment of a first tier of recovery includes a five channel reset followed by an operation retry. An embodiment of a second tier of recovery is performed for one or more of the channels (e.g., just the failing channel) and includes data recalibration with lane repair, reset, and then an operation retry. An embodiment of a third tier of recovery is performed for just the failing channel and includes clock recalibration with lane repair, data recalibration with lane repair, reset, and then an operation retry. In an embodiment, while one channel is in a recovery tier (e.g., performing a calibration) the other channels continue to run (e.g., fetches, stores, and normal customer traffic), thus time is not a critical factor in the recovery process.
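By way of illustration only, the following is a minimal C sketch of the tiered escalation described above. The type and function names are hypothetical, and the stubbed run_tier function stands in for the actual hardware recovery sequences of each tier.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical recovery tiers, ordered by increasing cost/duration. */
typedef enum { TIER_ONE = 1, TIER_TWO = 2, TIER_THREE = 3 } recovery_tier;

/* Stub standing in for the hardware recovery sequence of a given tier;
 * returns true if the channel runs clean after the operation retry.
 * Here we pretend only tier three fixes the fault, for illustration. */
static bool run_tier(recovery_tier tier, int channel) {
    printf("channel %d: running tier %d recovery\n", channel, tier);
    return tier == TIER_THREE;
}

/* Escalate tier by tier until the channel recovers; a channel that never
 * recovers would be checkstopped (permanently degraded). */
static bool recover_channel(int channel) {
    for (recovery_tier t = TIER_ONE; t <= TIER_THREE; t++) {
        if (run_tier(t, channel))
            return true;   /* operation retry succeeded */
    }
    printf("channel %d: checkstop, degrade channel\n", channel);
    return false;
}

int main(void) {
    recover_channel(2);
    return 0;
}
```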
An embodiment makes use of the RAIM ECC so that one channel can be entirely ignored for a long period of time. It also makes use of fetch and store channel marks to help control the fencing off of the problem channel while keeping reliability high. The isolated channel can then run a sequence of operations to calibrate and self-heal the interface. Since the other channels continue to run, time is no longer a factor.
An embodiment also provides a means to reset the store mark (so operations can be sent again to the recovering interface) and to sync the five channels back together. This is an important aspect because there are lock-step timing controls that require all five channels to have the same latency. An embodiment provides a means to update the channel data through a fast scrub so that the stale data introduced by the recovery process described herein is removed. After the fast scrub, the fetch mark can be reset.
An embodiment of the first tier of recovery, referred to herein as a “tier one recovery process,” allows for gapless fetches by using a unique guard feature. This tier also allows for a fast reset of all five channels while keeping dynamic random access memories (DRAMs) in a self-timed refresh state to keep from losing data. This tier also allows for the reset of some soft errors in the memory subsystem. Stores are retried to make sure that any questionable stores were redone properly.
An embodiment of the second tier of recovery, referred to herein as a “tier two recovery process,” is performed when errors are still occurring after one or more tier one recovery process attempts; eventually the hardware performs a tier two recovery process. This involves retraining one channel or all five channels for timing calibration, as well as an automatic data lane repair for any solid or high frequency bus lane errors. After the repair of these data lanes, the hardware retries any stores that were outstanding.
An embodiment of the third tier of recovery, referred to herein as a “tier three recovery process,” is performed in the case where there is a clock error. This process allows for the recalibration and/or sparing of a clock differential from a primary clock to a secondary clock. Since this tier takes a relatively long time (e.g., about ten milliseconds), it is performed as a last resort. An embodiment of the tier three recovery process includes a self-repair of clock channel errors and a clock recalibration on the failing channel. In an embodiment, the tier three recovery process includes a single channel clock recalibration (while the other four channels are operational), a five-channel data recalibration with lane repair, a reset, and then an operation retry. If the recovery is successful, the memory system operates normally, but a post tier three scrub is needed to clean up the stale channel data. An embodiment makes use of a fetch mark as well as a store mark to block storing anything to the channel during the tier three recovery process. It should be noted that while the failing channel is offline in the tier three recovery process, the other four channels can also be performing tier one or tier two recovery. A channel checkstop is performed to permanently degrade a channel that cannot recover.
An embodiment also includes programmable timers and counters that assist with forward progress and sequencing, and that can be used to drive the proper behavior of the tier one, tier two, and tier three recovery processes.
An embodiment also includes programmable hang counters for tier one, tier two, and tier three, which allow detection of a channel problem during recovery such that a problem channel that hangs can be taken offline while the remaining channels are allowed to continue to run.
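By way of illustration only, a per-tier hang counter of this kind might be sketched in C as follows. The structure, field names, and limit values are hypothetical; a real implementation would use hardware counters rather than software.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-tier hang counter: each recovery tier has its own
 * programmable limit; a recovery that runs past its limit is treated as
 * a hang and the channel is taken offline. */
typedef struct {
    unsigned limit_cycles[3];   /* programmable limits for tiers 1..3   */
    unsigned elapsed;           /* cycles spent in the current recovery */
    int      active_tier;       /* 1..3 while a recovery is in flight   */
} hang_counter;

/* Called once per cycle during recovery; true means "hang detected". */
static bool hang_tick(hang_counter *h) {
    if (h->active_tier < 1 || h->active_tier > 3)
        return false;
    return ++h->elapsed > h->limit_cycles[h->active_tier - 1];
}

int main(void) {
    hang_counter h = { .limit_cycles = {100, 1000, 10000}, .active_tier = 3 };
    while (!hang_tick(&h))
        ;   /* tier three recovery never completes in this example */
    printf("tier %d hang after %u cycles: take channel offline\n",
           h.active_tier, h.elapsed);
    return 0;
}
```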
An embodiment of the present invention makes use of a RAIM system with five memory channels with RAIM ECC across the five channels and CRCs within each channel. During normal operation, data are stored into all five channels and data are fetched from all five channels. In an embodiment, CRC is used to check the local channel interfaces between a memory controller and cascaded memory modules.
In an embodiment, there is a fetch channel mark that is used to decode the fetch data with a mark RAIM scheme such as the one described in commonly assigned U.S. patent application Ser. No. 12/822,503, entitled “Error Correction and Detection in a Redundant Memory System,” filed on Jun. 24, 2010, which is incorporated by reference herein in its entirety. This mark can be set statically (at boot time), after a degrade, during other recovery events, or when there is a CRC error present on the channel. In the case of fetch data, if a CRC error is detected on the fetch (upstream), the CRC error is used to mark the channel, thus allowing better protection/correction of the fetch data.
In an embodiment, store data are stored to all channels. When there is a CRC error present on the channel (either from a data fetch or a data store), an embodiment begins the recovery process described herein.
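By way of illustration only, the following C sketch models channel-mark-assisted correction using simple XOR parity across four data channels and one redundant channel. This is a simplification: the actual RAIM ECC described herein is a stronger symbol-based code that can correct additional errors on top of a marked channel, whereas plain parity only recovers the single marked (erased) channel.

```c
#include <stdint.h>
#include <stdio.h>

#define CHANNELS 5   /* four data channels plus one redundant channel */
#define WORDS    4   /* words per channel in this toy example */

/* Reconstruct the marked (fenced-off) channel by XOR-ing the other four.
 * The mark turns the bad channel into an erasure, which simple parity can
 * recover; the actual RAIM ECC is a stronger symbol-based code. */
static void reconstruct(uint32_t data[CHANNELS][WORDS], int marked) {
    for (int w = 0; w < WORDS; w++) {
        uint32_t x = 0;
        for (int ch = 0; ch < CHANNELS; ch++)
            if (ch != marked)
                x ^= data[ch][w];
        data[marked][w] = x;   /* marked channel recovered: no fetch gap */
    }
}

int main(void) {
    uint32_t data[CHANNELS][WORDS] = {{0}};
    /* Fill data channels 0..3 and accumulate parity into channel 4. */
    for (int ch = 0; ch < 4; ch++)
        for (int w = 0; w < WORDS; w++) {
            data[ch][w] = (uint32_t)(ch * 100 + w);
            data[4][w] ^= data[ch][w];
        }
    data[2][1] = 0xDEADBEEF;   /* channel 2 returns garbage: CRC flags it */
    reconstruct(data, 2);      /* CRC error becomes a fetch channel mark  */
    printf("recovered word: %u (expected %u)\n", (unsigned)data[2][1], 201u);
    return 0;
}
```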
As used herein, the term “memory channel” refers to a logical entity that is attached to a memory controller and which connects and communicates to registers, memory buffers and memory devices. Thus, for example, in a cascaded memory module configuration a memory channel would comprise the connection means from a memory controller to a first memory module, the connection means from the first memory module to a second memory module, and all intermediate memory buffers, etc. As used herein, the term “channel failure” refers to any event that can result in corrupted data appearing in the interface of a memory controller to the memory channel. This failure could be, for example, in a communication bus (e.g., electrical or optical) or in a device that is used as an intermediate medium for buffering data to be conveyed from memory devices through a communication bus, such as a memory hub device. The CRC referred to herein is calculated for data retrieved from the memory chips (also referred to herein as memory devices) and checked at the memory controller. In the case that the check does not pass, it is then known that a channel failure has occurred. An exemplary embodiment described herein applies both to settings in which a memory buffer or hub device that computes the CRC is incorporated physically in a memory module and to configurations in which the memory buffer or hub device is incorporated into the system outside of the memory module.
As shown in the embodiment depicted in
Each memory interface bus 106 in the embodiment depicted in
As used herein, the term “RAIM” refers to redundant arrays of independent memory modules (e.g., dual in-line memory modules or “DIMMs”). In a RAIM system, if one of the memory channels fails (e.g., a memory module in the channel), the redundancy allows the memory system to use data from one or more of the other memory channels to reconstruct the data stored on the memory module(s) in the failing channel. The reconstruction is also referred to as error correction. As used herein, the terms “RAIM” and “redundant array of independent disks” or “RAID” are used interchangeably.
In an exemplary embodiment, the memory system depicted in
As used herein, the term “mark” refers to an indication given to an ECC that a particular symbol or set of symbols of a read word are suspected to be faulty. The ECC can then use this information to enhance its error correction properties.
As used herein, the term “correctable error” or “CE” refers to an error that can be corrected while the system is operational, and thus a CE does not cause a system outage. As used herein, the term “uncorrectable error” or “UE” refers to an error that cannot be corrected while the memory system is operational, and thus correction of a UE causes the memory system to be off-line for some period of time while the cause of the UE is being corrected (e.g., by replacing a memory device, replacing a memory module, or recalibrating an interface).
In an embodiment, if there are multiple channel errors, the data will be decoded as a UE and the data must be flagged with a special UE (SPUE) in order for the processor to treat this data as unusable. In an embodiment, if there are transient CRC errors present (e.g., when one channel is marked and another channel has CRC errors), a unique SPUE flag is set to distinguish this ‘transient’ UE condition from a ‘permanent’ UE condition. The effect of the transient SPUE is that the processor can retry the fetch and get correctable data once the CRC error has cleared. The permanent SPUE indicates that the memory UE will persist and the operating system can be notified that the line or page of data is no longer usable (even if there were additional recovery attempts).
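By way of illustration only, the distinction between the transient and permanent SPUE conditions might be sketched as follows; the enumeration and function names are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical classification of fetch results. With one channel already
 * marked, a concurrent CRC error on a second channel decodes as a UE. */
typedef enum { DATA_OK, UE_TRANSIENT_SPUE, UE_PERMANENT_SPUE } fetch_status;

/* A transient SPUE tells the processor a refetch may return correctable
 * data once the CRC error clears; a permanent SPUE tells the operating
 * system that the line or page is no longer usable. */
static fetch_status classify(bool channel_marked, bool crc_error_active,
                             bool error_persists_after_recovery) {
    if (!(channel_marked && crc_error_active))
        return DATA_OK;   /* correctable via RAIM ECC plus the mark */
    return error_persists_after_recovery ? UE_PERMANENT_SPUE
                                         : UE_TRANSIENT_SPUE;
}

int main(void) {
    /* Marked channel plus a CRC error that clears after recovery. */
    return classify(true, true, false) == UE_TRANSIENT_SPUE ? 0 : 1;
}
```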
Output from the CRC checkers 210 is the channel data 202, which includes data and ECC bits that were generated by an ECC generator. The channel data 202 are input to RAIM ECC decoder logic 204, where the channel data 202 are analyzed for errors which may be detected and corrected using the RAIM ECC and the temporary CRC marking on a failing channel (if a failing channel is detected by any of the CRC checkers 210). Outputs from the RAIM ECC decoder logic 204 are the corrected data 206 (in this example 64 bytes of corrected data) and an ECC status 208. If CRC errors were detected by the CRC checkers 210, then recovery logic 212 is invoked to recover any outstanding stores and to repair any downstream bus 104 and upstream bus 108 lanes. In an exemplary embodiment, the recovery logic 212 performs a retry of stores and/or fetches where errors have been identified. Exemplary embodiments provide the ability to tolerate soft errors (e.g., temporarily incorrect data on good memory devices), hard errors (e.g., permanently damaged memory devices), and also channel failures or other internal errors without generating UEs.
Next, the memory controller 110 waits for previous operations (e.g., pending stores and fetches) to complete 308. In an embodiment, the channel having the error is shut down and any pending stores or fetches to memory devices in the failing channel are ignored, and only pending operations to the other four memory channels are completed. Because the four non-failing channels are allowed to run, particularly for pending fetches, the RAIM ECC decoder logic 204 is able to correct any missing fetch data from the failing channel and provide gapless fetch data back to the system from the memory subsystem. Therefore, the steps of halting new operations 306 and waiting for previous operations 308 allow for gapless fetches without retry and without additional latency.
Next, the memory controller 110 sends a downstream poison CRC to all five channels 310. In an embodiment, the poison CRC initiates a recovery scheme that helps to clear out channel errors and puts the DRAMs (or other memory devices) into a self-timed refresh (STR) state. The memory controller 110 also sends an error acknowledgement 312 and waits about 550 cycles 314 (the number of cycles is programmable and is implementation and/or technology specific), which initiates a recovery scheme to exit the error state and prepare the channels to be brought back online. Waiting a pre-specified number of cycles allows all of the memory devices to be put into STR.
In an embodiment, sending the error acknowledgement 312 resets buffers and control logic in an attempt to repair soft errors that are present in some of these devices.
Next, the memory devices exit STR and enter a power down state 316 to prepare the channels to receive reads and writes (also referred to herein as fetches and stores). At this point the memory controller 110 retries stores and any other pending operations 318 that were issued prior to the error. In an embodiment, the fetches are not retried because they were properly corrected through RAIM and do not need to be retried. The fence is removed and the memory devices enter a normal state (or a power down state) 320. In addition, the memory system enters a normal processing state with new stores and fetches being executed.
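By way of illustration only, the tier one sequence just described (halt, drain, poison CRC, error acknowledgement, programmable wait, STR exit, store retry, fence removal) can be summarized as the following C sketch, in which every function is a stub standing in for a hardware action; reference numerals from the description are noted in the comments.

```c
#include <stdio.h>

/* Each stub stands in for a hardware action in the tier one flow. */
static void halt_new_ops(void)        { puts("halt new operations (306)"); }
static void drain_pending_ops(void)   { puts("complete pending ops on good channels (308)"); }
static void send_poison_crc(void)     { puts("downstream poison CRC: DRAMs enter STR (310)"); }
static void send_error_ack(void)      { puts("error acknowledgement: reset buffers (312)"); }
static void wait_cycles(int n)        { printf("wait %d cycles, programmable (314)\n", n); }
static void exit_str_power_down(void) { puts("exit STR, enter power down (316)"); }
static void retry_stores(void)        { puts("retry stores issued before the error (318)"); }
static void remove_fence(void)        { puts("remove fence, resume normal state (320)"); }

/* Fetches are not retried: the RAIM ECC already corrected the missing
 * channel data, so the fetches were gapless. */
static void tier_one_recovery(void) {
    halt_new_ops();
    drain_pending_ops();
    send_poison_crc();
    send_error_ack();
    wait_cycles(550);
    exit_str_power_down();
    retry_stores();
    remove_fence();
}

int main(void) { tier_one_recovery(); return 0; }
```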
An embodiment of the tier one recovery process 300 clears out errors from either soft interface failures or even from soft error upsets (e.g., in latches). An embodiment includes logic that can detect latch errors within a channel (e.g., on the memory module buffer device) and force CRC errors in order to allow this recovery process to reset those soft errors.
Some of the above steps in the tier one recovery process 300 may be skipped for some channels. For example, the memory controller 110 may send a downstream poison CRC and/or an error acknowledgement only to the channel in which the error was detected if the overall tier one recovery process time is short enough that refresh is not skipped. For instance, if the next refresh is due in 100 ns but there is a guarantee that a quick, single-channel tier one recovery completes in 50 ns, there may not be a need to put all five channels into the self-timed refresh (STR) state. In an embodiment, the tier one recovery process 300 is performed on all five channels together.
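By way of illustration only, the refresh-deadline check from the example above might look like the following; the function name and the nanosecond parameters are hypothetical.

```c
#include <stdbool.h>

/* True when a quick, single-channel tier one recovery is guaranteed to
 * finish before the next refresh is due, so the other channels need not
 * be placed into self-timed refresh. */
static bool can_skip_all_channel_str(unsigned ns_until_next_refresh,
                                     unsigned worst_case_tier_one_ns) {
    return worst_case_tier_one_ns < ns_until_next_refresh;
}

int main(void) {
    /* Next refresh due in 100 ns; single-channel tier one bounded at 50 ns. */
    return can_skip_all_channel_str(100, 50) ? 0 : 1;
}
```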
If there is a hard data or clock error, or even an intermittent error, the tier one recovery process 300 may not be enough to correct the error and the interface may keep failing. There is programmable forward progress logic that monitors whether the mainstream logic is getting processed or whether CRC recovery events are occurring too closely together.
When enough forward progress is not being made (e.g., based on a programmable forward progress window), a tier two recovery process 400, such as that depicted in
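By way of illustration only, a forward progress monitor of this kind might be sketched as follows. The window length and threshold stand in for the programmable values mentioned above, and all names are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical forward progress monitor: if more CRC recovery events than
 * the programmable threshold land inside one programmable window, not
 * enough mainstream work is getting done and the next tier is forced. */
typedef struct {
    unsigned window_cycles;   /* programmable window length           */
    unsigned threshold;       /* recovery events tolerated per window */
    unsigned elapsed;
    unsigned events;
} fwd_progress;

/* Called once per cycle; returns true when escalation is required. */
static bool fp_tick(fwd_progress *m, bool crc_recovery_event) {
    if (crc_recovery_event && ++m->events > m->threshold)
        return true;                       /* forward progress too low   */
    if (++m->elapsed >= m->window_cycles)  /* window expired: start over */
        m->elapsed = m->events = 0;
    return false;
}

int main(void) {
    fwd_progress m = { .window_cycles = 1000, .threshold = 3 };
    for (unsigned i = 0; i < 5000; i++)
        if (fp_tick(&m, i % 200 == 0))     /* CRC events every 200 cycles */
            return 0;                      /* escalate to tier two        */
    return 1;
}
```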
During the tier two recovery process 400, there is not only a quiesce and reset of the channels, but there is also a data self-heal step that attempts to repair data lanes that are in error by sparing them out to spare bus lanes.
An embodiment of the tier two recovery process 400 includes the tier one recovery process 300 with some additional processing 402. An embodiment of the tier two recovery process 400 runs through the same steps as the tier one recovery process 300 described previously, through waiting 550 cycles 314. After waiting 550 cycles 314, the tier two recovery process 400 performs a tier two fast initialization 406. These steps can also be referred to as training state two (TS2) through training state seven (TS7). During these steps all of the lanes in all of the channels are retrained and checked, and any problem lanes that are detected after training are repaired (e.g., using spare lanes). This is a self-heal procedure for data that calibrates data downstream and upstream across the channels (e.g., across the cascaded DIMMs and memory controller 110). When completed, any solid or high frequency data failures that can be repaired will have been self-repaired. In another embodiment, only those lanes in the failing channel are retrained and checked while the other channels are idle. In an embodiment, the step of sending the error acknowledgement 312 is skipped when running the tier two fast initialization (TS2-TS7) 406. Processing then continues by determining if there is still a problem with a channel 404. A remaining problem could be caused by a variety of reasons, such as, but not limited to, having more lanes fail than are available as spare lanes.
If there is still a problem with a channel, then the bad channel is degraded 408. In an embodiment, the memory controller 110 is notified of the failing channel. The other four channels then continue with exiting STR and entering power down 316. If all of the channels are working properly, then all five of the channels exit STR and enter power down 316. At this point the memory controller 110 retries stores and any other pending operations 318 that were issued prior to the error. In an embodiment, the fetches are not retried because they were properly corrected through RAIM and do not need to be retried. The fence is removed and the memory devices enter a normal state (or a power down state) 320. In addition, the memory system enters a normal processing state with new stores and fetches being executed.
In an embodiment, whether steps in the tier two recovery process 400 are repeated is programmable. In an embodiment, “x” number of tier one recovery attempts are made before forcing a tier two recovery event. In another embodiment, “x” number of tier three recovery processes are performed before forcing a tier one recovery process (or a degrade). In an embodiment, a setting of “311” would run three tier one processes, one tier two process, and one tier three process for a solid clock error. These are just examples; other scenarios may also be implemented.
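By way of illustration only, decoding a programmable setting such as “311” might be sketched as follows; the three-digit string encoding is taken from the example above and is not a defined register format.

```c
#include <stdio.h>

/* Decode a hypothetical three-digit policy such as "311": three tier one
 * attempts, then one tier two, then one tier three, before degrading. */
static int next_tier(const char *policy, int attempts[3]) {
    for (int tier = 0; tier < 3; tier++)
        if (attempts[tier] < policy[tier] - '0') {
            attempts[tier]++;
            return tier + 1;     /* run this tier next */
        }
    return -1;                   /* all tiers exhausted: degrade channel */
}

int main(void) {
    int attempts[3] = {0};
    int t;
    while ((t = next_tier("311", attempts)) > 0)
        printf("run tier %d recovery\n", t);
    puts("degrade channel");
    return 0;
}
```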
If the hardware continues to detect problems with forward progress and the tier two recovery process 400 did not repair the problem, then the problem is likely related to a clock. In this case a tier three recovery process 500 such as that depicted in
In an embodiment, the single channel goes through TS0 (clock retraining) 502, which includes calibration and, if necessary, self-repair of the primary clock to an alternate clock lane. An embodiment has a lane that can spare a clock or data wire and another lane that is dedicated as a spare for a second data lane. After the lengthy part of the recovery on the single channel is done (in this case TS0), the other four channels are quiesced of traffic. The store mark is then reset so operations can be sent to all five channels.
All five channels are then calibrated and re-aligned for wire delay and other latencies 506 to bring them back to a lock-step environment (e.g., the equivalent of a tier two recovery process 400). All five channels are then moved back into normal operation 320 by moving the memory devices from STR into power down 316, after which the memory controller 110 retries stores and any other pending operations 318 that were issued prior to the error. In an embodiment, the fetches are not retried because they were properly corrected through RAIM and do not need to be retried. The fence is removed and the memory devices enter a normal state (or a power down state) 320. In addition, the memory system enters a normal processing state with new stores and fetches being executed.
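By way of illustration only, the tier three sequence just described can be summarized in the following C sketch, in which each stub stands in for a hardware step and the ordering follows the description above.

```c
#include <stdio.h>

/* Stubs for the tier three flow; the failing channel alone runs the long
 * clock retraining while the other four channels keep serving traffic. */
static void ts0_clock_retrain(int ch)  { printf("channel %d: TS0 clock recalibration/spare (502)\n", ch); }
static void quiesce_others(void)       { puts("quiesce traffic on the other four channels"); }
static void reset_store_mark(int ch)   { printf("channel %d: store mark reset\n", ch); }
static void recalibrate_all(void)      { puts("calibrate/re-align all five channels, lock-step (506)"); }
static void resume_fetch_marked(int ch){ printf("resume; channel %d still fetch-marked\n", ch); }

static void tier_three_recovery(int failing_channel) {
    ts0_clock_retrain(failing_channel);      /* ~10 ms; others keep running */
    quiesce_others();
    reset_store_mark(failing_channel);       /* stores go to all five again */
    recalibrate_all();
    resume_fetch_marked(failing_channel);    /* fast scrub 508 comes next   */
}

int main(void) { tier_three_recovery(1); return 0; }
```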
The single, bad channel remains marked for fetching. This is because that channel can have stale data, caused by the other four channels being written with new data while that channel was in the tier three recovery process 500. The memory controller 110 initiates a post tier three recovery fast scrub 508 which reads data from all channels (although, with the fetch channel mark, the recovering channel's data is corrected) and writes corrected data back to all five channels. This cleans up all the stale data in the recovering channel. An embodiment can run this fast scrub 508 using code, hardware, or both code and hardware.
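By way of illustration only, the post tier three recovery fast scrub 508 can be modeled as the following read-correct-write loop; as in the earlier sketch, XOR parity stands in for the actual RAIM ECC, which is a simplification.

```c
#include <stdint.h>
#include <stdio.h>

#define CHANNELS 5
#define LINES    8   /* toy memory size */

/* Every line is fetched with the fetch channel mark still in place, so the
 * recovering channel's stale data is corrected (here via the XOR parity
 * stand-in for the RAIM ECC), and the corrected line is written back to
 * all five channels, overwriting the stale data. */
static void fast_scrub(uint32_t mem[LINES][CHANNELS], int marked) {
    for (int line = 0; line < LINES; line++) {
        uint32_t corrected = 0;
        for (int ch = 0; ch < CHANNELS; ch++)
            if (ch != marked)
                corrected ^= mem[line][ch];
        mem[line][marked] = corrected;   /* stale data overwritten */
    }
    /* Only after the scrub completes can the fetch channel mark be removed. */
}

int main(void) {
    uint32_t mem[LINES][CHANNELS] = {{0}};
    fast_scrub(mem, 3);
    puts("scrub complete; fetch mark on channel 3 can be removed");
    return 0;
}
```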
Once the post tier three recovery fast scrub 508 is complete, the fetch channel mark is removed. Processing then continues at normal operation 320 unless there is a subsequent channel error (i.e., the channel error was not corrected). If there is a subsequent channel error, then an error report or alert is sent about the channel degrade 510 and the system continues with normal operation 320 with one degraded channel. In an embodiment, a mark is put on the bad channel and fetches ignore that channel. This is considered a RAIM degrade mode because full channel failures on top of the marked channel cannot be corrected. In an embodiment, stores are also blocked from this channel to save power. Turning to
An embodiment of a forward progress window (also referred to herein as a “forward progress monitor”), such as the one depicted in
An embodiment of an interface monitor, such as the one depicted in
In an embodiment, the interface monitor window starts asynchronously with respect to the CRC recovery and is based on a free-running counter. The interface monitor window tier (one, two, three) counters are reset at the end of the interface monitor window. In an embodiment, the escalation to the next tier does not get reset until a next CRC error causes the escalated tier to be performed.
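By way of illustration only, an interface monitor window with the reset behavior described above might be sketched as follows; the structure, limits, and function names are hypothetical, and the handling of the escalation latch reflects one reading of the description.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical interface monitor: per-tier counters inside a window timed
 * from a free-running counter.  Counters clear at the end of each window,
 * but a pending escalation is held until the next CRC error runs it. */
typedef struct {
    uint64_t window_cycles;
    uint64_t window_start;    /* free-running counter value at window start */
    unsigned tier_events[3];  /* tier one/two/three counts in this window   */
    unsigned tier_limit[3];   /* programmable escalation thresholds         */
    int      escalate_to;     /* pending tier; 0 = none (latched)           */
} iface_monitor;

static void im_event(iface_monitor *m, uint64_t now, int tier) {
    if (now - m->window_start >= m->window_cycles) {
        memset(m->tier_events, 0, sizeof m->tier_events); /* window ends */
        m->window_start = now;
        /* escalate_to is deliberately NOT cleared on window rollover    */
    }
    if (++m->tier_events[tier - 1] > m->tier_limit[tier - 1] && tier < 3)
        m->escalate_to = tier + 1;
}

/* On the next CRC error, run the escalated tier and clear the latch. */
static int im_next_tier(iface_monitor *m, int default_tier) {
    int t = m->escalate_to ? m->escalate_to : default_tier;
    m->escalate_to = 0;
    return t;
}

int main(void) {
    iface_monitor m = { .window_cycles = 1000, .tier_limit = {2, 1, 0} };
    im_event(&m, 10, 1);
    im_event(&m, 20, 1);
    im_event(&m, 30, 1);   /* third tier one event exceeds the limit of 2 */
    printf("next CRC error runs tier %d\n", im_next_tier(&m, 1));
    return 0;
}
```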
Technical effects and benefits include the ability to recover from memory channel failures. This may lead to significant improvements in memory system availability and serviceability.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
As described above, embodiments can be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. In exemplary embodiments, the invention is embodied in computer program code executed by one or more network elements. Embodiments include a computer program product on a computer usable medium with computer program code logic containing instructions embodied in tangible media as an article of manufacture. Exemplary articles of manufacture for computer usable medium may include floppy diskettes, CD-ROMs, hard drives, universal serial bus (USB) flash drives, or any other computer-readable storage medium, wherein, when the computer program code logic is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. Embodiments include computer program code logic, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code logic is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code logic segments configure the microprocessor to create specific logic circuits.
Embodiments also include a computer program product 900 as depicted in
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
This application is a continuation of U.S. patent application Ser. No. 12/822,968, filed Jun. 24, 2010, the content of which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4464747 | Groudan et al. | Aug 1984 | A |
4817091 | Katzman et al. | Mar 1989 | A |
4996687 | Hess et al. | Feb 1991 | A |
5124948 | Takizawa et al. | Jun 1992 | A |
5163023 | Ferris et al. | Nov 1992 | A |
5272671 | Kudo | Dec 1993 | A |
5463643 | Gaskins | Oct 1995 | A |
5488691 | Fuoco | Jan 1996 | A |
5499253 | Lary | Mar 1996 | A |
5513135 | Dell et al. | Apr 1996 | A |
5537665 | Patel et al. | Jul 1996 | A |
5574945 | Elko et al. | Nov 1996 | A |
5655076 | Kimura et al. | Aug 1997 | A |
5680564 | Divivier | Oct 1997 | A |
5684810 | Nakamura et al. | Nov 1997 | A |
6012839 | Nguyen | Jan 2000 | A |
6125469 | Zook et al. | Sep 2000 | A |
6131178 | Fujita et al. | Oct 2000 | A |
6282701 | Wygodny et al. | Aug 2001 | B1 |
6332206 | Nakatsuji et al. | Dec 2001 | B1 |
6381685 | Dell et al. | Apr 2002 | B2 |
6418068 | Raynham | Jul 2002 | B1 |
6442726 | Knefel | Aug 2002 | B1 |
6618775 | Davis | Sep 2003 | B1 |
6715116 | Lester et al. | Mar 2004 | B2 |
6763444 | Thomann | Jul 2004 | B2 |
6820072 | Skaanning et al. | Nov 2004 | B1 |
6845472 | Walker | Jan 2005 | B2 |
6854070 | Johnson et al. | Feb 2005 | B2 |
6973612 | Rodi | Dec 2005 | B1 |
6976194 | Cypher | Dec 2005 | B2 |
6981205 | Fukushima et al. | Dec 2005 | B2 |
6988219 | Hitz et al. | Jan 2006 | B2 |
7055054 | Olarig | May 2006 | B2 |
7099994 | Thayer et al. | Aug 2006 | B2 |
7149269 | Cranford, Jr. et al. | Dec 2006 | B2 |
7149945 | Brueggen | Dec 2006 | B2 |
7191257 | Ali Khan et al. | Mar 2007 | B2 |
7200780 | Kushida | Apr 2007 | B2 |
7278086 | Banks et al. | Oct 2007 | B2 |
7313749 | Nerl et al. | Dec 2007 | B2 |
7320086 | Majni et al. | Jan 2008 | B2 |
7353316 | Erdmann | Apr 2008 | B2 |
7409581 | Santeler et al. | Aug 2008 | B2 |
7467126 | Smith et al. | Dec 2008 | B2 |
7484138 | Hsieh et al. | Jan 2009 | B2 |
7752490 | Abe | Jul 2010 | B2 |
7962803 | Huber et al. | Jun 2011 | B2 |
8041990 | O'Connor et al. | Oct 2011 | B2 |
8046628 | Resnick | Oct 2011 | B2 |
8103900 | Fry et al. | Jan 2012 | B2 |
8522122 | Alves et al. | Aug 2013 | B2 |
20020066052 | Olarig et al. | May 2002 | A1 |
20020181633 | Trans | Dec 2002 | A1 |
20020199172 | Bunnell | Dec 2002 | A1 |
20030002358 | Lee et al. | Jan 2003 | A1 |
20030023930 | Fujiwara et al. | Jan 2003 | A1 |
20030208704 | Bartels et al. | Nov 2003 | A1 |
20040034818 | Gross et al. | Feb 2004 | A1 |
20040093472 | Dahlen et al. | May 2004 | A1 |
20040123223 | Halford | Jun 2004 | A1 |
20040168101 | Kubo | Aug 2004 | A1 |
20040227946 | Li et al. | Nov 2004 | A1 |
20050060521 | Wang | Mar 2005 | A1 |
20050108594 | Menon et al. | May 2005 | A1 |
20050204264 | Yusa | Sep 2005 | A1 |
20060067449 | Boehl et al. | Mar 2006 | A1 |
20060146318 | Adam et al. | Jul 2006 | A1 |
20060156190 | Finkelstein et al. | Jul 2006 | A1 |
20060244827 | Moya | Nov 2006 | A1 |
20060248406 | Qing et al. | Nov 2006 | A1 |
20060251416 | Letner et al. | Nov 2006 | A1 |
20060282745 | Joseph et al. | Dec 2006 | A1 |
20070011562 | Alexander et al. | Jan 2007 | A1 |
20070033195 | Stange et al. | Feb 2007 | A1 |
20070047344 | Thayer et al. | Mar 2007 | A1 |
20070047436 | Arai et al. | Mar 2007 | A1 |
20070050688 | Thayer | Mar 2007 | A1 |
20070089035 | Alexander et al. | Apr 2007 | A1 |
20070101094 | Thayer et al. | May 2007 | A1 |
20070150792 | Ruckerbauer | Jun 2007 | A1 |
20070168730 | Memmi | Jul 2007 | A1 |
20070192667 | Nieto et al. | Aug 2007 | A1 |
20070201595 | Stimple et al. | Aug 2007 | A1 |
20070217559 | Stott et al. | Sep 2007 | A1 |
20070226544 | Woodhouse | Sep 2007 | A1 |
20070239379 | Newcomb et al. | Oct 2007 | A1 |
20070260623 | Jaquette et al. | Nov 2007 | A1 |
20070266275 | Stimple et al. | Nov 2007 | A1 |
20070286199 | Coteus et al. | Dec 2007 | A1 |
20080005644 | Dell | Jan 2008 | A1 |
20080010435 | Smith et al. | Jan 2008 | A1 |
20080046792 | Yamamoto et al. | Feb 2008 | A1 |
20080046796 | Dell et al. | Feb 2008 | A1 |
20080056415 | Chang et al. | Mar 2008 | A1 |
20080126828 | Girouard et al. | May 2008 | A1 |
20080130816 | Martin et al. | Jun 2008 | A1 |
20080163385 | Mahmoud | Jul 2008 | A1 |
20080168329 | Han | Jul 2008 | A1 |
20080216054 | Bates et al. | Sep 2008 | A1 |
20080222449 | Ramgarajan et al. | Sep 2008 | A1 |
20080250270 | Bennett | Oct 2008 | A1 |
20080266999 | Thayer | Oct 2008 | A1 |
20080285449 | Larsson et al. | Nov 2008 | A1 |
20080313241 | Li et al. | Dec 2008 | A1 |
20090006886 | O'Connor | Jan 2009 | A1 |
20090006900 | Lastras-Montano et al. | Jan 2009 | A1 |
20090024902 | Jo et al. | Jan 2009 | A1 |
20090049365 | Kim et al. | Feb 2009 | A1 |
20090052600 | Chen et al. | Feb 2009 | A1 |
20090106491 | Piszczek et al. | Apr 2009 | A1 |
20090164715 | Astigarraga et al. | Jun 2009 | A1 |
20090177457 | Dai et al. | Jul 2009 | A1 |
20090193315 | Gower et al. | Jul 2009 | A1 |
20090228648 | Wack | Sep 2009 | A1 |
20090287890 | Bolosky | Nov 2009 | A1 |
20100005281 | Buchmann et al. | Jan 2010 | A1 |
20100005345 | Ferraiolo et al. | Jan 2010 | A1 |
20100082066 | Sivaramakrishnam et al. | Apr 2010 | A1 |
20100107148 | Decker et al. | Apr 2010 | A1 |
20100162033 | Ahn et al. | Jun 2010 | A1 |
20100182056 | Liang et al. | Jul 2010 | A1 |
20100205367 | Ehrlich et al. | Aug 2010 | A1 |
20100214063 | Rofougaran | Aug 2010 | A1 |
20100217915 | O'Connor et al. | Aug 2010 | A1 |
20100218051 | Walker et al. | Aug 2010 | A1 |
20100220828 | Fuller et al. | Sep 2010 | A1 |
20100241899 | Mayer et al. | Sep 2010 | A1 |
20100269021 | Gower et al. | Oct 2010 | A1 |
20100293532 | Andrade et al. | Nov 2010 | A1 |
20100306489 | Abts et al. | Dec 2010 | A1 |
20100306574 | Suzuki et al. | Dec 2010 | A1 |
20100332909 | Larson | Dec 2010 | A1 |
20110051854 | Kizer et al. | Mar 2011 | A1 |
20110075782 | Zhang et al. | Mar 2011 | A1 |
20110078496 | Jeddeloh | Mar 2011 | A1 |
20110126079 | Wu et al. | May 2011 | A1 |
20110173162 | Anderson et al. | Jul 2011 | A1 |
20110320864 | Gower et al. | Dec 2011 | A1 |
20110320869 | Gower et al. | Dec 2011 | A1 |
20110320881 | Dodson et al. | Dec 2011 | A1 |
20110320914 | Alves et al. | Dec 2011 | A1 |
20120233500 | Roettgermann et al. | Sep 2012 | A1 |
Number | Date | Country |
---|---|---|
11144491 | May 1999 | JP |
2006029243 | Mar 2006 | WO |
Entry |
---|
Chen, P. M., et al.; “RAID: High Performance, Reliable Secondary Storage”; ACM Computing Surveys; ACM, New York, NY, US vol. 26, No. 2, Jun. 1, 1994, pp. 145-185. |
D. Wortzman; “Two-Tier Error Correcting Code for Memories”; vol. 26, #10, pp. 5314-5318; Mar. 1984. |
EP Application No. 08760760.2 Examination Report dated Jun. 10, 2010, 7 pages. |
EP Application No. 08760760.2 Examination Report dated Jul. 23, 2012, 7 pages. |
International Search Report and Written Opinion for PCT/EP2008/057199 dated Mar. 23, 2009, 10 pages. |
International Search Report and Written Opinion for PCT/EP2011/058924 dated Nov. 9, 2011, 9 pages. |
L.A. Lastras-Montano; “A new class of array codes for memory storage”; Version—Jan. 19, 2011, 9 pages. |
System Z GF (65536) x8 RAIM Code—Mar. 12, 2010, pp. 1-22. |
The RAIDBook—A Source Book for RAID Technology by the RAID Advisory Board, Lino Lakes, MN, Jun. 9, 1993—XP002928115; pp. 1-113. |
U.S. Appl. No. 12/822,868, filed Jun. 24, 2010; Non-Final Office Action dated Mar. 29, 2013; 23 pages. |
U.S. Appl. No. 12/822,968, filed Jun. 24, 2010; Non-Final Office Action dated Oct. 11, 2012; 23 pages. |
U.S. Appl. No. 12/822,868, filed Jun. 24, 2010; Notice of Allowance dated Sep. 5, 2013; 15 pages. |
Number | Date | Country | |
---|---|---|---|
20130191683 A1 | Jul 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12822968 | Jun 2010 | US |
Child | 13793363 | US |