The present disclosure relates in general to information handling systems, and more particularly to systems and methods for improvement of performance and signal integrity in memory systems.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Information handling systems often use storage resources (e.g., hard disk drives and/or arrays thereof) to store data. Typically, storage solutions are software-based or hardware-based. Software-based solutions may value hardware agnosticity at the sacrifice of performance. Hardware-based solutions may achieve higher performance in smaller form factors and at lower power, but may require specialized hardware and firmware that are tightly coupled to one another. With the advent of and industry momentum behind Software-Defined Storage, storage software may increasingly be executed on commodity servers, which may be less efficient due to the absence of hardware-accelerated silicon devices and resistance to “locking in” to a single vendor. Accordingly, existing architectures typically solve for either performance or hardware agnosticity, but not both.
In accordance with the teachings of the present disclosure, the disadvantages and problems associated with storage systems may be reduced or eliminated.
In accordance with embodiments of the present disclosure, a memory system may include a memory module comprising a plurality of memory chips configured to store data and a hardware accelerator communicatively coupled to the memory chips and configured to, in response to an input/output operation to a storage resource, perform a storage function to assist movement and calculation of data in the memory system associated with the input/output operation.
In accordance with these and other embodiments of the present disclosure, a method may include receiving, at a hardware accelerator of a memory module comprising the hardware accelerator and a plurality of memory chips communicatively coupled to the hardware accelerator, an indication of an input/output operation to a storage resource. The method may also include, in response to the input/output operation, performing a storage function to assist movement and calculation of data in a memory system associated with the input/output operation.
In accordance with these and other embodiments of the present disclosure, an information handling system may include a processor and a memory module comprising a plurality of memory chips configured to store data and a hardware accelerator communicatively coupled to the memory chips and configured to, in response to an input/output operation to a storage resource, perform a storage function to assist movement and calculation of data in a memory system associated with the input/output operation.
Technical advantages of the present disclosure may be readily apparent to one skilled in the art from the figures, description and claims included herein. The objects and advantages of the embodiments will be realized and achieved at least by the elements, features, and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the claims set forth in this disclosure.
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
Preferred embodiments and their advantages are best understood by reference to
For the purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. For example, an information handling system may be a personal computer, a personal digital assistant (PDA), a consumer electronic device, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include memory, one or more processing resources such as a central processing unit (“CPU”) or hardware or software control logic. Additional components of the information handling system may include one or more storage devices, one or more communications ports for communicating with external devices as well as various input/output (“I/O”) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communication between the various hardware components.
For the purposes of this disclosure, computer-readable media may include any instrumentality or aggregation of instrumentalities that may retain data and/or instructions for a period of time. Computer-readable media may include, without limitation, storage media such as a direct access storage device (e.g., a hard disk drive or floppy disk), a sequential access storage device (e.g., a tape disk drive), compact disk, CD-ROM, DVD, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and/or flash memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic and/or optical carriers; and/or any combination of the foregoing.
For the purposes of this disclosure, information handling resources may broadly refer to any component, system, device, or apparatus of an information handling system, including without limitation processors, service processors, basic input/output systems, buses, memories, I/O devices and/or interfaces, storage resources, network interfaces, motherboards, and/or any other components and/or elements of an information handling system.
Processor 103 may include any system, device, or apparatus configured to interpret and/or execute program instructions and/or process data, and may include, without limitation a microprocessor, microcontroller, digital signal processor (DSP), application specific integrated circuit (ASIC), or any other digital or analog circuitry configured to interpret and/or execute program instructions and/or process data. In some embodiments, processor 103 may interpret and/or execute program instructions and/or process data stored and/or communicated by one or more of memory system 104, storage resource 106, and/or another component of information handling system 102. As shown in
Memory controller 108 may be any system, device, or apparatus configured to manage and/or control memory system 104. For example, memory controller 108 may be configured to read data from and/or write data to memory modules 116 comprising memory system 104. Additionally or alternatively, memory controller 108 may be configured to refresh memory modules 116 and/or memory chips 110 thereof in embodiments in which memory system 104 comprises DRAM. Although memory controller 108 is shown in
Memory system 104 may be communicatively coupled to processor 103 and may comprise any system, device, or apparatus operable to retain program instructions or data for a period of time (e.g., computer-readable media). Memory system 104 may comprise random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), a PCMCIA card, flash memory, magnetic storage, opto-magnetic storage, or any suitable selection and/or array of volatile or non-volatile memory that retains data after power to information handling system 102 is turned off. In particular embodiments, memory system 104 may comprise dynamic random access memory (DRAM).
As shown in
Each memory module 116 may include any system, device or apparatus configured to retain program instructions and/or data for a period of time (e.g., computer-readable media). A memory module 116 may comprise a dual in-line package (DIP) memory, a dual-inline memory module (DIMM), a Single In-line Pin Package (SIPP) memory, a Single Inline Memory Module (SIMM), a Ball Grid Array (BGA), or any other suitable memory module. In some embodiments, memory modules 116 may comprise double data rate (DDR) memory.
As depicted in
Each memory rank 118 within a memory module 116 may be a block or area of data created using some or all of the memory capacity of the memory module 116. In some embodiments, each rank 118 may be a rank as such term is defined by the JEDEC Standard for memory devices. As shown in
A hardware accelerator 120 may be communicatively coupled to memory controller 108 and one or more ranks 118. A hardware accelerator 120 may include any system, device, or apparatus configured to perform storage functions to assist data movement, as described in greater detail elsewhere in this disclosure. For example, a storage function may comprise calculations associated with RAID 5, RAID 6, or erasure coding; functions such as hash lookup or Data Integrity Field (DIF)/Data Integrity Extension (DIX); and/or table functions such as a redirection table. Hardware accelerator 120 may comprise an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), or other suitable processing device.
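As an illustrative sketch of one of the storage functions listed above (not part of the disclosed embodiments): the DIF guard field is a CRC computed over a data block, which standard T10 protection information defines as a CRC-16 with polynomial 0x8BB7. A minimal software model of that guard-tag computation might look like the following; a hardware accelerator would perform the equivalent calculation in silicon.

```python
def crc16_t10_dif(data: bytes) -> int:
    """CRC-16/T10-DIF guard tag: polynomial 0x8BB7, init 0x0000, no reflection."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; on carry out of bit 15, fold in the polynomial
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc
```

In a DIF/DIX arrangement, such a guard tag would be computed per data block (e.g., per 512-byte sector) and carried as the 2-byte guard field of the 8-byte protection-information tuple, allowing end-to-end integrity checking as data moves through the system.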
Storage resource 106 may be communicatively coupled to processor 103. Storage resource 106 may include any system, device, or apparatus operable to store information processed by processor 103. Storage resource 106 may include, for example, network attached storage, one or more direct access storage devices (e.g., hard disk drives), and/or one or more sequential access storage devices (e.g., tape drives). As shown in
In some embodiments, storage resource 106 may comprise a single physical storage resource (e.g., hard disk drive). In other embodiments, storage resource 106 may comprise a virtual storage resource comprising multiple physical storage resources arranged in an array (e.g., a Redundant Array of Inexpensive Disks or “RAID”) as is known in the art.
As shown in
In addition to processor 103, memory system 104, and storage resource 106, information handling system 102 may include one or more other information handling resources.
At step 202, a software RAID via operating system 114 may issue an input/output operation to storage resource 106, for which a portion of memory system 104 may serve as a cache (e.g., a write-back cache) for storage resource 106. At step 204, in connection with the input/output operation, memory controller 108 may address hardware accelerator 120 within memory system 104. At step 206, hardware accelerator 120 may perform a storage function to assist movement and computation of data in a memory module 116 of memory system 104.
Although
Method 200 may be implemented using hardware accelerator 120, and/or any other system operable to implement method 200. In certain embodiments, method 200 may be implemented partially or fully in software and/or firmware embodied in computer-readable media.
At step 302, operating system 114 may issue a write input/output operation to storage resource 106, which may implement RAID 5 and for which a portion of memory system 104 may serve as a cache (e.g., a write-back cache) for storage resource 106. At step 304, in connection with the input/output operation, memory controller 108 may communicate a cache operation to memory system 104 by addressing a memory module 116. In response, hardware accelerator 120 may perform the storage function of parity calculation to assist movement and computation of data in such memory module 116 of memory system 104. For example, at step 306, hardware accelerator 120 may copy the data of the write operation to one or more memory addresses in memory system 104. At step 308, in response to a software command or Direct Memory Access (DMA) operation, existing parity data (e.g., parity data existing prior to the write operation) may be read from storage resource 106 and written to a memory module 116. Hardware accelerator 120 may receive the existing parity data and may write it, or perform a logical exclusive OR (XOR) operation between it and the new data associated with the write operation and write the result, to a memory address in memory system 104. At step 310, in response to a software command or DMA operation, the data being overwritten by the write operation may be read from storage resource 106 and written to a memory module 116. Hardware accelerator 120 may receive the data being overwritten and may write it, or XOR it with the new data of the write operation and write the result, to a memory address in memory system 104. At step 312, hardware accelerator 120 may calculate the new parity data (e.g., the new parity data equals the logical exclusive OR of the existing parity data, the data being overwritten, and the new data written as a result of the write operation).
At step 314, in response to a software command or DMA operation, data from the write operation may be read from memory module 116 and written to storage resource 106. At step 316, in response to a software command or DMA operation, the new parity data may be read from memory module 116 and written to storage resource 106.
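The parity arithmetic of steps 308 through 312 reduces to XOR. A brief software sketch of the read-modify-write parity update (function names are illustrative, not part of the disclosure):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    # Step 312: new parity = existing parity XOR data being
    # overwritten XOR new data of the write operation
    return xor_bytes(xor_bytes(old_parity, old_data), new_data)
```

This read-modify-write form is equivalent to recomputing parity over the full stripe, but touches only the updated strip and the parity strip, which is one reason the calculation suits an in-memory hardware accelerator serving a write-back cache.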
Although
Method 300 may be implemented using hardware accelerator 120, and/or any other system operable to implement method 300. In certain embodiments, method 300 may be implemented partially or fully in software and/or firmware embodied in computer-readable media.
To perform its functionality, hardware accelerator 120 may operate in accordance with an application programming interface (API). For example, information that hardware accelerator 120 may communicate from a memory module 116 may include a memory range within volatile memory of a memory map (e.g., memory map 414), a memory map range of non-volatile memory 122, serial presence detect addressing and information, non-volatile memory 122 addressing and information, RAID levels supported (e.g., RAID 1, 5, 6, etc.), whether support is included for one-pass or multi-pass parity generation, and status flags (e.g., setting a complete status flag when parity generation is complete). As another example, information that hardware accelerator 120 may receive (e.g., from a RAID controller) may include various information regarding each respective RAID group (e.g., RAID group identity, strip size, number of physical storage resources in the RAID group, and identities of drives in the RAID group), stripe size, logical block address (LBA) range of a RAID group, RAID type (e.g., RAID 1, 5, 6, etc.), disk data format, LBA ranges of strips, identities of updated data strips and parity strips per respective physical storage resource, identities of failed physical storage resources, identities of peer physical storage resources of failed physical storage resources, and identities of target physical storage resources for rebuild operations.
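The two directions of that API exchange can be pictured as record types. The sketch below is purely illustrative: every field and type name is an assumption chosen to mirror the items listed above, not part of any actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AcceleratorCapabilities:
    # Information an accelerator might report from a memory module
    # (hypothetical field names mirroring the items listed above)
    volatile_memory_range: Tuple[int, int]     # range within the memory map
    nonvolatile_memory_range: Tuple[int, int]  # non-volatile memory range
    raid_levels_supported: List[int]           # e.g., [1, 5, 6]
    multi_pass_generation: bool                # one-pass vs. multi-pass parity
    parity_complete: bool = False              # status flag set when parity is done

@dataclass
class RaidGroupDescriptor:
    # Information an accelerator might receive from a RAID controller
    group_id: int
    raid_type: int                             # e.g., 1, 5, or 6
    strip_size: int
    stripe_size: int
    drive_ids: List[int]
    lba_range: Tuple[int, int]
    failed_drive_ids: List[int] = field(default_factory=list)
    rebuild_target_ids: List[int] = field(default_factory=list)
```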
A second embodiment may be similar to the first embodiment described above, except that hardware accelerator 120 may maintain a single scratchpad buffer, and parity creation may be performed as a background operation once data transfer from the physical storage resources is complete. In such embodiments, a status flag may be needed to indicate when the background operation is complete.
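The deferred-parity flow of this second embodiment — compute parity in the background after the transfer completes, then raise a status flag — can be modeled in software with a worker thread and an event flag. This is purely an illustrative sketch; a real accelerator would implement the equivalent behavior in silicon, and all names here are assumptions.

```python
import threading
from functools import reduce

class BackgroundParityEngine:
    """Software model of background parity creation with a completion flag."""

    def __init__(self):
        self.complete = threading.Event()  # the status flag
        self.parity = None

    def start(self, strips):
        # Data transfer is assumed already finished; parity generation
        # then proceeds as a background operation.
        self.complete.clear()
        worker = threading.Thread(target=self._compute, args=(list(strips),))
        worker.start()
        return worker

    def _compute(self, strips):
        # Parity is the bytewise XOR across all strips in the buffer
        self.parity = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), strips)
        self.complete.set()  # signal that the background operation finished
```

Software polling (or waiting on) the flag would know not to flush the parity strip to storage resource 106 until the background operation reports completion.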
A third embodiment may be similar to the first embodiment above, with the exception that some of memory modules 116 (e.g., memory modules shown in
A fourth embodiment may be similar to the third embodiment, except that hardware accelerator 120 may maintain a single scratchpad buffer, and parity creation may be performed as a background operation once data transfer from the physical storage resources is complete. In such embodiments, a status flag may be needed to indicate when the background operation is complete.
As used herein, when two or more elements are referred to as “coupled” to one another, such term indicates that such two or more elements are in electronic communication or mechanical communication, as applicable, whether connected indirectly or directly, with or without intervening elements.
This disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Moreover, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the disclosure.