A data storage system environment typically includes one or more host computing devices (“hosts”) in communication with one or more storage arrays. A host typically executes an application program (e.g., a database application) which requires data associated with the application to be stored locally (i.e., on the host), remotely (i.e., on one of the storage arrays), or stored both locally and remotely. The host typically includes memory devices that provide both volatile random access memory capacity (e.g., Dynamic RAM or DRAM devices) and non-volatile random access memory capacity (e.g., flash memory devices). The storage array typically includes storage devices that provide non-volatile random access storage capacity (e.g., flash memory devices) and non-volatile large storage capacity (e.g., hard disk drives (HDDs) and tape drives). In general, random access memory is used to satisfy high throughput and/or bandwidth requirements of a given application program while the hard disk and tape drives are used to satisfy capacity requirements.
In a data storage environment, the ability to define multiple, independent memory tiers is desirable. A memory tier is typically constructed by memory mapping a region of a storage class memory (SCM) device (e.g., a flash memory) or a region of an array storage device into the process's virtual memory address space. The memory-mapped regions may be fronted by a DRAM page cache to which the application issues loads and stores. Memory tiering applications move data between the SCM (or array device) and the DRAM page cache on an on-demand page basis.
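By way of a concrete, hedged illustration, the following C sketch memory-maps a region of an SCM device into a process's virtual address space using the standard POSIX mmap() call; the device path, region offset, and region size are assumptions chosen only for illustration and do not correspond to any particular product.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical SCM device and region geometry (illustrative only). */
    const char  *scm_path      = "/dev/scm0";
    const off_t  region_offset = 0;              /* byte offset of the region */
    const size_t region_size   = 1UL << 30;      /* 1 GiB memory tier         */

    int fd = open(scm_path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Map the SCM region into the process's virtual address space.       */
    /* Loads and stores to 'tier' are then serviced through a DRAM page   */
    /* cache on an on-demand page basis.                                   */
    unsigned char *tier = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, region_offset);
    if (tier == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return EXIT_FAILURE;
    }

    tier[0] = 0x42;                 /* ordinary store into the memory tier */

    munmap(tier, region_size);
    close(fd);
    return EXIT_SUCCESS;
}
```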
In one aspect of the invention, a method comprises: providing region zero-fill on demand for tiered memory including a first region in a first memory tier having a page cache in physical memory, where virtual memory includes a mmap of the first region; and controlling an input between zeroes and the first region to the page cache.
The method can further include one or more of the following features: controlling a multiplexer having an output coupled to the page cache and input coupled to the first region and a zero fill module, controlling a per region attribute to selectively suppress data transfer from the storage to the page cache, the physical memory comprises SCM storage, and/or the page cache comprises DRAM memory.
In another aspect of the invention, an article comprises: a non-volatile computer-readable storage medium having stored instructions that enable a machine to: provide region zero-fill on demand for tiered memory including a first region in a first memory tier having a page cache in physical memory, where virtual memory includes a mmap of the first region; and control an input between zeroes and the first region to the page cache.
The article can further include one or more of the following features: controlling a multiplexer having an output coupled to the page cache and input coupled to the first region and a zero fill module, controlling a per region attribute to selectively suppress data transfer from the storage to the page cache, the physical memory comprises SCM storage, and/or the page cache comprises DRAM memory.
In a further aspect of the invention, a system comprises: a memory; and a processor coupled to the memory, the processor and the memory configured to: provide region zero-fill on demand for tiered memory including a first region in a first memory tier having a page cache in physical memory, where virtual memory includes a mmap of the first region; and control an input between zeroes and the first region to the page cache.
The system can be further configured to include one or more of the following features: controlling a multiplexer having an output coupled to the page cache and input coupled to the first region and a zero fill module, controlling a per region attribute to selectively suppress data transfer from the storage to the page cache, the physical memory comprises SCM storage, and/or the page cache comprises DRAM memory.
The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following description of the drawings in which:
Embodiments of the present invention will be described herein with reference to illustrative computing systems, data memory and storage systems, and associated servers, computers, memory devices, storage devices and other processing devices. It is to be appreciated, however, that embodiments of the invention are not restricted to use with the particular illustrative system and device configurations shown.
Before describing embodiments of the concepts, structures, and techniques sought to be protected herein, some terms are explained. The phrases “computer,” “computing system,” “computing environment,” “processing platform,” “data memory and storage system,” and “data memory and storage system environment” as used herein with respect to various embodiments are intended to be broadly construed, so as to encompass, for example, private or public cloud computing or storage systems, or parts thereof, as well as other types of systems comprising distributed virtual infrastructure and those not comprising virtual infrastructure.
The terms “application,” “program,” “application program,” and “computer application program” herein refer to any type of software application, including desktop applications, server applications, database applications, and mobile applications. The terms “application process” and “process” refer to an instance of an application that is being executed within a computing environment. As used herein, the term “object” refers to a logical grouping of data within an application, including primitive data types (e.g., integers and characters), arrays, trees, structures, unions, hashes, etc. The term “object reference” herein refers to any type of reference to an object, such as a pointer.
The term “source code” refers to computer instructions in a high-level programming language, such as C, C++, JAVA, Ruby, Python, etc. The term “machine code” refers to: (1) a set of instructions executed directly by a computer's processor or a virtual machine processor; and (2) such instructions expressed in the assembly language. The term “compiler directive” is used herein to refer to reserved syntax which controls the operation of a compiler and which is separate from normal computer instructions. Non-limiting examples of compiler directives include pre-processor directives and storage classes.
The term “memory” herein refers to any type of computer memory accessed by an application using memory access programming semantics, including, by way of example, dynamic random access memory (DRAM) and memory-mapped files. Typically, reads or writes to underlying devices are done by an operating system (OS), not the application. As used herein, the term “storage” refers to any resource that is accessed by the application via input/output (I/O) device semantics such as read and write system calls. In certain instances, the same physical hardware device could be accessed by the application as either memory or as storage.
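To make the distinction concrete, the brief C sketch below accesses the same backing file first with storage semantics (read/write system calls) and then with memory semantics (loads and stores against a memory-mapped view); the file path is an assumption for illustration only.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/example.dat", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    if (ftruncate(fd, 4096) != 0)      /* ensure the file is one page long */
        return 1;

    /* Storage semantics: explicit I/O system calls. */
    const char msg[] = "hello";
    if (pwrite(fd, msg, sizeof msg, 0) < 0)
        return 1;

    /* Memory semantics: map the same file and access it with loads/stores. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    char c = p[0];                     /* load  */
    p[1] = 'E';                        /* store */
    (void)c;

    munmap(p, 4096);
    close(fd);
    return 0;
}
```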
As used herein, the term “tiering” refers to the placement of information on an infrastructure resource commensurate with implementation of a defined policy. Such policies can take into account a variety of factors including, but not limited to: information utilization and usage statistics (e.g., I/O reads, writes, memory access); customer information values associated with levels of service (e.g., gold, silver, bronze, production, test, sandbox, archive); and any other custom tiering stratification criteria.
The application hosts 102 are configured to execute applications, such as database applications. An application host 102 is typically a server (e.g., a Windows server, a Sun Solaris server, an HP server, a Linux server, etc.) upon which the application executes. A storage array 106, which may be a storage area network (SAN) array, comprises one or more storage products such as, by way of example, VNX and Symmetrix VMAX, both commercially available from EMC Corporation of Hopkinton, Mass. A variety of other storage products may be utilized to implement at least a portion of a storage array.
In general operation, an application host executes the application using local memory resources and issues read and write requests (“commands”) to a storage array 106. The storage array 106 is configured with storage resources used to store backend data files. The storage array 106 processes read and write commands received from the application host and, in the case of a read request, sends data stored thereon back to the requesting host.
In one aspect, the illustrative environment 100 provides a memory and storage tier architecture (or “structure”). The tier structure comprises one or more tiers resident on an application host 102 and one or more tiers resident on a storage array 106. As discussed further below, applications residing on the application hosts 102 determine (either automatically or in response to user input) on which of the various tiers to store data associated with the execution of an application.
The SCM 214 tier comprises one or more SCM devices. Non-limiting examples of SCM devices include NAND flash, solid state drives (SSDs), next generation non-volatile memory (NVM) drives/cards/dual in-line memory modules (DIMMs), NAND RAM, phase change memory (PCM) RAM, and spin torque (ST) RAM. In embodiments, an SCM device is connected via a PCI-E bus.
In one aspect, the tier structure 200 provides a memory tiering layer 228 (via memory tiers 212 and 214), a cross-domain tiering layer 230 (via SCM I/O accessible tiers 216 and 222), and a legacy storage tiering layer 232 (via storage tiers 224 and 226). Thus, an application can make data placement selections end-to-end (i.e., across the memory tiering layer, the cross-domain tiering layer, and the legacy storage tiering layer) or within a single tiering layer.
In embodiments, the SCM 314 is exposed to an application as an “extended” tier of memory available for use that has performance characteristics that are different from the DRAM. Such performance characteristics can be taken into account when deciding what data to place into extended memory tiers. For example, some characteristics of extended memory tiers include, but are not limited to: SCM is directly accessible as memory; SCM significantly increases available capacity for all three memory allocation components, i.e., dynamically allocated memory (malloc), memory-mapped (mmap) files, and disk-backed memory; a page fault handler can still move (4 KB) pages in and out from storage for memory-mapped file pages; and a FileIO stack reads in pages for disk-backed memory, with writes being accomplished either synchronously or asynchronously.
The illustrative virtual memory address space 300 may correspond to the address space of a process executing on an application host (e.g., a host 102 in
By way of example, a storage tiering policy can be specified in a number of ways, such as by an application partner via the specified on-boarding mechanism, which may be advantageous when the control point for information management is at the memory level. Also, a storage tiering policy can be specified by a storage administrator using legacy storage tiering policy and management controls.
Before describing the above-mentioned on-boarding mechanisms, we describe below illustrative use cases (i.e., I/O data model and fully in-memory data model) for the data memory and storage tiering embodiments of the invention.
A conventional approach to database applications is to utilize a buffered I/O data model. This is the model employed by many commercial database products. The buffered I/O data model assumes that memory, even virtual memory, is a limited resource relative to the amount of data in the working set of the database. In order to accommodate this limitation with a minimum impact to database workload performance, the most recently used database data (in chunks referred to here as pages) is retained and/or prefetched into a cache buffer (412) in the database host(s) virtual memory so that it can be utilized without expensive storage I/O access time. It is to be appreciated that there are many straightforward permutations to this basic buffered I/O data model approach.
It is quite advantageous for systems built on this buffered I/O model to have their entire working sets contained within those buffers. From a response time perspective, I/O is the most expensive operation, so the more that it can be avoided by extensive usage of buffer caches the better. Design inefficiencies exist in some database products where very large caches cannot be effectively utilized, but as a general principal, I/O avoidance is desirable.
Embodiments allow applications that employ a buffered I/O model to design for tiered buffer caches for expanding the amount of working set kept in buffer caches. Rather than simply utilizing a single large cache of database pages on the assumption that the access time to all data within a cache will be the same, an architecture is implemented such that rarely accessed, or purposely archived, data can be put into a cache with slower access times. This cache (414) is backed by storage class memory with memory mapped files, rather than by traditional DRAM/virtual memory constructs (in 412). This memory tiering capability is particularly useful for applications which are tethered to the buffered I/O architecture but benefit from having immediate cached access to data which has aged out and will only be accessed on important boundary conditions, such as monthly batch cycle processing, or which has been purposefully archived. Previously, such data could only be accessed via a completely I/O-driven paradigm.
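The sketch below illustrates, in simplified form, the lookup order such a tiered buffer cache implies: the primary DRAM-backed cache (412) is consulted first, then the slower SCM-backed cache (414), and only a miss in both results in storage I/O. The helper functions and names are hypothetical and are not drawn from any particular database product.

```c
#include <stddef.h>

typedef struct page page_t;

/* Hypothetical lookup helpers for each tier (illustrative only). */
page_t *dram_cache_lookup(unsigned long page_no);   /* cache 412: DRAM            */
page_t *scm_cache_lookup(unsigned long page_no);    /* cache 414: SCM-backed mmap */
page_t *storage_read_page(unsigned long page_no);   /* traditional I/O path       */
void    dram_cache_insert(unsigned long page_no, page_t *pg);

/* Fetch a database page, preferring the fastest tier that holds it. */
page_t *get_page(unsigned long page_no)
{
    page_t *pg = dram_cache_lookup(page_no);         /* fastest: DRAM           */
    if (pg)
        return pg;

    pg = scm_cache_lookup(page_no);                   /* slower, but avoids I/O  */
    if (pg)
        return pg;

    pg = storage_read_page(page_no);                  /* most expensive: I/O     */
    dram_cache_insert(page_no, pg);                   /* promote into DRAM cache */
    return pg;
}
```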
One limitation of the buffered I/O cache model and the full in-memory model, which will be discussed below, is persistence across a reboot of the application host(s). DRAM is volatile, meaning that any data that is placed into it is lost upon power failure or other restart events. Repopulation of the application cache is a major operational concern for all application vendors that rely on large DRAM-based caches. This is typically accounted for with a variety of business continuity techniques, from page-level cache mirroring through transaction replication redundancy across hosts. Embodiments can improve on these capabilities by offering the ability to have non-volatile caches. Storage class memory is a non-volatile resource, and when it is used to construct and utilize memory-mapped files to be consumed as an application cache resource, the system now has the ability to have a non-volatile application cache that does not require expensive repopulation after a host outage. The customer may additionally be able to avoid complex and expensive replication schemes that are intended merely as a way to avoid extensive outage time after a failure.
This capability is useful when tiered memory and memory caches are used to house archived, aged-out, or rarely used data. A customer is not going to want to hold up the restart of production operations waiting for the repopulation of second tier information. It should be noted that this model is not the same as a statistically determined spillover tier (which will be discussed below in the context of
As mentioned above, there is a spillover use case with respect to the buffered I/O data model. Based on some page aging criteria, the system can evict pages from the primary DRAM-backed page cache (512) and place the pages into an SCM flash-based cache (514) instead. Thus, embodiments enable a spillover capability that allows an application developer/vendor to take advantage of a persisted, replicated spillover that can be further tiered down into the traditional SAN storage layer (522, 524, and 526).
It is realized that the buffered I/O data model may have disadvantages for scalability. For example, there is an assumption that I/O is the primary mechanism for retrieval of information. As such, systems that utilize this approach may be inefficient for processing data in a full in-memory model, even when all data happens to be in the cache buffers and no actual I/O is required.
For this reason, a class of fully in-memory database products has been developed in the market. A fully in-memory database is one in which, during query operations, there is no facility to fetch data from external storage resources via an I/O mechanism. Data resides in virtual (or physical) memory, and is accessed with a memory semantic, or an error is produced by the database engine. These fully in-memory database engines today may still issue I/O in order to persist transaction changes to durable media. In essence, there is still both a memory model and an I/O model being employed, however, it is expected that the user is never waiting on application-generated I/O for the return of business critical information.
One of the design problems with these fully in-memory systems is scalability. Since the amount of memory that can be resident on a single host has a defined limit, these systems will either live within those limitations, or scale out by adding additional nodes in a clustered configuration, which can increase cost. When adding additional nodes, the customer is adding more than just memory. They are adding other expensive components within the cluster node as well, even though they may not have had a need for that resource or the power and space it consumes. Such systems may not be able to map the value of information to the cost of the resources they consume. For example, information resides in DRAM memory, which is a relatively expensive resource. Even when information is not used, or in an archive-worthy status, it must still reside in memory if it is to be utilized within the database model. Another issue is operational complexity. Adding nodes to existing production systems, and increasing the communication flow between nodes, simply to add data capacity, may be inefficient.
Embodiments allow database systems characterized as in-memory to utilize tiers of memory, storage class memory (SCM) accessed as memory mapped files, storage class memory (SCM) accessed as local storage, and the full palette of tiered SAN storage as part of their data access model, see, e.g., the tiering architectures of
The approach to tiering at the storage layer can be characterized by: the creation of tiering policies by a storage administrator; and the automated movement of data between storage tiers based upon data collection and intelligent statistical processing applied in adherence to the desired policies. However, the approach to tiering at the host application/memory layer is somewhat different. At this layer of the application stack, the application administrator may determine the tiering policies. These may be fundamentally different from the goals, methods and implementation of a storage tiering policy.
For instance, a storage administrator may specify gold, silver, and bronze tiers of storage that correspond to I/O performance and replication attributes. The tiering of memory may more appropriately be set by an application administrator to indicate data attributes such as archive versus active, or other application-aware boundary conditions. More specifically, the statistics about how data is used in memory within the application context are not directly available to the infrastructure provider. For these reasons, the decisions for placement of information into specific memory tiers lie with application providers.
In step 602, the application vendor exposes memory tiering policies to customers based on their needs and design. By the term “expose” as used herein, it is meant to enable or make available for selection. In step 604, the system exposes available choices and options for memory mapped files via one or more SDK-enabled interfaces (626). In step 606, the system exposes available optional choices for further tiering into the local and SAN storage layers. In step 608, the system exposes available optional choices for replication of persisted memory resources. In step 610, selections are externalized and persisted in a configuration area. In one embodiment, the configuration area is a persistent data store output from module 620, and is created by the application vendor. In step 612, the application vendor issues an SDK-enabled interface that performs mmap( ) functions for selected memory mapped files (632). In step 614, the application uses these memory mapped regions. In step 616, based upon options selected, replication of memory mapped files (632), VFCache (630), and the FAST (640) subsystem are engaged when data is persisted.
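As a hedged sketch of the kind of SDK-enabled interface contemplated in step 612, the helper below maps a selected file into the caller's address space with a standard POSIX mmap() call; the function name and signature are illustrative assumptions rather than a documented API.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Hypothetical SDK-style helper corresponding to step 612: map a file
 * selected during on-boarding into the caller's address space so the
 * application can use the region directly (step 614).
 */
void *tier_map_file(const char *path, size_t *len_out)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return NULL;
    }

    void *addr = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd);                      /* the mapping remains valid after close */
    if (addr == MAP_FAILED)
        return NULL;

    if (len_out)
        *len_out = (size_t)st.st_size;
    return addr;
}
```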
Embodiments also provide for the creation of other SDK tools that make it easier for application developers/vendors to utilize SCM-backed memory. For instance, SDK tools are configured to safely ease the movement of information from traditional DRAM memory locations to SCM-backed memory locations.
We now turn to a description of persisting anonymous memory to multiple tiers of storage.
As shown, in order to persist anonymous memory to permanent storage, the following operating system processes are employed on the memory mapped files that are in use.
Step 702: malloc( )—this step allocates a number of bytes and returns a pointer to the allocated memory. This step allocates a new amount of memory in the memory mapped file.
Step 704: store—load and store operations refer to how data in memory is made accessible for processing by a central processing unit (CPU). The store operation copies the values from a CPU's store data pipeline (or analogous architecture) to memory. Each CPU architecture has a different memory subsystem. The store operation is specified here to indicate this as part of the path for persisting anonymous memory out to the array.
Step 706: Page buffer evict—this step evicts a buffer of data out of virtual memory.
Step 708: Page cache evict—this step then evicts a page out of the physical page cache.
Step 710: msync and/or checkpoint—this step flushes all changes made to the memory mapped files (632) out to the VFCache (634) and thus to the underlying array (636).
Once employed, the above methodology ensures that changes made in memory to these structures will be persisted to the local storage under control of the host operating system. Furthermore, if the files are being serviced by a VFCache filter driver configuration, this data can/will also be persisted to other external arrays and device types, for example, pursuant to the designs of VFCache and FAST policies, as well as direct attached storage (DAS) policies.
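A minimal sketch of this persistence path on a POSIX system is shown below, assuming the anonymous memory is backed by a memory mapped file; the eviction steps (706 and 708) are performed by the operating system under memory pressure and therefore do not appear as explicit calls.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 1 << 20;                     /* 1 MiB region */

    int fd = open("/tmp/persisted_heap.dat", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, (off_t)len) != 0)
        return 1;

    /* Allocation backed by a memory mapped file (analogous to step 702). */
    char *heap = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (heap == MAP_FAILED)
        return 1;

    /* Store (step 704): ordinary CPU stores into the mapped region.      */
    memcpy(heap, "transaction record", 19);

    /* Page buffer and page cache eviction (steps 706 and 708) are        */
    /* handled by the operating system as memory pressure dictates.       */

    /* msync/checkpoint (step 710): flush dirty pages to the backing file */
    /* and thus toward the underlying cache and array.                    */
    if (msync(heap, len, MS_SYNC) != 0)
        return 1;

    munmap(heap, len);
    close(fd);
    return 0;
}
```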
We now turn to a description of remote replication/restore of memory images.
Since memory images are persisted as described above, they are eligible to be replicated by a replication product such as, by way of example only, Recoverpoint or Symmetrix Remote Data Facility, both commercially available from EMC Corporation of Hopkinton, Mass. As further described above, during the on-boarding process, replication options for persisted memory images can be chosen. The implementation of these options is transparent to the user via a programmatic interface with the underlying replication technology (Recoverpoint, SRDF, etc.). Recovery options can be exposed through an on-boarding interface, through existing product recovery interfaces, or both.
As shown, a memory image is replicated from host/array set A to host/array set B (or vice versa). It is assumed that a given memory image is persisted to permanent storage using steps similar to those described above in
It is to be appreciated that the various components and steps illustrated and described in
As shown, the cloud infrastructure 900 comprises virtual machines (VMs) 902-1, 902-2, . . . , 902-M implemented using a hypervisor 904. The hypervisor 904 runs on physical infrastructure 905. The cloud infrastructure 900 further comprises sets of applications 910-1, 910-2, . . . , 910-M running on respective ones of the virtual machines 902-1, 902-2, . . . , 902-M (utilizing associated logical storage units or LUNs) under the control of the hypervisor 904.
As used herein, the term “cloud” refers to a collective computing infrastructure that implements a cloud computing paradigm. For example, as per the National Institute of Standards and Technology (NIST Special Publication No. 800-145), cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Although only a single hypervisor 904 is shown in the example of
As is known, virtual machines are logical processing elements that may be instantiated on one or more physical processing elements (e.g., servers, computers, or other processing devices). That is, a “virtual machine” generally refers to a software implementation of a machine (i.e., a computer) that executes programs in a manner similar to that of a physical machine. Thus, different virtual machines can run different operating systems and multiple applications on the same physical computer. Virtualization is implemented by the hypervisor 904 which, as shown in
An example of a commercially available hypervisor platform that may be used to implement portions of the cloud infrastructure 900 in one or more embodiments of the invention is vSphere which may have an associated virtual infrastructure management system such as vCenter, both commercially available from VMware Inc. of Palo Alto, Calif. The underlying physical infrastructure 905 may comprise one or more distributed processing platforms that include storage products such as VNX and Symmetrix VMAX, both commercially available from EMC Corporation of Hopkinton, Mass. A variety of other storage products may be utilized to implement at least a portion of the cloud infrastructure 900.
An example of a processing platform on which the cloud infrastructure 900 may be implemented is processing platform 1000 shown in
The processing device 1002-1 in the processing platform 1000 comprises a processor 1010 coupled to a memory 1012. The processor 1010 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. The memory 1012, specifically shown in
Also included in the processing device 1002-1 is network interface circuitry 1014, which is used to interface the processing device with the network 1006 and other system components. Such circuitry may comprise conventional transceivers of a type well known in the art. The other processing devices 1002 of the processing platform 1000 are assumed to be configured in a manner similar to that shown for processing device 1002-1 in the figure.
The processing platform 1000 shown in
Also, numerous other arrangements of servers, computers, storage devices or other components are possible for implementing components shown and described in
In a data storage environment comprised of combined memory tiering and storage tiering, the ability to create multiple, independent memory tiers is desirable. In one embodiment, the memory tiers MT1-m are constructed by memory mapping a region of a storage class memory (SCM) device or a region of an array storage device into virtual address space for a process.
Each memory mapped region in tiered memory 1104 is fronted by a DRAM page cache 1102 to which an application issues loads and stores. The memory management mechanism moves data between the SCM or array device and the DRAM page cache on an on-demand page basis.
In general, there can be multiple memory tiers and multiple DRAM page caches. Each memory tier may have its own DRAM page cache or may be arranged in groups where each group of memory tiers has its own DRAM page cache. The DRAM page caches are sized and managed independently from each other allowing for differing quality of services and isolation between multiple tenants.
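One possible way to express this independence is sketched below as a pair of C descriptors, one per page cache and one per memory tier; the field names and layout are assumptions for illustration and not an actual implementation.

```c
#include <stddef.h>

/* Hypothetical per-tier page cache descriptor (illustrative only).       */
/* Each memory tier, or group of tiers, carries its own independently     */
/* sized DRAM page cache so tenants can be isolated from one another.     */
struct page_cache {
    size_t capacity_pages;      /* DRAM pages reserved for this cache     */
    size_t page_size;           /* e.g., 4096 bytes                       */
    int    tenant_id;           /* owner, for quality-of-service limits   */
};

struct memory_tier {
    const char        *backing_path;   /* SCM device or array region      */
    size_t             region_size;    /* bytes mapped into the process   */
    struct page_cache *cache;          /* this tier's (or group's) cache  */
};
```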
In another aspect of the invention, an application manages a cache replacement policy by setting a color hint on a page or range of pages. A physically indexed CPU cache operates so that addresses in adjacent physical memory blocks occupy different positions, or cache lines, in the cache. For virtual memory, when memory blocks that are adjacent virtually but not physically are allocated, they can map to the same position in the cache. So-called cache coloring is a memory management technique that selects pages so that there is no contention with neighboring pages.
Free pages that are contiguous from a CPU perspective are allocated in order to maximize the total number of pages cached by the processor. When allocating sequential pages in virtual memory for processes, the kernel collects pages with different “colors” and maps them to the virtual memory.
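The arithmetic behind page coloring is simple and can be sketched as follows; the cache geometry (size, associativity, page size) is assumed purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Classic page-coloring arithmetic (illustrative values assumed): the    */
/* number of colors is the number of page-sized groups of cache sets, and */
/* a frame's color is its page frame number modulo that count.            */
#define PAGE_SIZE      4096u
#define CACHE_SIZE     (2u * 1024 * 1024)   /* assumed 2 MiB last-level cache */
#define ASSOCIATIVITY  16u

static unsigned page_color(uint64_t phys_addr)
{
    unsigned num_colors = CACHE_SIZE / (ASSOCIATIVITY * PAGE_SIZE);
    uint64_t pfn = phys_addr / PAGE_SIZE;       /* page frame number */
    return (unsigned)(pfn % num_colors);
}

int main(void)
{
    /* Physically adjacent pages fall into different colors, so a kernel  */
    /* that spreads allocations across colors avoids cache contention.    */
    for (uint64_t addr = 0; addr < 8 * PAGE_SIZE; addr += PAGE_SIZE)
        printf("frame at %#llx -> color %u\n",
               (unsigned long long)addr, page_color(addr));
    return 0;
}
```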
A cache replacement policy refers to the way existing cache entries are replaced with new entries. For a cache miss, the cache may have to evict one of the existing entries. It is desired to evict the existing cache entry that is least likely to be used in the future. For example, one known replacement policy is the least-recently used (LRU) policy that replaces the least recently accessed entry.
It is understood that a ‘hotter’ page is defined as a page which is either accessed more frequently than another (‘colder’) page, or one whose data is important enough that the access cost should be minimized in terms of latency and/or throughput regardless of access frequency.
The memory management mechanism moves data between the SCM or array device and the DRAM page cache 1204 on an on-demand page basis. The application can specify the importance of ranges of addresses within the memory tier MT1 by communicating a page color. The various page color values are in turn used to modify the eviction mechanism managing the page cache 1204 replacement policy 1208. For example, pages colored with an ‘important’ color would be moved earlier in an LRU list, whereas pages colored with a less ‘important’ color value would be moved further down in the LRU list. This allows the application to describe ranges that need not be kept in cache while favoring ranges that should be kept in cache.
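A minimal sketch of a color-aware insertion policy for an LRU list is shown below; the structures and threshold are hypothetical and serve only to illustrate how a color hint can bias eviction order.

```c
#include <stddef.h>

/* Hypothetical page descriptor carrying an application-supplied color    */
/* hint; higher values mean "more important, keep cached longer".         */
struct cached_page {
    struct cached_page *prev, *next;
    unsigned long       page_no;
    int                 color;          /* importance hint from the app   */
};

/* LRU list: head is most-recently-used, tail is the next eviction victim. */
struct lru_list {
    struct cached_page *head, *tail;
};

/* Insert according to the color hint: important pages go to the MRU      */
/* (head) end, unimportant pages go to the LRU (tail) end so they are     */
/* evicted first.  Illustrative sketch only.                              */
void lru_insert_colored(struct lru_list *l, struct cached_page *pg,
                        int important_threshold)
{
    pg->prev = pg->next = NULL;
    if (!l->head) {                            /* empty list */
        l->head = l->tail = pg;
        return;
    }
    if (pg->color >= important_threshold) {    /* favored: insert at head    */
        pg->next = l->head;
        l->head->prev = pg;
        l->head = pg;
    } else {                                   /* disfavored: insert at tail */
        pg->prev = l->tail;
        l->tail->next = pg;
        l->tail = pg;
    }
}
```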
The region utility provides the ability to manage regions on the block device 1300. It is understood that a region 1302 is a block-aligned region similar to a disk partition. In illustrative embodiments, there is support for an unlimited number of regions on a single block device.
Individual regions 1302 can be block aligned and granular to a unique block size if desired. In one embodiment, the minimum block size is 4K and there is no maximum limit. Each region 1302 is named with a NULL terminated string so retrieval and management of regions is name based. If a region table becomes too fragmented, regions can be moved automatically (defragmentation) on the block device to create more free linear space. A user region is a linear block of storage on the block device 1300. A user region can become a separate block device or be used directly by applications.
It is understood that regions 1302 are similar to disk partitions. A minimum block size of 4096 corresponds to the typical native block size for non-volatile flash storage devices. Larger block sizes can be used if desired. Regions can be created, deleted, and defragmented dynamically using the region utility.
As shown in
The block region structure defines the region. For example, a region has an offset in bytes from the beginning of the physical device, a size in bytes of the region, a block size for the region, a null terminated region name, region flags, and a region state. An illustrative code listing below shows one embodiment of a block region structure.
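Since the original listing is not reproduced here, the following C structure is a reconstructed sketch based on the fields described above; the field names, widths, and maximum name length are assumptions.

```c
#include <stdint.h>

#define REGION_NAME_MAX 64   /* assumed maximum for the NULL-terminated name */

/* Sketch of a block region descriptor with the fields described above.   */
/* Field names and sizes are illustrative assumptions, not the actual     */
/* on-device layout.                                                      */
struct block_region {
    uint64_t offset;                    /* bytes from start of the device */
    uint64_t size;                      /* region size in bytes           */
    uint32_t block_size;                /* e.g., 4096 minimum             */
    char     name[REGION_NAME_MAX];     /* NULL-terminated region name    */
    uint32_t flags;                     /* region flags                   */
    uint32_t state;                     /* region state                   */
};
```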
In one embodiment, each region has a state. Illustrative states are described below.
In embodiments, each region has a set of flags. Illustrative flags are described below.
In an illustrative embodiment, a per-region attribute is provided which, when set, suppresses data transfer from the SCM or array storage 1400 on the first access to the page cache 1402 within the memory mapped region R1 and provides a zeroed page instead. This eliminates an additional data transfer.
When an access modifies a portion of a memory-mapped page, the page is first paged into the page cache. In the event the modifying access is the first access ever to touch a particular page, there is no need to page in the uninitialized page from the region; a page of zeros can be provided instead.
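The following sketch shows one way a fault handler could honor such a zero-fill-on-demand attribute; the helper functions and the mechanism for tracking first-touch pages are hypothetical.

```c
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* Hypothetical helpers standing in for the page cache and the backing    */
/* SCM/array region; names and signatures are illustrative only.          */
void *page_cache_alloc(void);                               /* free DRAM page    */
void  region_read_page(unsigned long page_no, void *dst);   /* page-in from SCM  */
bool  page_ever_written(unsigned long page_no);             /* tracked per region */

/* First-touch fault path for a region with the zero-fill attribute set:  */
/* a never-written page is satisfied with zeroes, skipping the transfer   */
/* from SCM or array storage entirely.                                    */
void *fault_in_page(unsigned long page_no, bool region_zero_fill)
{
    void *page = page_cache_alloc();

    if (region_zero_fill && !page_ever_written(page_no)) {
        memset(page, 0, PAGE_SIZE);        /* provide a zeroed page       */
    } else {
        region_read_page(page_no, page);   /* normal page-in from region  */
    }
    return page;
}
```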
Processing may be implemented in hardware, software, or a combination of the two. Processing may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer.
Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
Having described exemplary embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may also be used. The embodiments contained herein should not be limited to disclosed embodiments but rather should be limited only by the spirit and scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
This application claims the benefit of U.S. Patent Application No. 62/004,163, filed on May 28, 2014, which is incorporated herein by reference.