This invention relates to computer networks and, more particularly, to efficiently storing data in a storage system.
As computer memory storage and data bandwidth increase, so do the amount and complexity of data that businesses manage daily. A distributed storage system may be coupled to client computers interconnected by one or more networks. If any portion of the distributed storage system has poor performance, company operations may be impaired. A distributed storage system therefore maintains high standards for data availability and high-performance functionality.
The distributed storage system comprises physical volumes, which may be solid-state devices or partitions of a storage device. Software applications, such as a logical volume manager or a disk array manager, provide a means of allocating space on mass-storage arrays. In addition, this software allows a system administrator to create units of storage groups including logical volumes. Storage virtualization provides an abstraction (separation) of logical storage from physical storage in order to access logical storage without end-users identifying physical storage.
To support storage virtualization, a volume manager performs input/output (I/O) redirection by translating incoming I/O requests using logical addresses from end-users into new requests using addresses associated with physical locations in the storage devices. As some storage devices may include additional address translation mechanisms, such as address translation layers that may be used in solid-state storage devices, the translation from a logical address to another address mentioned above may not represent the only or final address translation. Redirection utilizes metadata stored in one or more mapping tables. In addition, information stored in one or more mapping tables may be used for storage deduplication and mapping virtual sectors at a specific snapshot level to physical locations. As the amount of data to maintain in a storage system grows, the cost of storing the data likewise grows.
In view of the above, systems and methods for efficiently storing data in a storage system are desired.
Various embodiments of a computer system and methods for efficiently storing data in a storage system are contemplated.
In various embodiments, a data storage subsystem coupled to a network receives read and write requests on the network from a client computer. The data storage subsystem includes multiple data storage locations on multiple storage devices. The data storage subsystem also includes at least one mapping table. The mapping table includes a plurality of entries, with each of the entries including a tuple with a key. The entry may also include a pointer to a physical location within the multiple storage devices.
A data storage controller determines whether data to store in the storage subsystem has a repeating pattern. In some embodiments, repeating patterns are intermingled with non-pattern data. Rather than store the repeating pattern on the storage devices, the controller stores information in a header on the storage devices. The information provides an identification of the pattern and its location(s). In various embodiments, the information includes at least an offset for the first instance of the repeating pattern, a pattern length, an identification of the pattern, and the locations of the pattern data with respect to the intermingled non-pattern data. In this manner, multiple instances of the pattern need not be stored. Reads of the data result in reconstruction of the data from the information stored in the header.
These and other embodiments will become apparent upon consideration of the following description and accompanying drawings.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having ordinary skill in the art should recognize that the invention may be practiced without these specific details. In some instances, well-known circuits, structures, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the present invention.
Referring to
It is noted that in alternative embodiments, the number and type of client computers and servers, switches, networks, data storage arrays, and data storage devices is not limited to those shown in
In the network architecture 100, each of the data storage arrays 120a-120b may be used for the sharing of data among different servers and computers, such as client computer systems 110a-110c. In addition, the data storage arrays 120a-120b may be used for disk mirroring, backup and restore, archival and retrieval of archived data, and data migration from one storage device to another. In an alternate embodiment, one or more client computer systems 110a-110c may be linked to one another through fast local area networks (LANs) in order to form a cluster. Such clients may share a storage resource, such as a cluster shared volume residing within one of data storage arrays 120a-120b.
Each of the data storage arrays 120a-120b includes a storage subsystem 170 for data storage. Storage subsystem 170 may comprise a plurality of storage devices 176a-176m. These storage devices 176a-176m may provide data storage services to client computer systems 110a-110c. Each of the storage devices 176a-176m uses a particular technology and mechanism for performing data storage. The type of technology and mechanism used within each of the storage devices 176a-176m may at least in part be used to determine the algorithms used for controlling and scheduling read and write operations to and from each of the storage devices 176a-176m. For example, the algorithms may locate particular physical locations corresponding to the operations. In addition, the algorithms may perform input/output (I/O) redirection for the operations, removal of duplicate data in the storage subsystem 170, and support one or more mapping tables used for address redirection and deduplication.
The logic used in the above algorithms may be included in one or more of a base operating system (OS) 132, a volume manager 134, within a storage subsystem controller 174, control logic within each of the storage devices 176a-176m, or otherwise. Additionally, the logic, algorithms, and control mechanisms described herein may comprise hardware and/or software.
Each of the storage devices 176a-176m may be configured to receive read and write requests and comprise a plurality of data storage locations, each data storage location being addressable as rows and columns in an array. In one embodiment, the data storage locations within the storage devices 176a-176m may be arranged into logical, redundant storage containers or RAID arrays (redundant arrays of inexpensive/independent disks).
In some embodiments, each of the storage devices 176a-176m may include or be further coupled to storage consisting of solid-state memory to store persistent data. In one embodiment, the included solid-state memory comprises solid-state drive (SSD) technology. A Solid-State Disk (SSD) may also be referred to as a Solid-State Drive.
Storage array efficiency may be improved by creating a storage virtualization layer between user storage and physical locations within storage devices 176a-176m. In one embodiment, a virtual layer of a volume manager is placed in a device-driver stack of an operating system (OS), rather than within storage devices or in a network. A volume manager or a disk array manager is used to support device groups 173a-173m.
In one embodiment, one or more mapping tables may be stored in the storage devices 176a-176m, rather than in memory such as RAM 172, memory medium 130, or a cache within processor 122. The storage devices 176a-176m may be SSDs utilizing Flash memory. The low read-access and latency times of SSDs may allow a small number of dependent read operations to occur while servicing a storage access request from a client computer. The dependent read operations may be used to access one or more indexes, one or more mapping tables, and user data during the servicing of the storage access request.
The information within a mapping table may be compressed. A particular compression algorithm may be chosen to allow identification of individual components, such as a key within a record among multiple records. Therefore, a search for a given key among multiple compressed records may occur. If a match is found, only the matching record may be decompressed. Compressing data within records of a mapping table may further enable fine-grained level mapping.
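By way of a non-limiting illustration, the following Python sketch shows one way such a search might proceed, assuming (hypothetically) that keys are left uncompressed while the remainder of each record is compressed, so that only the matching record needs to be decompressed; the record layout and names are assumptions, not the claimed implementation:

```python
import zlib

# Hypothetical record layout: the key stays uncompressed so it can be compared
# directly; only the value portion of each record is compressed.
def build_records(mapping):
    """mapping: dict of key (bytes) -> value (bytes)."""
    return [(key, zlib.compress(value)) for key, value in mapping.items()]

def lookup(records, query_key):
    """Scan the keys of compressed records; decompress only the match."""
    for key, compressed_value in records:
        if key == query_key:
            return zlib.decompress(compressed_value)  # only the matching record
    return None

records = build_records({b"vol1:snap3:sector42": b"phys:0x0001F000",
                         b"vol1:snap3:sector43": b"phys:0x0001F400"})
print(lookup(records, b"vol1:snap3:sector42"))  # b'phys:0x0001F000'
```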
Network architecture 100 includes client computer systems 110a-110c interconnected through networks 180 and 190 to one another and to data storage arrays 120a-120b. Networks 180 and 190 may include a variety of techniques including wireless connection, direct local area network (LAN) connections, wide area network (WAN) connections such as the Internet, a router, storage area network, Ethernet, and others. Networks 180 and 190 may comprise one or more LANs that may also be wireless. Switch 140 may utilize a protocol associated with both networks 180 and 190. The network 190 may interface with a set of communications protocols used for the Internet 160 such as the Transmission Control Protocol (TCP) and the Internet Protocol (IP), or TCP/IP. Switch 150 may be a TCP/IP switch.
Client computer systems 110a-110c are representative of any number of stationary or mobile computers such as desktop personal computers (PCs), servers, server farms, workstations, laptops, handheld computers, personal digital assistants (PDAs), smart phones, and so forth. Each of the client computer systems 110a-110c may include a hypervisor used to support virtual machines (VMs).
Each of the data storage arrays 120a-120b may be used for the sharing of data among different servers, such as the client computer systems 110a-110c. Each of the data storage arrays 120a-120b includes a storage subsystem 170 for data storage. Storage subsystem 170 may comprise a plurality of storage devices 176a-176m. Each of these storage devices 176a-176m may be an SSD. A controller 174 may comprise logic for handling received read/write requests. A random-access memory (RAM) 172 may be used to batch operations, such as received write requests. In various embodiments, when batching write operations (or other operations) non-volatile storage (e.g., NVRAM) may be used.
The base OS 132, the volume manager 134 (or disk array manager 134), any OS drivers (not shown), and other software stored in memory medium 130 may provide functionality for accessing files and managing these functionalities. The base OS 132 and the OS drivers may comprise program instructions stored on the memory medium 130 and executable by processor 122 to perform one or more memory access operations in storage subsystem 170 that correspond to received requests. Each of the data storage arrays 120a-120b may use a network interface 124 to connect to network 180. Similar to client computer systems 110a-110c, in one embodiment, the functionality of network interface 124 may be included on a network adapter card.
In addition to the above, each of the storage controllers 174 within the data storage arrays 120a-120b may support storage array functions such as snapshots, replication and high availability. In addition, each of the storage controllers 174 may support a virtual machine environment that comprises a plurality of volumes with each volume including a plurality of snapshots. In one example, a storage controller 174 may support hundreds of thousands of volumes, wherein each volume includes thousands of snapshots. In one embodiment, a volume may be mapped in fixed-size sectors, such as a 4-kilobyte (KB) page within storage devices 176a-176m. In another embodiment, a volume may be mapped in variable-size sectors such as for write requests. A volume ID, a snapshot ID, and a sector number may be used to identify a given volume.
An address translation table may comprise a plurality of entries, wherein each entry holds a virtual-to-physical mapping for a corresponding data component. This mapping table may be used to map logical read/write requests from each of the client computer systems 110a-110c to physical locations in storage devices 176a-176m. A “physical” pointer value may be read from the mapping table during a lookup operation corresponding to a received read/write request. This physical pointer value may then be used to locate a physical location within the storage devices 176a-176m. It is noted the physical pointer value may be used to access another mapping table within a given storage device of the storage devices 176a-176m. Consequently, one or more levels of indirection may exist between the physical pointer value and a target storage location.
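As a minimal, non-limiting sketch of the lookup just described (with hypothetical table contents and names), a logical key may first yield a pointer that itself indexes a second, device-level table, providing one additional level of indirection before the target storage location is reached:

```python
# Array-level table: (volume, snapshot, sector) key -> "physical" pointer value.
array_level_table = {
    ("vol7", "snap2", 0x10): "dev3:blk_0x44",   # pointer into a device-level table
}
# Device-level table: the pointer resolves to a final physical location.
device_level_table = {
    "dev3:blk_0x44": 0x0009C000,
}

def translate(volume, snapshot, sector):
    pointer = array_level_table.get((volume, snapshot, sector))
    if pointer is None:
        return None
    # The pointer value may itself require another translation inside the device.
    return device_level_table.get(pointer, pointer)

print(hex(translate("vol7", "snap2", 0x10)))    # 0x9c000
```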
Referring to
In various embodiments, the number of bits in a detectable bit pattern may be programmable and the number of instances of a pattern to be detected may be programmable. For example, bit patterns of up to 4, 8, or some other number of bits may be identifiable. Numerous methods of identifying bit patterns are known in the art and are contemplated. For example, various embodiments may compare bits of data to predetermined patterns for identification. As a simple example, there are 16 possible combinations for a pattern of 4 bits (0000-1111). These 16 patterns could be maintained in a table, array, or otherwise. Alternatively, such patterns may be detected using binary logic. Still further, various forms of automata or state machines may be used to detect patterns. Numerous such approaches are possible and are contemplated. In some embodiments, detection logic may compare chunks of M-bytes where M is an integer greater than or equal to one. For example, the byte pattern 0x0A that repeats within a subset may be detected as a repeating pattern, where a single instance of the bit pattern 0x0A has a size of a byte. As used herein, the notation “0x” indicates a hexadecimal value. A comparison of a first byte and a contiguous second byte that results in a match (i.e., the bit pattern in the first byte matches that of the second) indicates at least a start of a repeating pattern. Similarly, a comparison of a first 2-byte value and a contiguous second 2-byte value that results in a match indicates at least a start of a repeating pattern.
In some embodiments, a programmable limit may be established for the maximum size of a pattern. For example, in an embodiment where a repeating pattern cannot exceed four bytes in size, a comparison of a first 4-byte value and a contiguous second 4-byte value that results in a match may indicate the start of a repeating pattern (i.e., a four byte pattern has been detected to occur twice). However, the pattern 0x12345678 0x12345678 that repeats within the subset would not be identified (or qualify) as a repeating pattern since the pattern length is 8 bytes.
In various embodiments, another limit or threshold may be used for the number of contiguous instances of a given pattern needed to qualify as a repeating pattern. For example, if such a threshold value is set at 4, four or more contiguous instances of a pattern would qualify as a repeating pattern, but two or three would not. In some embodiments, the threshold number of contiguous instances of the bit pattern needed to qualify as a repeating pattern may be set to half of a subset. In yet other embodiments, the write request may include an indication and/or identification of patterns of data within the write request. The qualifications for identifying a series of repeating patterns may determine how the data is stored among the mapping table and the data storage.
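A non-limiting Python sketch of such detection logic is shown below. It compares contiguous M-byte chunks and applies the two programmable limits discussed above (a maximum pattern size and a minimum number of contiguous instances); the default limit values and function names are illustrative assumptions only:

```python
def detect_repeating_pattern(data: bytes, max_pattern_bytes: int = 4,
                             min_instances: int = 4):
    """Return (offset, pattern, count) for the first qualifying run, else None.

    A run qualifies only if the pattern is at most max_pattern_bytes long and
    repeats contiguously at least min_instances times; both limits are
    programmable, as described above (the defaults here are assumptions).
    """
    for size in range(1, max_pattern_bytes + 1):      # try 1-byte, 2-byte, ... chunks
        offset = 0
        while offset + size <= len(data):
            pattern = data[offset:offset + size]
            count = 1
            while data[offset + count * size:offset + (count + 1) * size] == pattern:
                count += 1
            if count >= min_instances:
                return offset, pattern, count
            offset += size   # a production detector might advance byte-by-byte instead
    return None

print(detect_repeating_pattern(b"\x12\x34" + b"\x0a" * 20 + b"\x55"))
# -> (2, b'\n', 20): twenty contiguous instances of the byte 0x0A starting at offset 2
```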
In the example shown in
When the write data 210 is written to the data storage medium 230, each of a mapping table 220 and the data storage medium 230 may be updated. In the example shown, the mapping table 220 may typically include at least a key and a pointer. In one embodiment, the key may be an identifier for the write data 210 being stored in the data storage medium 230, and the pointer may be an identification (e.g., an address) corresponding to a location within the data storage medium 230 where the write data 210 is to be stored. For example, the data 210 may be stored as a block, and the pointer may identify an address (e.g., the beginning) of the block. In this example, the mapping table 220 has one entry corresponding to the write data 210 and all of the write data 210 is stored in the data storage medium 230.
The data storage medium 230 may represent an entire allocated block for a write operation or a subset of the allocated block. As shown, the data storage medium 230 stores each of the subsets of the write data 210. Additionally, the data storage medium 230 includes metadata 244. The metadata 244 may store data protection information, such as intra-device protection data, checksums, and/or otherwise. The metadata 244 may store log data. Additionally, the metadata 244 may store data location information such as volume identifiers, sector numbers, data chunk and offset numbers, track numbers, and so forth. Although the metadata 244 is shown at the top of the data storage medium 230, in other examples, the metadata 244 may be stored at the bottom of data storage medium 230. Alternatively, the information in the metadata 244 may be distributed at the top or bottom of the data storage medium 230 and within headers in each of the subsets.
Turning now to
Responsive to the write request, each of a mapping table 320 and the data storage medium 330 is updated. The mapping table 320 has multiple entries for the write data 210. In the example shown, each of the N subsets within the write data 210 has a corresponding entry in the mapping table 320. Each of the keys in the mapping table 320 is unique for a corresponding subset within the write data 210. Similarly, each of the pointers in the mapping table 320 is unique for a corresponding subset within the write data 210.
For subsets within the write data 210 that include a repeating pattern, the corresponding entries in the mapping table 320 store information identifying the pattern data. For example, one or more status fields may be set to indicate the stored data does not include a pointer value. Rather, at least an indication of the repeating pattern is stored. In various embodiments, a single instance of the pattern may be stored in an entry of the mapping table 320, along with a number of instances of the pattern. For example, if Subset 2 stores a repeating pattern of twenty instances of 0x4, an identification of the pattern 0x4 may be stored in the entry for Subset 2 along with an identification of the number of instances (twenty). In the example shown, only the non-pattern data is stored in the storage medium 330 requiring only half the storage of the example of
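One possible layout for such an entry is sketched below in Python, where a status field selects between a physical pointer for a non-pattern subset and an inline pattern plus instance count for a pattern subset; all field names and values are hypothetical and given only for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical entry layout: a status flag selects between a physical pointer
# (non-pattern subset) and an inline pattern with its instance count.
@dataclass
class MappingEntry:
    key: str                       # unique per subset, e.g. "vol1:snap0:subset2"
    is_pattern: bool               # status field: True means no pointer is stored
    pointer: Optional[int] = None  # physical location for non-pattern data
    pattern: Optional[bytes] = None
    instances: int = 0             # number of contiguous instances of the pattern

table = [
    MappingEntry("vol1:snap0:subset1", is_pattern=False, pointer=0x4000),
    # Subset 2 holds twenty instances of 0x04: store the pattern, not the data.
    MappingEntry("vol1:snap0:subset2", is_pattern=True, pattern=b"\x04", instances=20),
]

def read_subset(entry, read_physical):
    if entry.is_pattern:
        return entry.pattern * entry.instances     # reconstruct from the entry alone
    return read_physical(entry.pointer)

print(read_subset(table[1], read_physical=lambda ptr: b""))  # twenty 0x04 bytes
```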
Referring now to
In the example shown, as only the non-pattern data is generally stored in the data storage medium 430, the required storage is approximately half that of
For example, subset 2 in the write data 210 may store a repeating pattern, such as 0x01 0x01. The metadata 444 may store a single instance of the pattern, such as 0x01, or an instance of the repeating pattern, 0x01 0x01. In addition, the metadata 444 may store an indication of a number of instances of the pattern or repeating pattern. Further, the metadata 444 may store an offset within the write data 210 of the repeating pattern. For example, Subset 1 may be at an offset of 0 in the write data 210, Subset 2 at an offset of 1, Subset 3 at an offset of 2, and so on. In various embodiments, the metadata also identifies an offset for each of the non-pattern subsets of data. In this manner, the relative locations of both the pattern data and the non-pattern data are known and the original data 210 can be reconstructed as needed. In such a manner, the write data 210 may be efficiently stored in a data storage medium, with the sizes of the corresponding mapping table 220 and the corresponding data storage medium 430 being reduced relative to the size of the write data 210. In various embodiments, the efficient storage of the write data 210 may be performed in a distributed data storage system utilizing solid-state devices. For example, the network architecture 100 may use the efficient storage of the write data.
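The following non-limiting Python sketch illustrates the kind of header contents described above and how the original data might be reconstructed from them; the field names, subset size, and example values are assumptions made purely for illustration:

```python
SUBSET_SIZE = 4  # bytes per subset in this toy example (an assumption)

header = {
    "pattern": b"\x01",                   # single instance of the repeating pattern
    "instances_per_subset": SUBSET_SIZE,  # enough instances to fill one subset
    "pattern_offsets": [1, 3],            # Subsets 2 and 4 (offsets 1 and 3) hold the pattern
    "non_pattern_offsets": [0, 2],        # Subsets 1 and 3 hold ordinary, non-pattern data
}
stored_non_pattern = [b"ABCD", b"WXYZ"]   # only the non-pattern subsets were written

def reconstruct(header, stored):
    total = len(header["pattern_offsets"]) + len(header["non_pattern_offsets"])
    subsets = [None] * total
    for off in header["pattern_offsets"]:              # regenerate pattern subsets
        subsets[off] = header["pattern"] * header["instances_per_subset"]
    for off, data in zip(header["non_pattern_offsets"], stored):
        subsets[off] = data                             # place stored non-pattern data
    return b"".join(subsets)

print(reconstruct(header, stored_non_pattern))
# -> b'ABCD\x01\x01\x01\x01WXYZ\x01\x01\x01\x01'
```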
Turning now to
In block 502, a write request is received. In some embodiments, an indication of a series of patterns is provided in the write request. In other embodiments, patterns of data are detected. For example, control logic may compare contiguous chunks of M-bytes, wherein matches indicate a pattern. In various embodiments, the control logic is within the data storage controller 174, though it may be located elsewhere. The integer M may be any positive value from 1 to a limit, such as 4, in one example. A same byte pattern of 0x00 that repeats within a given subset may be detected as a repeating pattern. In some embodiments, the subset is a sector in an SSD. A comparison of a first portion of data and a contiguous second portion of data that results in a match indicates at least a start of a repeating pattern.
If the control logic does not detect that the write data of the write request has a series of patterns intermingled with non-pattern data (conditional block 504), then in block 506, a new mapping table entry is created with a pointer to (or other identification of) a location in the SSDs for the write data. Alternatively, if the entire write data is a series of patterns, the new mapping table entry includes an indication of the pattern. However, in other embodiments, the mapping table includes a pointer to the series of patterns in the write data, and an identification of the pattern is stored in the storage medium rather than the entire write data itself. The write data may be the size of an allocated block that comprises a number of sectors. In some examples, the block includes 64 sectors. In other examples, the block includes 128 sectors. Any number of sectors, or subsets, may be used.
If the control logic does detect that the write data of the write request has a series of patterns intermingled with non-pattern data (conditional block 504), and (in at least some embodiments) the size of the series of patterns is greater than a size threshold (conditional block 508), then in block 510, the offsets for at least the repeating pattern in the write data are determined. In some embodiments, the offsets may use the granularity of a subset or a sector. An indication of the length of the pattern and the pattern itself may be stored with other metadata. In some embodiments, a stride of offsets for at least one repeating pattern is determined. The stride may also be stored with the pattern and the length of the pattern. A stride of offsets for the non-pattern data may additionally be determined and stored with the pattern and the length of the pattern. For example, if the repeating pattern data occurs every other subset (or other unit) as was shown in
In block 514, header information is created with at least offsets, a possible stride of offsets, pattern lengths and patterns for the detected series of repeating patterns. Offsets or a stride of offsets for the non-pattern data may also be included. In block 516, a write operation is performed to the storage medium for the mapping and header information and the non-pattern data.
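By way of illustration only, the following Python sketch walks through blocks 510-516 under simplifying assumptions: a subset consisting of a single repeating byte is treated as pattern data, and when the pattern subsets fall at regular intervals only a start offset and a stride are recorded in the header. All names, the subset size, and the header fields are hypothetical:

```python
def split_write_data(subsets):
    """Separate pattern subsets (a single repeating byte) from non-pattern subsets."""
    pattern, pattern_offsets, non_pattern = None, [], []
    for i, subset in enumerate(subsets):
        if len(set(subset)) == 1:             # every byte in the subset is the same
            pattern = subset[:1]
            pattern_offsets.append(i)
        else:
            non_pattern.append((i, subset))
    return pattern, pattern_offsets, non_pattern

def build_header(pattern, offsets, subset_len):
    """Block 514: record the pattern, its length, and where it occurs."""
    stride = offsets[1] - offsets[0] if len(offsets) > 1 else 0
    regular = all(b - a == stride for a, b in zip(offsets, offsets[1:]))
    return {"pattern": pattern, "pattern_len": len(pattern),
            "start": offsets[0], "stride": stride if regular else None,
            "offsets": None if regular else offsets, "count": len(offsets),
            "instances_per_subset": subset_len // len(pattern)}

subsets = [b"DATA", b"\x00" * 4, b"MORE", b"\x00" * 4]   # pattern in every other subset
pattern, offsets, non_pattern = split_write_data(subsets)
header = build_header(pattern, offsets, subset_len=4)
print(header)        # a stride of 2 starting at offset 1 describes the pattern subsets
print(non_pattern)   # block 516: only these subsets (plus the header) are written
```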
Turning now to
In block 602, a read request is received. A key generator may receive one or more requester data inputs. The received read request may identify a particular volume, sector, and length. In block 604, the key generator may produce a query key value that includes a volume identifier (ID), a logical or virtual address, a snapshot ID, and a sector number. Other combinations are possible and other or additional values may be utilized as well. In block 606, different portions of the query key value may be compared to values stored in columns that may or may not be contiguous within a mapping table. In various embodiments, the mapping table is an address translation directory table. To provide the different portions of the query key value to the columns within the mapping table, one or more index tables may be accessed beforehand.
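A minimal, non-limiting sketch of such a key generator is given below; the field widths and the packing order are assumptions chosen only to make the example concrete:

```python
def generate_key(volume_id: int, snapshot_id: int, sector: int) -> int:
    """Pack the identifiers into one query key (assumed 16/16/32-bit fields)."""
    return (volume_id << 48) | (snapshot_id << 32) | sector

# Hypothetical mapping table keyed by the generated value.
mapping_table = {generate_key(7, 2, 0x10): 0x0009C000}

query_key = generate_key(7, 2, 0x10)
print(hex(mapping_table[query_key]))   # 0x9c000
```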
In block 608, an associated mapping table entry is obtained. The mapping table result is used in block 610 to perform a storage access that corresponds to the target location of the original read request. If stored header information corresponding to the read request indicates the read data has patterns intermingled with non-pattern data (conditional block 612), then in block 614, the information stored in the header, such as offsets, strides of offsets, pattern lengths, and patterns, is used to reconstruct the requested data. Both non-pattern data and reconstructed pattern data may be combined to recreate the original write data. In block 616, the data corresponding to the target location of the read request is sent to the requester.
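As a further non-limiting illustration, the read-side counterpart of the earlier write-path sketch is shown below: pattern offsets are regenerated from a start offset and stride stored in the header and merged with the stored non-pattern subsets to recreate the original write data (again, all names and values are assumptions):

```python
def reconstruct_from_header(header, non_pattern, total_subsets, subset_len):
    """Block 614: rebuild the original data from the header and stored subsets."""
    offsets = (header["offsets"] if header["offsets"] is not None
               else [header["start"] + i * header["stride"]
                     for i in range(header["count"])])
    subsets = [None] * total_subsets
    for off in offsets:                     # regenerate the pattern subsets
        subsets[off] = header["pattern"] * (subset_len // header["pattern_len"])
    for off, data in non_pattern:           # merge in the stored non-pattern subsets
        subsets[off] = data
    return b"".join(subsets)

header = {"pattern": b"\x00", "pattern_len": 1, "start": 1, "stride": 2,
          "count": 2, "offsets": None}
non_pattern = [(0, b"DATA"), (2, b"MORE")]
print(reconstruct_from_header(header, non_pattern, total_subsets=4, subset_len=4))
# -> b'DATA\x00\x00\x00\x00MORE\x00\x00\x00\x00'
```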
It is noted that the above-described embodiments may comprise software. In such an embodiment, the program instructions that implement the methods and/or mechanisms may be conveyed or stored on a computer readable medium. Numerous types of media which are configured to store program instructions are available and include hard disks, floppy disks, CD-ROM, DVD, flash memory, Programmable ROMs (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage.
In various embodiments, one or more portions of the methods and mechanisms described herein may form part of a cloud-computing environment. In such embodiments, resources may be provided over the Internet as services according to one or more various models. Such models may include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS, computer infrastructure is delivered as a service. In such a case, the computing equipment is generally owned and operated by the service provider. In the PaaS model, software tools and underlying equipment used by developers to develop software solutions may be provided as a service and hosted by the service provider. SaaS typically includes a service provider licensing software as a service on demand. The service provider may host the software, or may deploy the software to a customer for a given period of time. Numerous combinations of the above models are possible and are contemplated.
Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
This is a Continuation Application of U.S. patent application Ser. No. 15/861,279, filed Jan. 3, 2018, which is a Continuation Application of U.S. Pat. No. 9,864,769, issued Jan. 9, 2018.