This disclosure relates generally to data processing, and more specifically, to data scrubbing in cluster-based storage systems.
The approaches described in this section could be pursued but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
A typical computer storage system includes one or more memory devices that are configured to store digital data associated with software, digital documents, and other resources. For example, a memory device may include a mechanical hard disk drive (HDD), a solid-state drive (SSD) such as NAND (NOT AND) flash, random access memory (RAM), read-only memory (ROM), or other types of devices. Each type of memory device may be suitable for a particular purpose, performance range, and operational environment.
In general, memory devices are subject to failure, and thus data stored on memory devices (magnetically, optically, electronically, and so on) may decay in various ways. For example, data stored magnetically (e.g., on an HDD) may be lost due to decay of the magnetic fields. Data stored by altering material structure (e.g., on an SSD) may be lost due to further changes in that structure. Both HDDs and SSDs are prone to physical damage, including complete failures and partial failures of some sections. Other problems, such as firmware and software bugs, may also corrupt data stored on memory devices.
One common solution to alleviate these issues is to duplicate data across redundant disk drives. One such redundant drive approach is facilitated by a Redundant Array of Independent Disks (RAID). Multiple physical disks form an array, and parity data is added to the original data before the data is stored across the array. The parity is calculated such that the failure of one or more disks does not result in the loss of the original data, which can be reconstructed from the remaining functioning disks. A RAID may use, for example, three or more disks to protect data from failures of any of the disks.
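As a simple illustration (not part of the present disclosure), the following Python sketch shows how XOR-based parity of the kind used by common RAID levels allows the contents of one failed disk to be rebuilt from the surviving disks and the parity block; the block values are arbitrary examples.

    # Minimal sketch (illustrative only): XOR parity over equal-sized blocks.
    def xor_blocks(*blocks):
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    data_disks = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]  # three data disks
    parity = xor_blocks(*data_disks)                      # stored on a parity disk

    # If disk 1 fails, its contents are the XOR of the surviving disks and parity.
    rebuilt = xor_blocks(data_disks[0], data_disks[2], parity)
    assert rebuilt == data_disks[1]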
Because RAID replicates data at the storage level, it may propagate errors across multiple copies if the data already contained errors when it was copied. Other shortcomings of traditional RAID solutions are associated with editing data in place, which may introduce additional errors into the data objects being edited or into nearby data objects. For example, writing to an SSD may affect the structure of nearby material, leading to errors in other data sectors.
Accordingly, there is a need to develop a storage technique that minimizes adverse effects of storage device failures, provides improved efficiency, and enhances protection against data loss.
This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
According to an aspect of the present disclosure, a method is provided for data scrubbing in a cluster-based storage system. The method may include maintaining the cluster-based storage system including a plurality of storage devices, which are configured to maintain a plurality of data objects and related information as described herein. Further, the method may include storing a plurality of replicas associated with a data object in a predefined number of the storage devices, for example, one replica per storage device. The method may also include storing a plurality of hash values associated with each replica in the plurality of storage devices, for example, one hash value per storage device.
In certain embodiments, the hash values may be computed over the object data using a hash function that guarantees different object values are statistically likely to result in different hashes. For example, the cryptographic hash function SHA-1 can be used. SHA-1 generates a hash value with 2^160 different possible values, thereby making it statistically unlikely that two or more data objects with different content will result in the same hash value.
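For illustration, the following minimal Python sketch computes such a hash over an object's data with the standard hashlib library; the function name object_hash is an arbitrary choice, and SHA-1 is used here only because it is the example named above.

    import hashlib

    def object_hash(data):
        # 20-byte (160-bit) digest, i.e., 2^160 possible values.
        return hashlib.sha1(data).digest()

    print(object_hash(b"example object contents").hex())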
In certain embodiments, the hash value of each data object may be calculated before the data object is delivered to the cluster, ensuring that the hash value is computed before any corruption can occur within the cluster. In general, the closer to the origin of the data object the hash value is computed, the more types of failures can be remedied. In yet other embodiments, a hash value for each data object can be calculated and then delivered along with its corresponding data object to the cluster-based storage system. This approach helps ensure that the data object is not corrupted while in transit from a client to the cluster-based storage system.
Furthermore, the method for data scrubbing may include loading a first hash value of the plurality of hash values from one of the storage devices. The method may further include loading the replica of the data object corresponding to the first hash value. The method may also include calculating a second hash value over the loaded replica, and comparing the first hash value and the second hash value to identify a corruption of either the data object or the first hash value. Based on the comparison (e.g., if the first hash value differs from the second hash value), the method may proceed to restoring, by one or more processors, the data object based at least in part on one replica of the plurality of replicas.
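A minimal sketch of this comparison step, assuming the stored ("first") hash and the replica have already been loaded from the storage device (the helper name replica_is_valid is an assumption, not a name from the disclosure):

    import hashlib

    def replica_is_valid(first_hash, replica):
        # Recompute the ("second") hash over the loaded replica and compare.
        second_hash = hashlib.sha1(replica).digest()
        return second_hash == first_hash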
In certain embodiments, the cluster-based storage system may include a plurality of storage devices for storing data objects. Each storage device may be divided into multiple index sections and data sections, wherein an index section stores the hash values and sizes of corresponding data objects stored in the data sections. In certain embodiments, the method for data scrubbing may commence with generating a hash value of a data object and writing one or more data object replicas and the hash value to one or more storage devices, storing the replicas in data sections and the corresponding hash values in index sections.
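Purely as an illustration of this layout, the following sketch models an index section as a list of entries recording a hash and size per object; the class and field names, and the offset field, are assumptions added for clarity rather than part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class IndexEntry:
        object_hash: bytes  # e.g., a 20-byte SHA-1 digest
        size: int           # size of the object stored in the data section
        offset: int         # assumed field: where the object begins

    @dataclass
    class StorageDevice:
        index_section: list      # list of IndexEntry records
        data_section: bytearray  # raw object storage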
It should be understood that generating the hash value and writing the one or more data objects are not strictly necessary to accomplish the goals of the claimed invention; the hash value may have already been generated and the data object already written.
In certain embodiments, the hash values may include digital signatures of the data objects. Examples of signature algorithms resulting in digital signatures include SHA-1, SHA-3, TIGER, and so forth. In certain embodiments, the method may include periodic scanning of the storage devices. During the scanning, the previously mentioned steps of loading the hash value and the corresponding data object are performed for each replica stored in all storage devices. Accordingly, each data object replica may be assessed for validity by comparing the loaded hash value with the recomputed hash value. If there is no difference between the two, the corresponding data object is considered valid. Otherwise, if the first hash value differs from the second hash value, the data object is replaced with a verified replica. In this regard, it is important that the replica used for replacement has itself already been verified.
In certain embodiments, when it is determined that no replica of a certain data object is free from corruption, the method may proceed to attempt a computational recovery process.
In particular, according to a first computational recovery process, single-bit errors in hash values, which include "strong" digital signatures generated by the methods mentioned above, may be detected and corrected without prior knowledge of whether there are also errors in the corresponding data object. To this end, the method may first compute a hash value of the data object, and then adjust the stored hash value by flipping (inverting) its bits one at a time, comparing each adjusted hash value to the computed hash value. This process may be repeated until there is a match or every single-bit flip has been tried. If a match is found, the corruption has been detected, and the corrected hash value can be written to the corresponding storage device.
Alternatively, if no match is found, then according to a second computational recovery process, the hash value retrieved from the storage device can be considered correct, and the algorithm can proceed to attempt to recover from a single-bit error in the data object. To this end, the method may adjust the data object by flipping (inverting) the bits of the stored data object one at a time, computing a hash value of the adjusted data object and comparing it to the stored hash value. This process may be repeated until there is a match or every single-bit flip has been tried. If a match is found, the corruption has been detected, and the corrected data object can be written back to a storage device.
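The following Python sketch illustrates both single-bit recovery passes under the assumptions above (SHA-1 as the hash function; helper names such as flip_bit, recover_hash, and recover_object are illustrative only):

    import hashlib

    def flip_bit(data, bit):
        out = bytearray(data)
        out[bit // 8] ^= 1 << (bit % 8)
        return bytes(out)

    def recover_hash(stored_hash, obj):
        # First process: assume the object is intact and the stored hash
        # contains a single-bit error.
        computed = hashlib.sha1(obj).digest()
        for bit in range(len(stored_hash) * 8):
            if flip_bit(stored_hash, bit) == computed:
                return computed  # corrected hash to write back
        return None

    def recover_object(stored_hash, obj):
        # Second process: assume the stored hash is correct and the object
        # contains a single-bit error.
        for bit in range(len(obj) * 8):
            candidate = flip_bit(obj, bit)
            if hashlib.sha1(candidate).digest() == stored_hash:
                return candidate  # corrected object to write back
        return None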
In certain embodiments, if both of the above computational recovery processes fail to find a match, more than one bit in the hash value or data object may need to be flipped to identify the error. Thus, in certain embodiments, the above-described computational recovery processes may be repeated with more than one bit flipped at a time, following the same scheme as above.
In certain embodiments, this algorithm can be run in parallel. In other words, there may be multiple computational recovery processes running in parallel, each flipping a specific number of bits in hash values and data objects.
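A hedged sketch of such a multi-bit pass, reusing flip_bit from the previous sketch: each parallel worker could be given a distinct bit count k. Note that the search space grows combinatorially with k, so this is practical only for small k and small objects.

    import hashlib
    from itertools import combinations

    def recover_object_k_bits(stored_hash, obj, k):
        # Try every combination of k bit positions in the stored object.
        for bits in combinations(range(len(obj) * 8), k):
            candidate = obj
            for bit in bits:
                candidate = flip_bit(candidate, bit)
            if hashlib.sha1(candidate).digest() == stored_hash:
                return candidate
        return None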
In various embodiments, the scanning of the storage devices may be performed regularly; however, the frequency of scanning may be based upon one or more predetermined rules or criteria. In example embodiments, the frequency of scanning may depend on the age of a storage device, its time in use, storage device health reports, operational state (e.g., an idle state), and so forth. In various embodiments, the scanning of the storage devices may be opportunistic, that is, performed during unrelated read operations. In other words, the scanning process may take advantage of read operations initiated by unrelated accesses, such as retrieving an object at the request of an end user.
In further example embodiments of the present disclosure, there is provided a file system configured to implement the method steps described herein. In yet other example embodiments of the present disclosure, the method steps are stored on a machine-readable medium comprising instructions, which, when executed by one or more processors, perform the recited steps. In yet further example embodiments, hardware systems or devices can be adapted to perform the recited steps. Other features, examples, and embodiments are described below.
Embodiments are illustrated by way of example, and not by limitation, in the figures of the accompanying drawings, in which like references indicate similar elements.
The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
Techniques of the embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system, or in hardware utilizing microprocessors, specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium, such as a disk drive or other computer-readable medium. It should be noted that the methods disclosed herein can be implemented by a computer (e.g., a server, desktop computer, tablet computer, or laptop computer), game console, handheld gaming device, cellular phone, smart phone, smart television system, and so forth.
The technology described herein relates to data scrubbing in cluster-based storage systems. This technology protects data against failures of storage devices by periodically reading data object replicas stored in a plurality of storage devices and rewriting those replicas that have been damaged. The present disclosure addresses aspects of writing data object replicas, checking the validity of data object replicas, and performing data scrubbing based upon the results of the checking.
As shown in the figure, each of the storage nodes 105 includes a processing unit 110, such as a processor, controller, microprocessor, microcontroller, logic, central processing unit (CPU), or any other similar computing device. In certain embodiments, the processing unit 110 may reside in a user device or in a network environment (e.g., in a server). Each of the storage nodes 105 also includes a plurality of storage devices 120 configured to store data objects and related information, such as hash values and digital signatures. Each of the storage nodes 105 may also include a file management unit 130, which may be configured to manage transfers and storage of data objects in the plurality of storage devices 120. In yet other examples, however, there may be just one file management unit 130 for the entire system 100.
According to various embodiments of the present disclosure, a certain number of replicas of a data object 230 may be generated and stored on multiple chassis 210A-C, one replica per chassis. Each chassis to which the data object 230 is replicated is responsible for storing the data object 230 and its hash on an internal storage device 120. If there are not enough chassis to achieve the desired replication factor, a chassis may store additional replicas on other internal storage devices 120. For example, the processing unit 110 may generate three replicas of the data object 230, send one replica to each of the chassis 210B and 210C, and store one replica on an internal storage device 120. Upon receiving the data object, each of the chassis 210B and 210C stores its replica on an internal storage device 120. The processing unit 110, in certain embodiments, may also be a part of the chassis 210A-C.
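One possible placement policy consistent with this description is sketched below; the round-robin choice and all names are assumptions for illustration, not the claimed placement scheme.

    def place_replicas(chassis_ids, replication_factor):
        # Round-robin over the available chassis; when there are fewer chassis
        # than replicas, the wrap-around places extra replicas on additional
        # internal storage devices of an already-used chassis.
        return [chassis_ids[i % len(chassis_ids)]
                for i in range(replication_factor)]

    print(place_replicas(["210A", "210B", "210C"], 3))  # one replica per chassis
    print(place_replicas(["210A", "210B"], 3))          # "210A" holds two replicas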
The present technology ensures that data objects 230 are never overwritten. Thus, new versions of the data objects 230 may be written to previously unused disk space, and locations of old versions may not be added to the unused space until after the new version has been committed. Traditional storage systems, in contrast, may edit data in-place, thereby overwriting stored data.
The method 500 may commence at operation 505 with the processing unit 110 accessing and scanning the index section 310. At operation 510, the processing unit 110 may load the hash value (hereinafter referred to as the "first hash value") from a storage device 120. At operation 515, the processing unit 110 may load the data object replica stored in the same storage device 120. At operation 520, the processing unit 110 may calculate a second hash value with respect to the loaded data object. At operation 530, the processing unit 110 may compare the first and second hash values. If it is determined that the first hash value and the second hash value are equal, as shown at operation 540, there is no error in the data object replica or in the hash value stored in the storage device 120. If, on the other hand, it is determined that the first hash value differs from the second hash value, the data object replica stored in the storage device 120 is considered invalid, and the method may proceed to operation 550.
At operation 550, corresponding objects in other replicas can be checked using the same process as described above. If, at operation 560, it is determined that at least one valid object or uncorrupted hash exists in the storage devices 120, then at operation 570 the invalid data object or corrupted hash can be replaced with the correct version. If no valid object is found in the storage devices 120, the method proceeds to operation 580, where techniques for correcting bit rot can be tried. The foregoing techniques are described in more detail below.
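The decision flow of operations 550-580 might be sketched as follows, with all helper names assumed: a corrupt replica is rewritten from a verified one when possible, and the bit-rot recovery passes sketched earlier are attempted otherwise.

    import hashlib

    def is_valid(stored_hash, obj):
        return hashlib.sha1(obj).digest() == stored_hash

    def scrub_object(replicas):
        # replicas: list of (stored_hash, object_bytes) pairs, one per device.
        verified = [r for r in replicas if is_valid(*r)]
        if not verified:
            # Operation 580: no verified copy exists anywhere, so fall back
            # to the computational bit-rot recovery passes sketched earlier.
            return None
        # Operations 560-570: rewrite each corrupt replica from a verified one.
        return [r if is_valid(*r) else verified[0] for r in replicas]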
The method 500 can be performed independently for each data object replica stored in the storage devices 120A-120C. Additionally, the hash value written to the storage device can itself be verified by using the method 500. Those skilled in the art will appreciate that the method 500 may be repeated on a regular basis for every storage device 120.
According to various embodiments of the present disclosure, the storage devices 120 may have a minimum data size for which any changes can be made. For example, many HDDs are divided into sectors of 512 bytes; thus, even if only a single bit of a sector becomes unusable, the data stored in the entire sector should be moved elsewhere. Additionally, some types of data corruption on storage devices 120 may result in multiple contiguous sectors becoming unusable, in which case more than one data object and/or hash may be affected. Thus, the present technology allows for intelligent data object scrubbing involving rewriting multiple data sectors, multiple hashes, and multiple data objects when required. This allows full data objects to be moved without unwanted splitting among multiple data sectors or storage devices.
According to various embodiments of the present disclosure, the corrected data object and its corresponding index section are written to a new area of the storage device 120 which was previously unused.
In yet more embodiments, statistics may be recorded, and regions that show repeated corruptions exceeding a threshold may be retired and never used again.
According to another aspect of the present technology, and as already mentioned above, each index section can include a digital signature (a hash value, such as a SHA value) of the index section. Thus, before hash values are recomputed and compared to the stored hashes as described in the method 500, the index itself can be verified, and corruptions fixed in the same manner as previously described.
In various embodiments, the method 500 may be performed periodically; however, the frequency for performing this method may be based on predetermined criteria, as described below. In some examples, if an error is found and cannot be fixed, the techniques for data scrubbing described herein may be postponed for a predetermined time period, since storage devices 120 may fix their own errors periodically. In other examples, the scrubbing method can be started manually by the end user, for example, because the end user suspects a corruption.
The frequency of scanning (see operation 505) may be based on the expected benefit, because the scanning itself can sometimes create new issues. In addition to periodic scanning, opportunistic scanning can be performed. For example, if one or more data objects are read during an unrelated operation, the present technology can take advantage of that access and perform the scanning and data scrubbing while the data is being read. Thus, the next time data scrubbing is performed, it does not need to read these data objects again.
In various embodiments, the storage devices 120 can be divided into regions, and various statistics can be kept on each region. Some examples include error counts, time of last scrub, time of last write, time of last read, threshold until retirement, and so forth. These statistics can be used to calculate a future time when each region should be re-scrubbed. Some storage devices 120 can report statistics regarding their own health, so that a decision as to when to perform the data scrubbing can be based on this reported data. For example, the age of the device, expected lifetime, remaining lifetime, error detection rate, error correction rate, number of remapped sectors, and so forth can be reported. In contrast to HDDs, SSDs can better predict their own failures. Accordingly, the data scrubbing frequency can be based on the type of the storage device and the information provided by the storage device.
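As an illustration only, the following sketch keeps a few of the statistics named above per region and derives a next-scrub time from them; every field name, threshold, and interval here is an assumed example, not part of the disclosure.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class RegionStats:
        error_count: int = 0
        last_scrub: float = field(default_factory=time.time)
        retire_threshold: int = 10               # assumed example value
        base_interval: float = 7 * 24 * 3600.0   # one week, as an example

        def retired(self):
            # Retire regions that show repeated corruptions over a threshold.
            return self.error_count >= self.retire_threshold

        def next_scrub_time(self):
            # Scrub error-prone regions more often by shrinking the interval.
            return self.last_scrub + self.base_interval / (1 + self.error_count)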
It should also be mentioned that performing the method 500 may slow down other processes using the storage devices 120. Therefore, the number of storage devices that are scrubbed at any given time, and how "aggressively" they are scrubbed, depend on various factors. For example, the storage devices can be scrubbed only when they are otherwise idle, all the time, only at light loads, opportunistically, or depending on other parameters concerning system loads and user response time.
In various embodiments, the bit rot recovery techniques may be performed independently on each storage device, as each device might contain different failures. In other embodiments, the recovery may be performed once, and all devices containing corrupt data objects or indexes may be rewritten with the correct data.
The example computer system 800 includes a processor or multiple processors 805 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 810, and a static memory 815, which communicate with each other via a bus 820. The computer system 800 may also include at least one input device 830, such as an alphanumeric input device (e.g., a keyboard), a cursor control device (e.g., a mouse), and so forth. The computer system 800 may also include a disk drive unit 835 and a network interface device 845.
The disk drive unit 835 includes a computer-readable medium 850, which stores one or more sets of instructions and data structures (e.g., instructions 855) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 855 can also reside, completely or at least partially, within the main memory 810 and/or within the processors 805 during execution thereof by the computer system 800. The main memory 810 and the processors 805 also constitute machine-readable media.
The instructions 855 can further be transmitted or received over the network 860 via the network interface device 845 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP), CAN, Serial, and Modbus). For example, the network 860 may include one or more of the following: the Internet, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, a Digital Data Service (DDS) connection, a DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, GPS, CDPD (Cellular Digital Packet Data), a RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network.
While the computer-readable medium 850 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks (DVDs), random access memory (RAM), read only memory (ROM), and the like.
The example embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, Hypertext Markup Language (HTML), Dynamic HTML, Extensible Markup Language (XML), Extensible Stylesheet Language (XSL), Document Style Semantics and Specification Language (DSSSL), Cascading Style Sheets (CSS), Synchronized Multimedia Integration Language (SMIL), Wireless Markup Language (WML), Java™, Jini™, C, C++, Go, Perl, UNIX Shell, Visual Basic or Visual Basic Script, Virtual Reality Markup Language (VRML), ColdFusion™ or other compilers, assemblers, interpreters or other computer languages or platforms.
Thus, methods and systems for data scrubbing are disclosed. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The present application claims benefit of U.S. provisional application No. 61/837,078, filed on Jun. 19, 2013. The disclosure of the aforementioned application is incorporated herein by reference for all purposes.