Not applicable.
Not applicable.
The present invention relates generally to computer networks and more particularly to dispersed storage of data.
Computing devices are known to communicate data, process data, and/or store data. Such computing devices range from wireless smart phones, laptops, tablets, personal computers (PC), work stations, and video game devices, to data centers that support millions of web searches, stock trades, or on-line purchases every day. In general, a computing device includes a central processing unit (CPU), a memory system, user input/output interfaces, peripheral device interfaces, and an interconnecting bus structure.
As is further known, a computer may effectively extend its CPU by using “cloud computing” to perform one or more computing functions (e.g., a service, an application, an algorithm, an arithmetic logic function, etc.) on behalf of the computer. Further, for large services, applications, and/or functions, cloud computing may be performed by multiple cloud computing resources in a distributed manner to improve the response time for completion of the service, application, and/or function. For example, Hadoop is an open source software framework that supports distributed applications enabling application execution by thousands of computers.
In addition to cloud computing, a computer may use “cloud storage” as part of its memory system. As is known, cloud storage enables a user, via its computer, to store files, applications, etc. on an Internet storage system. The Internet storage system may include a RAID (redundant array of independent disks) system and/or a dispersed storage system that uses an error correction scheme to encode data for storage.
Each of the computing devices (1 and 2) may be a portable computing device and/or a fixed computing device. A portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core. A fixed computing device may be a computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment. In an embodiment, each of the computing devices (1 and 2) includes, at a minimum, a computing core as shown in the corresponding figure.
Each of the storage units (SU1-SU10 of each of sites 1 and 2) includes a computing core as shown in the corresponding figure.
In general, when a computing device 1 or 2 has data to store in storage units of the DSN 10, it divides the data (e.g., a file (e.g., text, video, audio, etc.), a data object, or other data arrangement) into a plurality of data segments. The computing device then dispersed storage error encodes each data segment to produce a set of encoded data slices. As a result of encoding, the computing device produces a plurality of sets of encoded data slices, which are provided to the storage units for storage.
The dispersed storage error encoding is done in accordance with a dispersed storage error encoding process based on dispersed storage error encoding parameters. The dispersed storage error encoding parameters include an encoding function (e.g., information dispersal algorithm, Reed-Solomon, Cauchy Reed-Solomon, systematic encoding, non-systematic encoding, on-line codes, etc.), a data segmenting protocol (e.g., data segment size, fixed, variable, etc.), a total number of encoded data slices per encoding of a data segment by the encoding function (e.g., a pillar width number (T)), a decode threshold number (D) of encoded data slices of a set of encoded data slices that are needed to recover the data segment, a read threshold number (R) to indicate a number of encoded data slices per set to be read from storage for decoding, and/or a write threshold number (W) to indicate a number of encoded data slices per set that must be accurately stored before the encoded data segment is deemed to have been properly stored.
As an example, the following selections are made for dispersed storage error encoding: Cauchy Reed-Solomon as the encoding function, a data segmenting protocol, a pillar width of 16, a decode threshold of 10, a read threshold of 11, and a write threshold of 13. The data is divided into a plurality of data segments in accordance with the data segmenting protocol (e.g., divide the data into AA-sized data segments, where a data segment may range from kilobytes to terabytes or more). The number of data segments created depends on the size of the data and the data segmenting protocol.
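For concreteness, these parameter relationships can be expressed in a few lines of code. The sketch below is purely illustrative (the class and field names are assumptions, not part of the specification) and enforces the ordering D ≤ R ≤ W ≤ T that the example values above satisfy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EncodingParameters:
    """Hypothetical container for the dispersed storage error
    encoding parameters described above."""
    pillar_width: int      # T: total encoded data slices per set
    decode_threshold: int  # D: slices needed to recover a data segment
    read_threshold: int    # R: slices to read per set when decoding
    write_threshold: int   # W: slices that must be accurately stored
                           #    before a write is deemed proper

    def __post_init__(self):
        # The definitions imply the ordering D <= R <= W <= T.
        if not (0 < self.decode_threshold <= self.read_threshold
                <= self.write_threshold <= self.pillar_width):
            raise ValueError("thresholds must satisfy D <= R <= W <= T")

# The example parameters from this paragraph.
params = EncodingParameters(pillar_width=16, decode_threshold=10,
                            read_threshold=11, write_threshold=13)
```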
To encode a data segment, the data segment is divided into Y number of data blocks and arranged into a data matrix of D rows (which corresponds to the decode threshold number) by Z columns (where Z=Y/D and Y is selected based on desired data size of the data blocks (e.g., a few bytes to Giga-bytes or more)). The data matrix is multiplied by an encoding matrix of the encoding function to produce a coded matrix of T rows (which corresponds to the total number of encoded data slices per set) by Z columns of coded values. The set of encoded data slices is produced from the coded matrix (e.g., a row corresponds to an encoded data slice). The computing device sends the set of encoded data slices to storage units for storage therein.
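As a hedged illustration of this encoding step, the sketch below arranges data blocks into a D x Z data matrix and multiplies it by a T x D encoding matrix to produce a T x Z coded matrix whose rows are the encoded data slices. It uses plain integer arithmetic and a toy systematic matrix for readability; an actual implementation would operate over a finite field (e.g., GF(256)) with a Cauchy Reed-Solomon matrix, and the function name here is an assumption.

```python
def encode_segment(data_blocks, encoding_matrix):
    """Arrange Y data blocks into a D x Z data matrix and multiply by a
    T x D encoding matrix, yielding a T x Z coded matrix whose rows are
    the encoded data slices (integer toy model, not finite-field math)."""
    T = len(encoding_matrix)     # pillar width: slices per set
    D = len(encoding_matrix[0])  # decode threshold
    Z = len(data_blocks) // D
    data = [data_blocks[r * Z:(r + 1) * Z] for r in range(D)]  # D x Z
    coded = [[sum(encoding_matrix[t][d] * data[d][z] for d in range(D))
              for z in range(Z)]
             for t in range(T)]                                # T x Z
    return coded  # coded[t] is encoded data slice t of the set

# Toy systematic encoding matrix (T = 5, D = 3): the first D rows
# reproduce the data blocks; the last T - D rows produce redundancy.
E = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1],
     [1, 1, 1],
     [1, 2, 3]]

slices = encode_segment(list(range(12)), E)  # Y = 12 blocks, Z = 4
for t, s in enumerate(slices, 1):
    print(f"slice s{t}: {s}")
```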
In an embodiment of the DSN, sites have a sharing relationship. The sharing relationship allows a computing device to issue one set of write requests regarding a set of encoded data slices to some storage units within a sharing group of sites (e.g., two or more sites, each having a number of storage units affiliated with the sharing group of sites), and the identified storage units facilitate storage of one or more copies of the set of encoded data slices to other storage units within the sharing group of sites. Further, the storage units within a site of the sharing group of sites store a decode threshold number, or more, of encoded data slices for a set of encoded data slices to expedite data access requests for the set of encoded data slices.
As an example, DSN 10 includes ten sites. Sites 1 and 2 of the DSN have a sharing relationship (e.g., form a sharing group of sites); sites 3-6 have a second sharing relationship (e.g., form a second sharing group of sites); sites 7-10 have a third sharing relationship (e.g., form a third sharing group of sites); and sites 1, 3, 5, 7, and 9 have a fourth sharing relationship (e.g., form a fourth sharing group of sites). Within a sharing group, each site stores a decode threshold number, or more, of encoded data slices, but holds at least some encoded data slices that differ from those of the other site(s) of the group, to provide redundancy for the other site(s).
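The four example sharing groups can be represented as a simple lookup table, as in the sketch below; the table structure and function name are assumptions for illustration only. Note that a site may belong to more than one sharing group (e.g., site 1 belongs to the first and fourth groups).

```python
# The four example sharing groups of the ten-site DSN described above.
SHARING_GROUPS = [
    {1, 2},           # first sharing group of sites
    {3, 4, 5, 6},     # second sharing group of sites
    {7, 8, 9, 10},    # third sharing group of sites
    {1, 3, 5, 7, 9},  # fourth sharing group of sites
]

def groups_for_site(site: int) -> list[set[int]]:
    """Return every sharing group that the given site belongs to."""
    return [g for g in SHARING_GROUPS if site in g]

print(groups_for_site(1))  # [{1, 2}, {1, 3, 5, 7, 9}]
```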
In an example of operation, a computing device #1 is affiliated with site 1 (e.g., via a direct connection to the LAN 14, by registering with the DSN 10 and indicating that site 1 is its home site, etc.). When computing device #1 has a set of encoded data slices to write to storage units of the DSN, it obtains (e.g., retrieves, creates, requests, selects based on being affiliated with the first site, etc.) a first writing pattern. The first writing pattern is one of a plurality of writing patterns (e.g., there may be a separate writing pattern for each site within the sharing group of sites, or there may be two writing patterns, where one writing pattern is shared by two or more sites, etc.). The first writing pattern includes writing a write threshold number (e.g., W) of encoded data slices to storage units of site #1 and writing a remaining number (e.g., pillar width number (T) − W) of encoded data slices to one or more storage units of another site, or sites, of the sharing group of sites. Various examples of the writing patterns, and of storing encoded data slices based thereon, are described in one or more of the subsequent figures.
The storage units of site 1 send copies of their respective encoded data slices to storage units of other sites in the sharing group of sites in accordance with an inter-site storage unit relationship. The storage units of the other site that received the remaining encoded data slices may also send copies of their respective encoded data slices to still other storage units of other sites in the sharing group of sites in accordance with the inter-site storage unit relationship. The inter-site storage unit relationship ensures that each site stores a desired number of encoded data slices (e.g., the write threshold number). Various examples of the inter-site storage unit relationship, and of sending encoded data slices based thereon to other storage units, are described in one or more of the subsequent figures.
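The writing-pattern behavior just described amounts to a simple partition of a set of slices: the first W slices go to the writer's home site and the remaining T − W slices go to other site(s) of the sharing group. The helper below is a minimal sketch under that assumption; the name is hypothetical.

```python
def split_by_writing_pattern(slices, write_threshold):
    """Partition a set of T encoded data slices: the first W go to the
    writer's home site; the remaining T - W go to other site(s) of the
    sharing group (hypothetical helper, for illustration only)."""
    home_site_slices = slices[:write_threshold]
    remote_site_slices = slices[write_threshold:]
    return home_site_slices, remote_site_slices

# With T = 16 and W = 13, thirteen slices are written locally over the
# LAN and three are written to other site(s) over the WAN.
home, remote = split_by_writing_pattern(
    [f"s{i}" for i in range(1, 17)], 13)
```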
In one example, the sharing groups of sites are determined by a system administrator or other entity and stored such that the sharing groups may be accessed by a computing device via a look-up table, a determination based on computing device affiliation with a site and/or the DSN, and/or a query-response protocol. In an alternative example, the sharing group is created by selecting the sites from a plurality of sites. The sharing group may be created for the computing device for this particular write operation, for a series of write operations, and/or for all data accesses of the computing device; may be created for a group of computing devices to which the computing device belongs; and/or may be created based on a data usage threshold. The sharing group may be created by the computing device, the system administrator, or another entity in the DSN.
The DSTN interface module 76 functions to mimic a conventional operating system (OS) file system interface (e.g., network file system (NFS), flash file system (FFS), disk file system (DFS), file transfer protocol (FTP), web-based distributed authoring and versioning (WebDAV), etc.) and/or a block memory interface (e.g., small computer system interface (SCSI), internet small computer system interface (iSCSI), etc.). The DSTN interface module 76 and/or the network interface module 70 may function as the interface 30 of the user device 14 of the corresponding figure.
In this example, the dispersed storage error encoding parameters include a total number (T) of twelve and a write threshold number (W) of ten.
A comparison of the two writing patterns shows that encoded data slices s9 and s10 are written to storage units 9 and 10 of site 1 regardless of whether the first computing device (e.g., affiliated with site 1) or the second computing device (e.g., affiliated with site 2) wrote the encoded data slices. Similarly, encoded data slices s11 and s12 are written to storage units 1 and 2 of site 2. Encoded data slices s1-s8 are stored in different storage units depending on whether the first or the second computing device does the writing. When the first computing device does the writing, encoded data slices s1-s8 are stored in storage units SU1-SU8 of site 1 and when the second computing device does the writing, encoded data slices s1-s8 are stored in storage units SU3-SU10 of site 2.
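The two patterns can be written out as explicit slice-to-storage-unit tables, which makes the overlap described above easy to verify. The dictionary representation below is an assumption for illustration; only the slice/site/unit assignments come from the example.

```python
# The two writing patterns of this example (T = 12, W = 10), expressed
# as slice -> (site, storage unit) tables.
FIRST_PATTERN = {**{f"s{i}": (1, f"SU{i}") for i in range(1, 11)},
                 "s11": (2, "SU1"), "s12": (2, "SU2")}

SECOND_PATTERN = {**{f"s{i}": (2, f"SU{i + 2}") for i in range(1, 9)},
                  "s9": (1, "SU9"), "s10": (1, "SU10"),
                  "s11": (2, "SU1"), "s12": (2, "SU2")}

# Regardless of which pattern is used, s9/s10 land on SU9/SU10 of
# site 1 and s11/s12 land on SU1/SU2 of site 2.
fixed = {s for s in FIRST_PATTERN
         if FIRST_PATTERN[s] == SECOND_PATTERN[s]}
print(sorted(fixed))  # ['s10', 's11', 's12', 's9']
```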
The inter-site storage unit relationship 54 of the corresponding figure maps storage units of site 1 to counterpart storage units of site 2.
As shown, storage units SU1-SU8 of site 1 have corresponding storage units SU3-SU10 of site 2 and will store and copy encoded data slices as discussed above. Storage units SU9 and SU10 of site 1 do not have corresponding storage units in site 2 because, regardless of which writing pattern is used, storage units SU9 and SU10 store encoded data slices s9 and s10, respectively. Storage units SU1 and SU2 of site 2 also do not have corresponding storage units of site 1 since they store encoded data slices s11 and s12, respectively, regardless of which writing pattern is used.
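One plausible encoding of this inter-site storage unit relationship is a correspondence table, as sketched below (the names and the copy helper are assumptions). Units with no counterpart (SU9/SU10 of site 1 and SU1/SU2 of site 2) are simply absent from the mapping.

```python
# Correspondence under the inter-site storage unit relationship 54:
# SU-n of site 1 maps to SU-(n + 2) of site 2 for n = 1..8. SU9/SU10
# of site 1 and SU1/SU2 of site 2 have no counterpart and are omitted.
SITE1_TO_SITE2 = {f"SU{n}": f"SU{n + 2}" for n in range(1, 9)}
SITE2_TO_SITE1 = {v: k for k, v in SITE1_TO_SITE2.items()}

def replicate(site1_holdings: dict[str, str]) -> dict[str, str]:
    """Copy each slice held by a site-1 unit to its site-2 counterpart
    (hypothetical helper illustrating the copy step described above)."""
    return {SITE1_TO_SITE2[su]: slice_name
            for su, slice_name in site1_holdings.items()
            if su in SITE1_TO_SITE2}

# SU1 has a counterpart (SU3 of site 2); SU9 does not, so no copy.
print(replicate({"SU1": "s1", "SU9": "s9"}))  # {'SU3': 's1'}
```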
In accordance with the first writing pattern 50, the computing device #1 sends LAN write requests 56 to storage units SU1-SU10 of site 1 regarding encoded data slices s1-s10. The computing device #1 also sends WAN write requests 58 to storage units SU1 and SU2 of site 2 regarding encoded data slices s11 and s12. The storage units of the sites store their respective encoded data slices as shown in the figure.
In accordance with the second writing pattern 50, the computing device #2 sends LAN write requests 66 to storage units SU1-SU10 of site 2 regarding encoded data slices s1-s8, s11, and s12. The computing device #2 also sends WAN write requests 68 to storage units SU9 and SU10 of site 1 regarding encoded data slices s9 and s10. The storage units of the sites store their respective encoded data slices as shown in the figure.
In accordance with the third writing pattern 76, the computing device #3 sends LAN write requests 80 to storage units SU1-SU10 of site 3 regarding encoded data slices s1-s10. The computing device #3 also sends WAN write requests 82 to storage units SU1 and SU2 of site 2 regarding encoded data slices s11 and s12. The storage units of the sites store their respective encoded data slices as shown in the figure.
As shown, computing device 1 encodes a data segment to produce a set of twelve encoded data slices. In accordance with a first writing pattern, the computing device 1 sends LAN write requests 100 for encoded data slices s1-s10 to storage units SU1-SU5 of site 1. Each storage unit stores two encoded data slices as shown. In addition, the computing device 1 sends WAN write requests 102 for encoded data slices s11 and s12 to storage unit SU1 of site 2 for storage therein.
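A minimal sketch of this many-slices-per-unit assignment follows, assuming consecutive pairs of slices go to consecutive storage units; the exact pairing is given by the writing pattern in the figure, so the chunking below is an illustrative assumption.

```python
def assign_pairs(slices, units):
    """Assign consecutive pairs of encoded data slices to storage units
    (illustrative; the actual pairing is set by the writing pattern)."""
    return {unit: slices[2 * i:2 * i + 2] for i, unit in enumerate(units)}

site1 = assign_pairs([f"s{i}" for i in range(1, 11)],
                     [f"SU{j}" for j in range(1, 6)])
# {'SU1': ['s1', 's2'], 'SU2': ['s3', 's4'], ..., 'SU5': ['s9', 's10']}
site2 = {"SU1": ["s11", "s12"]}  # remaining slices go to SU1 of site 2
```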
As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for the corresponding term and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As may also be used herein, the term(s) “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “operable to” or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with” includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably” indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.
As may also be used herein, the terms “processing module”, “processing circuit”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.
The present invention has been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
In addition, a flow diagram may include a “start” and/or “continue” indication. The “start” and “continue” indications reflect that the steps presented can optionally be incorporated in or otherwise used in conjunction with other routines. In this context, “start” indicates the beginning of the first step presented and may be preceded by other activities not specifically shown. Further, the “continue” indication reflects that the steps presented may be performed multiple times and/or may be succeeded by other activities not specifically shown. Further, while a flow diagram indicates a particular ordering of steps, other orderings are likewise possible provided that the principles of causality are maintained.
The present invention may have also been described, at least in part, in terms of one or more embodiments. An embodiment of the present invention is used herein to illustrate the present invention, an aspect thereof, a feature thereof, a concept thereof, and/or an example thereof. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process that embodies the present invention may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
The term “module” is used in the description of the various embodiments of the present invention. A module includes a processing module, a functional block, hardware, and/or software stored on memory for performing one or more functions as may be described herein. Note that, if the module is implemented via hardware, the hardware may operate independently and/or in conjunction with software and/or firmware. As used herein, a module may contain one or more sub-modules, each of which may be one or more modules.
While particular combinations of various functions and features of the present invention have been expressly described herein, other combinations of these features and functions are likewise possible. The present invention is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.