Large-scale network-based services often require large-scale data storage. For example, Internet email services store large numbers of user inboxes, each of which may itself contain a sizable quantity of data. This large-scale data storage is often implemented in datacenters made up of storage and computation devices. The storage devices are typically arranged in a cluster and store redundant copies of the data. This redundancy is often achieved through use of a redundant array of inexpensive disks (RAID) configuration and helps minimize the risk of data loss. The computation devices are likewise typically arranged in a cluster.
Both sets of clusters often suffer from a number of bandwidth bottlenecks that reduce datacenter efficiency. For instance, a number of storage devices or computation devices can be linked to a single network switch. Network switches are traditionally arranged in a hierarchy, with so-called “core switches” at the top, fed by “top of rack” switches, which are in turn attached to individual computation devices. The top-of-rack switches are typically provisioned with far more collective bandwidth to the devices below them in the hierarchy than to the core switches above them. This causes congestion and inefficient datacenter performance. The same is true within a storage device or computation device: a storage device is provisioned with disks having a collective bandwidth that is greater than the collective bandwidth of the network interface component(s) connecting it to the network. Likewise, computation devices are provisioned with an input/output bus having a bandwidth that is greater than the collective network interface bandwidth. In both cases, the scarcity of network bandwidth causes congestion and inefficiency.
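To make the mismatch concrete, the short Python sketch below computes a switch's oversubscription ratio, that is, how much more bandwidth the devices below it can offer than its uplinks can carry. The function name and the example figures are illustrative, not taken from any particular datacenter.

```python
def oversubscription_ratio(num_devices: int, device_link_gbps: float, uplink_gbps: float) -> float:
    """Ratio of the collective device-facing bandwidth to the uplink bandwidth of one switch."""
    collective_downlink = num_devices * device_link_gbps
    return collective_downlink / uplink_gbps

# Hypothetical rack: 40 devices at 1 Gbit/s each behind a single 10 Gbit/s uplink.
ratio = oversubscription_ratio(num_devices=40, device_link_gbps=1.0, uplink_gbps=10.0)
print(f"{ratio:.0f}:1")  # 4:1, so at most a quarter of the devices' traffic can leave the rack at full rate
```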
To resolve these inefficiencies and bottlenecks, many datacenter applications are implemented according to the “MapReduce” model. In the MapReduce model, computation and storage devices are integrated such that the program reading and writing data is located on the same device that stores the data. The MapReduce model, however, introduces new problems for programmers and operators, constraining how data is placed, stored, and moved in order to achieve adequate efficiency over the bandwidth-congested components. Often, this requires fragmenting a program into a series of smaller routines to run on separate systems.
Systems described herein include storage and computation nodes with bandwidth proportioned according to the capabilities of each node. Each node is provisioned with one or more network interface components having a collective bandwidth proportioned to the bandwidth of the node's other components, such as storage unit bandwidth or input/output bus bandwidth. By provisioning network interface components based on this proportioning of bandwidth, each node is able to communicate to and from other nodes at the bandwidth of those node components. For example, a computation node is provisioned with network interface components having sufficient bandwidth to allow the computation node to communicate at the bandwidth of its input/output bus. Likewise, a storage node is provisioned with network interface components having sufficient bandwidth to allow the storage node to communicate at the bandwidth of its storage units. In one implementation, the collective bandwidth of the node components is matched to, or within a predefined tolerance of, the collective bandwidth of the network interface components of the node. By proportioning bandwidth in this manner, the computation nodes of the system are able to access data stored on the storage nodes with performance substantially equivalent (i.e., matching or within a predefined tolerance) to accesses of data stored in local storage of the computation nodes.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The detailed description is set forth with reference to the accompanying figures, in which the left-most digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features.
FIGS. 1a-1d illustrate block diagrams showing example configurations of storage and computation nodes, in accordance with various embodiments.
FIGS. 2a-2b illustrate block diagrams showing example system architectures, in accordance with various embodiments.
Described herein are storage nodes and computation nodes, as well as systems including at least one of each. Such systems can be used in datacenters for applications with large data storage requirements and/or large bandwidth requirements for input/output operations. For example, the system described herein could be an Internet email service. The storage nodes store inboxes and other data associated with user email accounts, and the computation nodes read from and write to the stored inboxes. To avoid bottlenecks when transmitting requests and data between the nodes, each storage and computation node is provisioned with one or more network interface components having a collective bandwidth that is proportioned to the bandwidth of the other node components. As used herein, “proportioned” means that the bandwidths match or are within a predefined tolerance of one another (e.g., within ninety-five percent, ninety percent, eighty percent, seventy percent, etc.). Thus, in each storage node, the collective bandwidth of the network interface components and the collective bandwidth of the one or more storage units of the storage node are proportioned to one another. And in each computation node, the collective bandwidth of the network interface components and the bandwidth of the input/output (I/O) bus of the computation node are proportioned to one another.
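The following minimal sketch expresses the “proportioned” test just described; the function name and the default ninety-percent tolerance are assumptions chosen for illustration.

```python
def is_proportioned(bandwidth_a_gbps: float, bandwidth_b_gbps: float, tolerance: float = 0.90) -> bool:
    """True if the two bandwidths match or the smaller is within the given fraction of the larger."""
    smaller, larger = sorted((bandwidth_a_gbps, bandwidth_b_gbps))
    return smaller >= tolerance * larger

# Storage node example: two 5 Gbit/s storage units against one 10 Gbit/s network interface component.
print(is_proportioned(2 * 5.0, 10.0))   # True, an exact match
print(is_proportioned(2 * 5.0, 7.0))    # False, the network interface bandwidth falls short
```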
By proportioning network interface component bandwidth to node component bandwidth, the system ensures that the network interface components do not introduce transmission delays and that data and requests are communicated to and from the nodes at the full bandwidth of the other node components. Returning to the example email service, this means that inboxes and other data stored in storage units of storage nodes can be written to and read from at the full bandwidth of the storage units. The result is an email service distributed among many devices, with storage and computation remote from one another, that performs as well as if each computation node only performed read and write operations against its own local storage.
Example Node Configurations
FIGS. 1a-1d illustrate block diagrams showing example configurations of nodes, in accordance with various embodiments. As illustrated, a storage node 102 and a computation node 104 are connected to one another via a switch 106. While only one storage node 102, one computation node 104, and one switch 106 are shown in FIGS. 1a-1d, the system may include any number of storage nodes 102, computation nodes 104, and switches 106.
Each storage node 102 includes one or more storage units 108 and one or more network interface components 110, as well as a processor 112 for processing read and write requests for the storage units 108 that are received via the network interface components 110. Each computation node includes an I/O bus 114 and one or more network interface components 116, as well as a processor 118 and logic 120. The logic 120 sends read and write requests for the storage node 102 via the processor 118 and I/O bus 114 to the network interface components 116 for transmission to the storage node 102.
The bandwidth of the storage units 108 and network interface components 110 are proportioned to one another, and the bandwidth of the I/O bus 114 and network interface components 116 are proportioned to one another.
FIG. 1b shows a storage node 102 with one storage unit 108 having a proportioned bandwidth 126a and multiple network interface components 110 having a collective proportioned bandwidth 126b, the proportioned bandwidths 126a and 126b being proportioned to one another.
FIG. 1c shows a storage node 102 with multiple storage units 108 having a collective proportioned bandwidth 130a and multiple network interface components 110 having a collective proportioned bandwidth 130b, the proportioned bandwidths 130a and 130b being proportioned to one another.
FIG. 1d shows a storage node 102 with multiple storage units 108 having a collective proportioned bandwidth 132a and one network interface component 110 having a proportioned bandwidth 132b, the proportioned bandwidths 132a and 132b being proportioned to one another.
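One rough way to model these node configurations is sketched below in Python; the class names, field names, and ninety-percent tolerance are hypothetical, but the check mirrors the proportioning of storage-unit or I/O-bus bandwidth to collective network interface bandwidth.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StorageNode:
    storage_unit_gbps: List[float]   # bandwidth of each storage unit 108
    nic_gbps: List[float]            # bandwidth of each network interface component 110

    def is_bandwidth_proportioned(self, tolerance: float = 0.90) -> bool:
        smaller, larger = sorted((sum(self.storage_unit_gbps), sum(self.nic_gbps)))
        return smaller >= tolerance * larger

@dataclass
class ComputationNode:
    io_bus_gbps: float               # bandwidth of the I/O bus 114
    nic_gbps: List[float]            # bandwidth of each network interface component 116

    def is_bandwidth_proportioned(self, tolerance: float = 0.90) -> bool:
        smaller, larger = sorted((self.io_bus_gbps, sum(self.nic_gbps)))
        return smaller >= tolerance * larger

# One 10 Gbit/s storage unit matched by two 5 Gbit/s network interface components,
# and an 8 Gbit/s I/O bus matched by two 4 Gbit/s network interface components.
print(StorageNode([10.0], [5.0, 5.0]).is_bandwidth_proportioned())   # True
print(ComputationNode(8.0, [4.0, 4.0]).is_bandwidth_proportioned())  # True
```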
In various embodiments, the storage node 102 is any computing device, such as a personal computer (PC), a laptop computer, a workstation, a server system, a mainframe, or any other computing device. In one embodiment, the storage node 102 is a virtual machine located on a computing device along with other nodes or systems. The storage node 102 is a special-purpose machine configured to store data and to receive and process requests for the data. To achieve that special purpose, the storage node 102 may be configured with relatively few components, such as the storage units 108, network interface components 110, and processor 112. In some embodiments, however, the storage node 102 may also include additional components, such as the additional components of the example computer system 400 illustrated in FIG. 4.
The storage units 108 are any storage components and may include at least one of a disk drive, a permanent storage drive, random access memory, an electrically erasable programmable read-only memory, a Flash memory, a miniature hard drive, a memory card, a compact disc (CD), a digital versatile disk (DVD), an optical storage drive, a magnetic cassette, a magnetic tape, or a magnetic disk storage. The memory of each storage unit 108 may store “tracts” of data, which have the same predetermined size, such as one megabyte, and represent the smallest unit of data that can be read from or written to a storage unit without giving up performance due to the lost opportunity of reading more data “for free” after a seek. The memory of each storage unit 108 may also include a table storing identifiers of the tracts stored on that storage unit 108 and the locations where those tracts are stored. The storage and use of tracts is illustrated in FIG. 3 and described further below.
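A minimal sketch of such a per-storage-unit tract table follows. The one-megabyte tract size comes from the description above; the identifier format, the append-only offset allocation, and the class name are illustrative assumptions.

```python
from typing import Dict

TRACT_SIZE_BYTES = 1 * 1024 * 1024   # each tract is the same predetermined size, e.g. one megabyte

class TractTable:
    """Maps tract identifiers to the byte offsets at which they are stored on one storage unit."""

    def __init__(self) -> None:
        self._locations: Dict[str, int] = {}
        self._next_free_offset = 0

    def record_write(self, tract_id: str) -> int:
        """Allocate space for a tract and remember where it lives."""
        offset = self._next_free_offset
        self._locations[tract_id] = offset
        self._next_free_offset += TRACT_SIZE_BYTES
        return offset

    def locate(self, tract_id: str) -> int:
        """Return the stored offset of a tract, raising KeyError if the tract is absent."""
        return self._locations[tract_id]

table = TractTable()
table.record_write("inbox-42:tract-0")
print(table.locate("inbox-42:tract-0"))   # 0, the first tract starts at offset zero
```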
In various embodiments, the network interface components 110 are any sort of network interface components and may include at least one of a network interface card, a device for communicating information to another computer, a modem, or an optical interface. Each network interface component 110 is capable of enabling a connection with a switch 106 to transmit data to and from the storage node 102.
In various embodiments, as mentioned above, the storage node 102 includes a processor 112 in addition to the storage units 108 and network interface components 110. The processor 112 may be any sort of processor, such as one of the processors manufactured by Intel®, Advanced Micro Devices (AMD®), or Motorola®. The processor 112 also includes memory, such as cache memory, utilized in processing the requests and responses of the storage node 102. Because the requests and responses are often small in size relative to the speed and capabilities of the processor 112, they do not pose the sort of bottleneck that bandwidth often does.
In addition, the storage node 102 may comprise logic or an embedded circuit for handling received requests and providing responses. Such logic could include memory management processes, threads, or routines executed by the processor 112.
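As an illustration only, the sketch below shows the kind of request-handling logic such a processor might execute, keeping tracts in a dictionary that stands in for the storage units 108; the request and response shapes are assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Request:
    kind: str                      # "read" or "write"
    tract_id: str
    payload: Optional[bytes] = None

@dataclass
class Response:
    ok: bool
    payload: Optional[bytes] = None

class StorageNodeLogic:
    """Handles read and write requests against an in-memory stand-in for the storage units."""

    def __init__(self) -> None:
        self._tracts: Dict[str, bytes] = {}

    def handle(self, request: Request) -> Response:
        if request.kind == "write":
            self._tracts[request.tract_id] = request.payload or b""
            return Response(ok=True)
        if request.kind == "read":
            data = self._tracts.get(request.tract_id)
            return Response(ok=data is not None, payload=data)
        return Response(ok=False)   # unknown request kind

logic = StorageNodeLogic()
logic.handle(Request("write", "inbox-42:tract-0", b"mail data"))
print(logic.handle(Request("read", "inbox-42:tract-0")).payload)   # b'mail data'
```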
In various embodiments, the computation node 104 shown in FIGS. 1a-1d is likewise any sort of computing device, such as the computing devices described above with regard to the storage node 102.
The I/O bus 114 is any sort of I/O bus connecting components of the computation node 104, such as the network interface components 116, the processor 118, and memory, such as system memory or permanent storage storing the logic 120. The I/O bus 114 has a transmission bandwidth, shown as proportioned bandwidth 124a in FIG. 1a.
In various embodiments, the network interface components 116 are any sort of network interface components and may include at least one of a network interface card, a modem, or an optical interface. Each network interface component 116 is capable of enabling a connection with a switch 106 to transmit requests and responses to and from the computation node 104.
In various embodiments, as mentioned above, the computation node 104 includes a processor 118 in addition to the I/O bus 114 and network interface components 116. The processor 118 may be any sort of processor, such as one of the processors manufactured by Intel®, Advanced Micro Devices (AMD®), or Motorola®. The processor 118 also includes memory, such as cache memory, utilized in forming and sending the requests and in processing the responses received by the computation node 104. Because the requests and responses are often small in size relative to the speed and capabilities of the processor 118, they do not pose the sort of bottleneck that bandwidth often does.
In various embodiments, the storage node 102 and the computation node 104 are connected by one or more switches 106. The switches 106 may be any sort of switches. The switches 106 also each include network interface components, such as incoming and outgoing network interface components, each network interface component having a bandwidth. For example, a switch 106 may have a number of incoming Ethernet ports and an incoming wireless port, as well as outgoing Ethernet and wireless ports. In some embodiments, the incoming bandwidth of a switch 106 is proportioned to the outgoing bandwidth of the switch 106. For instance, the collective incoming bandwidth of the network interfaces that serve devices (“below” the switch in the network hierarchy) may be ten gigabits per second, and the collective bandwidth of the network interface components up to core switches may also be ten gigabits per second. By proportioning the incoming and outgoing bandwidths of the switch 106, the system avoids introduction of bottlenecks associated with the switch 106. Such switches with proportioned bandwidths are described in further detail in U.S. patent application Ser. No. 12/410,697, which is entitled “Data Center Without Structural Bottlenecks” and was filed on Mar. 25, 2009, in U.S. patent application Ser. No. 12/410,745, which is entitled “Data Center Interconnect and Traffic Engineering” and was filed on Mar. 25, 2009, and in U.S. patent application Ser. No. 12/578,608, which is entitled “Agile Data Center Network Architecture” and was filed on Oct. 14, 2009.
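The sketch below illustrates the same proportioning idea applied across a two-level switch hierarchy; the data layout and ninety-percent tolerance are assumptions, and the check simply asks whether each top-of-rack switch, and then the core layer, carries as much bandwidth upward as is offered from below.

```python
from typing import Dict, List

def proportioned(a_gbps: float, b_gbps: float, tolerance: float = 0.90) -> bool:
    smaller, larger = sorted((a_gbps, b_gbps))
    return smaller >= tolerance * larger

def hierarchy_is_proportioned(racks: List[Dict[str, float]], core_gbps: float) -> bool:
    """Each rack entry holds one switch's collective node-facing and core-facing bandwidth."""
    for rack in racks:
        if not proportioned(rack["down_gbps"], rack["up_gbps"]):
            return False                                  # bottleneck inside a top-of-rack switch
    total_uplink = sum(rack["up_gbps"] for rack in racks)
    return proportioned(total_uplink, core_gbps)          # bottleneck at the core layer?

racks = [{"down_gbps": 10.0, "up_gbps": 10.0} for _ in range(4)]
print(hierarchy_is_proportioned(racks, core_gbps=40.0))   # True, full bandwidth is preserved end to end
```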
In some embodiments, the storage node 102 and computation node 104 may be connected by multiple switches 106, the multiple switches 106 being connected to each other. Such embodiments are illustrated in FIGS. 2a-2b and described further below.
The result of provisioning storage nodes 102 and computation nodes 104 with proportioned bandwidth, as shown in FIGS. 1a-1d, is that the computation nodes 104 are able to read from and write to the storage nodes 102 with performance substantially equivalent to that of reads and writes to local storage.
In one embodiment, the storage node 102 and computation node 104 are each provisioned with network interface components 110/116 having greater collective bandwidth than the other node components. By provisioning greater network interface component bandwidth, the storage node 102 and computation node 104 are enabled to operate at the full bandwidths of the other node components and still offer additional network interface component bandwidth for use in sending and receiving data.
Example System Architectures
FIGS. 2a-2b illustrate block diagrams showing example system architectures, in accordance with various embodiments.
Example Software Architecture
As is also shown, each computation node 104 includes a client 306, the clients 306 formulating and transmitting read and write requests 308 to the servers 302 and receiving and processing responses 310. In some embodiments, the write request 308 is one of an atomic append or a random write. The choice of whether to perform the write request 308 as an atomic append or as a random write is determined by whether the byte sequence being written to has been opened in an atomic append mode or in a random write mode. The byte sequence may be opened by a client 306 on its own or by a group of clients 306 in coordination with one another.
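A sketch of the mode-dependent write choice is given below. Only the rule itself, append when the byte sequence was opened for atomic append and write at a caller-supplied offset otherwise, comes from the description; the class, its methods, and the in-memory byte sequence are hypothetical.

```python
from typing import Optional

class ByteSequenceClient:
    """Writes to a byte sequence according to the mode in which the sequence was opened."""

    def __init__(self, mode: str) -> None:
        if mode not in ("atomic_append", "random_write"):
            raise ValueError("mode must be 'atomic_append' or 'random_write'")
        self.mode = mode
        self.data = bytearray()

    def write(self, payload: bytes, offset: Optional[int] = None) -> None:
        if self.mode == "atomic_append":
            self.data.extend(payload)        # the client never picks an offset; data lands at the end
        else:
            if offset is None:
                raise ValueError("random-write mode requires an explicit offset")
            end = offset + len(payload)
            if end > len(self.data):
                self.data.extend(b"\x00" * (end - len(self.data)))
            self.data[offset:end] = payload  # overwrite in place at the requested offset

client = ByteSequenceClient("atomic_append")
client.write(b"first record")
client.write(b"second record")
print(len(client.data))   # 25
```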
In some embodiments, the clients 306 identify which servers 302 to provide the requests 308 to based on a table 312. The table 312 may include mappings between tracts 304, or groups of tracts 304, and servers 302, and may ensure that the tracts making up a byte sequence are uniformly distributed across a plurality of servers 302. The servers 302 may likewise utilize the table 312 to determine which tracts 304 they should store.
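The description says only that the table 312 maps tracts, or groups of tracts, to servers and that a byte sequence's tracts end up uniformly distributed; hashing the byte-sequence identifier together with the tract number into the table, as sketched below, is one illustrative way to achieve that, and every name here is hypothetical.

```python
import hashlib
from typing import List

def tract_key(byte_sequence_id: str, tract_number: int) -> int:
    """Derive a stable integer key for a tract from its byte sequence and position."""
    digest = hashlib.sha256(f"{byte_sequence_id}:{tract_number}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def server_for_tract(table: List[str], byte_sequence_id: str, tract_number: int) -> str:
    """Clients and servers consult the same table, so both agree on which server owns a tract."""
    return table[tract_key(byte_sequence_id, tract_number) % len(table)]

table = ["server-a", "server-b", "server-c", "server-d"]
for n in range(4):
    print(n, server_for_tract(table, "inbox-42", n))
```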
In various embodiments, the table 312 is provided to the clients 306 and servers 302 by a metadata server 314. The metadata server 314 may be implemented on an independent node that is neither a storage node 102 nor a computation node 104, or may be implemented on one of the storage nodes 102 or the computation nodes 104. In some embodiments, the metadata server 314 generates the table 312 in response to the addition or failure of a storage unit 108.
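As a sketch only, table regeneration might look like the following; the trigger, a storage unit being added or failing, comes from the description, while the fixed table length and the round-robin fill are assumptions.

```python
from typing import List

def generate_table(servers: List[str], table_length: int = 8) -> List[str]:
    """Build a fixed-length table that spreads entries across the currently available servers."""
    return [servers[i % len(servers)] for i in range(table_length)]

servers = ["server-a", "server-b", "server-c"]
print(generate_table(servers))      # initial table distributed over three servers

servers.remove("server-b")          # a storage unit fails, taking its server out of rotation
print(generate_table(servers))      # the metadata server hands out a regenerated table
```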
In an example implementation, a client 306 receives a request associated with a byte sequence comprised of multiple tracts 304. The client 306 then utilizes the table 312 to identify the multiple servers 302 storing the multiple tracts 304 of the byte sequence. Next, the client 306 formulates and sends requests 308 to the servers 302. Because the bandwidth of the network interface components 116 of the computation node 104 including the client 306 has been proportioned to the bandwidth of the I/O bus 114, the requests 308 are transmitted without encountering any bottlenecks at the network interface components 116 of the computation node 104. The servers 302 then receive and process the requests 308 and formulate and send responses 310 to the requests 308. Because the bandwidth of the network interface components 110 of the storage nodes 102 including the servers 302 has been proportioned to the bandwidth of the storage units 108, the requests 308 and responses 310 are processed without any bottlenecks being introduced by the network interface components 110 of the storage nodes 102.
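Tying the pieces together, the compact sketch below shows a hypothetical client reading a byte sequence by looking up the server for each tract and fetching the tracts concurrently, which is what lets it draw on the bandwidth of several storage nodes at once; the placement rule, the in-memory stand-in for the servers, and all names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, Tuple

TABLE = ["server-a", "server-b", "server-c", "server-d"]

# Stand-in for data already stored on the storage nodes: (server, tract number) -> tract contents.
STORED: Dict[Tuple[str, int], bytes] = {
    (f"server-{s}", n): f"tract-{n}-data".encode() for s in "abcd" for n in range(8)
}

def server_for(tract_number: int) -> str:
    return TABLE[tract_number % len(TABLE)]             # illustrative placement rule

def read_tract(tract_number: int) -> bytes:
    return STORED[(server_for(tract_number), tract_number)]

def read_byte_sequence(num_tracts: int) -> bytes:
    with ThreadPoolExecutor(max_workers=len(TABLE)) as pool:
        return b"".join(pool.map(read_tract, range(num_tracts)))

print(read_byte_sequence(4))   # b'tract-0-datatract-1-datatract-2-datatract-3-data'
```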
Example Computer System
Computer system 400 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 4 by removable storage 412 and non-removable storage 414.
In various embodiments, any or all of the system memory 404, removable storage 412, and non-removable storage 414 may store programming instructions which, when executed, implement some or all of the above-described operations of the storage node 102 or computation node 104. When the computer system 400 is a computation node 104, the programming instructions may include the logic 120.
Computer system 400 may also have input device(s) 416 such as a keyboard, a mouse, a touch-sensitive display, voice input device, etc. Output device(s) 418 such as a display, speakers, a printer, etc. may also be included. These devices are well known in the art and need not be discussed at length here.
Computer system 400 may also contain communication connections 420 that allow the device to communicate with other computing devices 422. The communication connections 420 are implemented at least partially by network interface components, such as the network interface components 110 and 116 shown in FIGS. 1a-1d.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.
This U.S. patent application is a continuation-in-part patent application of co-pending prior application Ser. No. 12/763,107, entitled “Locator Table and Client Library for Datacenters,” and of co-pending prior application Ser. No. 12/763,133, entitled “Memory Management and Recovery for Datacenters.” Both co-pending prior applications were filed on Apr. 19, 2010. U.S. application Ser. Nos. 12/763,107 and 12/763,133 are hereby incorporated by reference in their entirety herein.
Number | Name | Date | Kind |
---|---|---|---|
4491945 | Turner | Jan 1985 | A |
4780870 | McHarg et al. | Oct 1988 | A |
5305320 | Andrews et al. | Apr 1994 | A |
5423046 | Nunnelley et al. | Jun 1995 | A |
5553285 | Krakauer et al. | Sep 1996 | A |
5663951 | Danneels et al. | Sep 1997 | A |
5914878 | Yamamoto et al. | Jun 1999 | A |
5938732 | Lim et al. | Aug 1999 | A |
6424979 | Livingston et al. | Jul 2002 | B1 |
6577613 | Ramanathan | Jun 2003 | B1 |
6850489 | Omi et al. | Feb 2005 | B1 |
6871295 | Ulrich et al. | Mar 2005 | B2 |
7076555 | Orman et al. | Jul 2006 | B1 |
7115919 | Kodama | Oct 2006 | B2 |
7180875 | Neumiller et al. | Feb 2007 | B1 |
7184958 | Kagoshima et al. | Feb 2007 | B2 |
7231475 | Singla et al. | Jun 2007 | B1 |
7240358 | Horn et al. | Jul 2007 | B2 |
7342876 | Bellur et al. | Mar 2008 | B2 |
7383288 | Miloushev et al. | Jun 2008 | B2 |
7433332 | Golden et al. | Oct 2008 | B2 |
7437407 | Vahalia et al. | Oct 2008 | B2 |
7577817 | Karpoff et al. | Aug 2009 | B2 |
7610348 | Kisley et al. | Oct 2009 | B2 |
7657581 | Orenstein et al. | Feb 2010 | B2 |
7725437 | Kirshenbaum et al. | May 2010 | B2 |
7756826 | Bots et al. | Jul 2010 | B2 |
7769843 | Neuse et al. | Aug 2010 | B2 |
7774469 | Massa et al. | Aug 2010 | B2 |
7801994 | Kudo | Sep 2010 | B2 |
7805580 | Hirzel et al. | Sep 2010 | B2 |
8010829 | Chatterjee et al. | Aug 2011 | B1 |
8074107 | Sivasubramanian et al. | Dec 2011 | B2 |
8160063 | Maltz et al. | Apr 2012 | B2 |
8181061 | Nightingale et al. | May 2012 | B2 |
8234518 | Hansen | Jul 2012 | B2 |
8261033 | Slik et al. | Sep 2012 | B1 |
20020152293 | Hahn et al. | Oct 2002 | A1 |
20040153479 | Mikesell et al. | Aug 2004 | A1 |
20050075911 | Craven | Apr 2005 | A1 |
20050078655 | Tiller et al. | Apr 2005 | A1 |
20050094640 | Howe | May 2005 | A1 |
20050262097 | Sim-Tang et al. | Nov 2005 | A1 |
20060004759 | Borthakur et al. | Jan 2006 | A1 |
20060015495 | Keating et al. | Jan 2006 | A1 |
20060074946 | Pham | Apr 2006 | A1 |
20060098572 | Zhang et al. | May 2006 | A1 |
20060129614 | Kim et al. | Jun 2006 | A1 |
20060280168 | Ozaki | Dec 2006 | A1 |
20070025381 | Feng et al. | Feb 2007 | A1 |
20070156842 | Vermeulen et al. | Jul 2007 | A1 |
20080005275 | Overton et al. | Jan 2008 | A1 |
20080010400 | Moon | Jan 2008 | A1 |
20080098392 | Wipfel et al. | Apr 2008 | A1 |
20090006888 | Bernhard et al. | Jan 2009 | A1 |
20090106269 | Zuckerman et al. | Apr 2009 | A1 |
20090112921 | Oliveira et al. | Apr 2009 | A1 |
20090113323 | Zhao et al. | Apr 2009 | A1 |
20090183002 | Rohrer et al. | Jul 2009 | A1 |
20090204405 | Kato et al. | Aug 2009 | A1 |
20090259665 | Howe et al. | Oct 2009 | A1 |
20090265218 | Amini et al. | Oct 2009 | A1 |
20090268611 | Persson et al. | Oct 2009 | A1 |
20090300407 | Kamath et al. | Dec 2009 | A1 |
20090307329 | Olston et al. | Dec 2009 | A1 |
20100008230 | Khandekar et al. | Jan 2010 | A1 |
20100008347 | Qin et al. | Jan 2010 | A1 |
20100094955 | Zuckerman et al. | Apr 2010 | A1 |
20100094956 | Zuckerman et al. | Apr 2010 | A1 |
20100161657 | Cha et al. | Jun 2010 | A1 |
20100198888 | Blomstedt et al. | Aug 2010 | A1 |
20100198972 | Umbehocker | Aug 2010 | A1 |
20100250746 | Murase | Sep 2010 | A1 |
20100332818 | Prahlad et al. | Dec 2010 | A1 |
20110022574 | Hansen | Jan 2011 | A1 |
20110153835 | Rimac et al. | Jun 2011 | A1 |
20110246471 | Rakib | Oct 2011 | A1 |
20110246735 | Bryant et al. | Oct 2011 | A1 |
20110258290 | Nightingale et al. | Oct 2011 | A1 |
20110258297 | Nightingale et al. | Oct 2011 | A1 |
20110258482 | Nightingale et al. | Oct 2011 | A1 |
20110258488 | Nightingale et al. | Oct 2011 | A1 |
20110296025 | Lieblich et al. | Dec 2011 | A1 |
20110307886 | Thanga et al. | Dec 2011 | A1 |
20120041976 | Annapragada | Feb 2012 | A1 |
20120042162 | Anglin et al. | Feb 2012 | A1 |
20120047239 | Donahue et al. | Feb 2012 | A1 |
20120054556 | Grube et al. | Mar 2012 | A1 |
Number | Date | Country |
---|---|---|
WO2010108368 | Sep 2010 | WO |
Entry |
---|
Akturk, “Asynchronous Replication of Metadata Across Multi-Master Servers in Distributed Data Storage Systems”, A Thesis Submitted to Louisiana State University and Agricultural and Mechanical College, Dec. 2009, 70 pages. |
Bafna et al, “CHIRAYU: A Highly Available Metadata Server for Object Based Storage Cluster File System,” retrieved from <<http://abhinaykampasi.tripod.com/TechDocs/ChirayuPaper.pdf>>, IEEE Bombay Section, Year 2003 Prof K Shankar Student Paper & Project Contest, Apr. 2003, 6 pgs. |
Buddhikot et al, “Design of a Large Scale Multimedia Storage Server,” Journal Computer Networks and ISDN Systems, vol. 27, Issue 3, Dec. 1994, pp. 1-18. |
Chen et al, “Replication-Based Highly Available Metadata Management for Cluster File Systems,” 2010 IEEE International Conference on Cluster Computing, Sep. 2010, pp. 292-301. |
Fan et al, “A Failure Recovery Mechanism for Distributed Metadata Servers in DCFS2,” Seventh International Conference on High Performance Computing and Grid in Asia Pacific Region, Jul. 20-22, 2004, 7 pgs. |
Fu, et al., “A Novel Dynamic Metadata Management Scheme for Large Distributed Storage Systems”, Proceedings of the 2008 10th IEEE International Conference on High Performance Computing and Communications, Sep. 2008, pp. 987-992. |
Fullmer et al, “Solutions to Hidden Terminal Problems in Wireless Networks,” Proceedings of the ACM SIGCOMM '97 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, Cannes, France, Oct. 1997, pp. 39-49. |
Lang, “Parallel Virtual File System, Version 2”, retrieved on Nov. 12, 2010 from <<http://www.pvfs.org/cvs/pvfs-2-7-branch.build/doc/pvfs2-guide/pvfs2-guide.php>>, Sep. 2003, 39 pages. |
Sinnamohideen et al, “A Transparently-Scalable Metadata Service for the Ursa Minor Storage System,” USENIXATC'10 Proceedings of the 2010 USENIX Conference, Jun. 2010, 14 pgs. |
Weil et al, “CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data,” Proceedings of SC '06, Nov. 2006, 12 pgs. |
Weiser, “Some Computer Science Issues in Ubiquitous Computing,” retrieved at <<https://www.cs.ucsb.edu/˜ravenben/papers/coreos/Wei93.pdf>>, Mar. 1993, 14 pgs. |
U.S. Appl. No. 12/410,697, “Data Center Without Structural Bottlenecks,” Maltz et al, filed Mar. 25, 2009. |
U.S. Appl. No. 12/410,745, “Data Center Interconnect and Traffic Engineering,” Maltz et al, filed Mar. 25, 2009. |
U.S. Appl. No. 12/578,608, “Agile Data Center Network Architecture,” Greenberg et al, filed Oct. 14, 2009. |
“Citrix Storage Delivery Services Adapter for NetApp Data ONTAP”, retrieved on Mar. 9, 2010 at <<http://citrix.com/site/resources/dynamic/partnerDocs/datasheet—adapter.pdf>>, Citrix Systems, Citrix Storage Delivery Services Data sheet, 2008, 2 pgs. |
“EMC RecoverPoint Family: Cost-effective local and remote data protection and disaster recovery solution”, retrieved on Mar. 9, 2010 at <<http://www.emc.com/collateral/software/data-sheet/h2769-emc-recoverpoint-family.pdf>>, EMC Corporation, Data Sheet H2769.8, 2010, 3 pgs. |
Mohamed et al, “Extensible Communication Architecture for Grid Nodes,” abstract retrieved on Apr. 23, 2010 at <<http://www.computer.org/portal/web/csdl/doi/10.1109/itcc.2004.1286587>>, International Conference on Information Technology: Coding and Computing (ITCC'04), vol. 2, Apr. 5-7, 2004, Las Vegas, NV, 1 pg. |
Office Action for U.S. Appl. No. 13/412,944, mailed on Oct. 11, 2012, Nightingale et al., “Reading and Writing During Cluster Growth Phase”, 10 pages. |
Office Action for U.S. Appl. No. 12/763,107, mailed on Jul. 20, 2012, Nightingale et al., “Locator Table and Client Library for Datacenters”, 11 pages. |
PCT Search Report and Written Opinion mailed Oct. 23, 2012 for PCT Application No. PCT/US2012/035700, 10 pages. |
Isard, et al., “Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks”, In Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, Mar. 21, 2007, 14 pages. |
Kennedy, “Is Parallel Computing Dead”, retrieved on Oct. 2, 2012, at http://www.crpc.rice.edu/newsletters/oct94/director.html., Parallel Computing Newsletter, vol. 2, Issue 4, Oct. 1994, 2 pages. |
Office Action for U.S. Appl. No. 13/017,193, mailed on Dec. 3, 2012, Nightingale et al., “Parallel Serialization of Request Processing”, 19 pages. |
Office Action for U.S. Appl. No. 13/112,978, mailed on Dec. 14, 2012, Elson et al., “Data Layout for Recovery and Durability”, 13 pages. |
Office Action for U.S. Appl. No. 13/116,270, mailed on Feb. 15, 2013, Nightingale et al., “Server Failure Recovery”, 16 pages. |
Rhea et al., “Maintenance-Free Global Data Storage”, IEEE Internet Computing, Sep.-Oct. 2001, pp. 40-49. |
Number | Date | Country | |
---|---|---|---|
20110258290 A1 | Oct 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12763107 | Apr 2010 | US |
Child | 12766726 | US | |
Parent | 12763133 | Apr 2010 | US |
Child | 12763107 | US |