This invention is generally related to data storage, and more particularly to networked storage systems.
Referring to
The operations performed within the storage platform to support data maintenance, protection and IOs are hidden from other devices. From the perspective of a host device 104, such as a server, an IO appears to be a local operation because, e.g., a SCSI command that would be used to access local storage is encapsulated, sent to the storage platform, de-encapsulated, and processed. The host is unaware, for example, of where the data is stored by the storage platform, or of how the data is protected. However, the handling of data by the storage platform can affect performance of the host. In the example of an Internet transaction, a user 106 initiates the transaction by communicating with the host device 104. The host device operates instances of applications created to support particular types of transactions. Operation of the applications may require access to data maintained by the storage platform 100. Consequently, IO operations take place between the host device and the storage platform in support of the application which is operated to support the transaction initiated by the user. If these IO operations include retrieval of data from a relatively slow storage tier, latency increases. Furthermore, some latency can be expected from the network itself.
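The encapsulation and de-encapsulation described above can be sketched as follows. This is a minimal illustration only: the envelope layout and function names are assumptions for the sketch, not an actual transport protocol, though the SCSI READ(10) command descriptor block itself follows the standard 10-byte format.

```python
import struct

def encapsulate_scsi_read(lba: int, blocks: int) -> bytes:
    """Build a SCSI READ(10) CDB and wrap it in a minimal network
    envelope, as a host initiator might before sending the IO to the
    storage platform. The envelope (magic + length) is illustrative."""
    # READ(10): opcode 0x28, flags, 4-byte LBA, group, 2-byte length, control
    cdb = struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)
    return struct.pack(">4sI", b"ENCP", len(cdb)) + cdb

def de_encapsulate(frame: bytes) -> bytes:
    """Storage-platform side: strip the envelope to recover the CDB."""
    magic, length = struct.unpack(">4sI", frame[:8])
    assert magic == b"ENCP"
    return frame[8:8 + length]

frame = encapsulate_scsi_read(lba=2048, blocks=8)
cdb = de_encapsulate(frame)
assert cdb[0] == 0x28  # the READ(10) opcode survives the round trip
```

The point of the sketch is that the host issues the same command it would issue to local storage; only the wrapping and unwrapping at each end make the operation a networked one.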
In accordance with an embodiment of the present invention, an apparatus comprises: a storage grid including a channel director which retrieves data from a cache in response to a storage protocol request, and a disk director that copies requested data from data storage to the cache; at least one computation node that runs an application; and an interface that enables communication between the storage grid and the computation node for data access in support of the application.
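A minimal model of the read path through such an apparatus might look like the following sketch. The class and method names are assumptions made for illustration; the behavior modeled is only the division of labor stated above: the channel director serves from cache, and the disk director stages data from storage into the cache.

```python
class StorageGrid:
    """Toy model of the storage grid: a channel director retrieves data
    from a cache in response to a request, and a disk director copies
    requested data from data storage to the cache on a miss."""

    def __init__(self, backing_store: dict):
        self.backing_store = backing_store  # stands in for the LUNs/disks
        self.cache = {}

    def disk_director_stage(self, address):
        # Copy the requested data from data storage to the cache.
        self.cache[address] = self.backing_store[address]

    def channel_director_read(self, address):
        # Serve from cache; on a miss, have the disk director stage first.
        if address not in self.cache:
            self.disk_director_stage(address)
        return self.cache[address]

grid = StorageGrid({0x10: b"app-data"})
data = grid.channel_director_read(0x10)  # miss: staged, then served
```

After the first read, the data remains in the cache, so subsequent reads of the same address are served without involving the disk director.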
In accordance with another embodiment of the present invention, a method comprises: in a storage platform including a computation node and a storage grid including a channel director which retrieves data from a cache in response to a storage protocol request, and a disk director that copies requested data from data storage to the cache, running an application on the computation node, maintaining data by the storage grid, and exchanging data access communications between the storage grid and the computation node for data access in support of running the application.
An advantage associated with at least one aspect of the invention is that the latency associated with IO operations performed over a network is mitigated. In a typical prior art system, a host device runs an application and a separate storage platform maintains data. IO operations in support of the application are performed over a network which interconnects the host with the storage platform. Integrating computation resources with the storage array helps to avoid the latency attributable to network operations. Furthermore, the networked storage platform may even obviate the need for servers and other host devices in certain situations. Even greater performance improvement may be obtained if the computation nodes communicate directly with one or more of the cache, the disk director and the LUNs using a direct data placement protocol.
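The structural difference between the two paths can be expressed with a toy latency model. The numbers below are arbitrary assumptions chosen only to make the comparison concrete; the substantive point is that the integrated path omits the network round trip entirely.

```python
# Assumed costs in microseconds; values are illustrative only.
NETWORK_RTT_US = 500   # round trip over the host/storage interconnect
CACHE_READ_US = 10     # access to the storage grid's cache

def networked_io_latency(n_ios: int) -> int:
    # Prior-art path: each IO traverses the network to the storage
    # platform and back, plus the cache access itself.
    return n_ios * (NETWORK_RTT_US + CACHE_READ_US)

def integrated_io_latency(n_ios: int) -> int:
    # Computation node co-located with the storage grid: each IO is
    # only the cache access, with no network traversal.
    return n_ios * CACHE_READ_US

savings = networked_io_latency(100) - integrated_io_latency(100)
```

Under these assumed figures, the per-IO network cost dominates, which is why eliminating the network hop, rather than merely speeding it up, is the benefit emphasized above.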
These and other advantages of the invention will be more apparent from the detailed description and the drawing.
Various aspects of the invention may be implemented partially or completely using computer program code. The computer program code is stored on non-transitory computer-readable memory and utilized by processing hardware. The program code may be provided as a computer program product or be integrated into network equipment.
As shown in
Referring now to
While the invention is described through the above exemplary embodiments, it will be understood by those of ordinary skill in the art that modification to and variation of the illustrated embodiments may be made without departing from the inventive concepts herein disclosed. Moreover, while the embodiments are described in connection with various illustrative structures, one skilled in the art will recognize that the system may be embodied using a variety of specific structures. Accordingly, the invention should not be viewed as limited except by the scope and spirit of the appended claims.