The present disclosure relates to wide area network file systems and file caching over distributed networks.
While workers can easily share gigabytes of project data on a local-area network (LAN) using standard file-server technology, such is not the case with workers in remote offices connected over wide-area networks (WANs). With respect to file sharing over WANs, standard file server protocols provide unacceptably slow response times when opening and writing files.
All major file-sharing protocols were designed for LAN environments where clients and servers are located in the same building or campus, including: NFS (Network File System, used for Unix/Linux environments), CIFS (Common Internet File System used for Windows environments), and IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange, used for Novell environments). The assumption that the client and the server would be in close proximity led to a number of design decisions that do not scale across WANs. For example, these file sharing protocols tend to be rather “chatty”, insofar as they send many remote procedure calls (RPCs) across the network to perform operations.
For certain operations on a file system using the NFS protocol (such as an rsync of a source code tree), almost 80% of the RPCs sent across the network can be access RPCs, while the actual read and write RPCs typically comprise only 8-10% of the RPCs. Thus, nearly 80% of the work done by the protocol is simply spent trying to determine whether the NFS client has the proper permissions to access a particular file on the NFS server, rather than actually moving data. In a LAN environment, these RPCs do not degrade performance significantly given the low latency and usual abundance of bandwidth, but they do in WANs because of the high latency of WAN links. Furthermore, because data movement RPCs make up such a small percentage of the communications, increasing network bandwidth will not alleviate the performance problem in WANs.
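The dominance of round-trip latency over bandwidth can be made concrete with a back-of-the-envelope model. The sketch below is illustrative only; the RPC count, round-trip times, and link speeds are assumed numbers chosen to mirror the rsync scenario above, not measurements:

```python
# Illustrative model: total time = one round trip per RPC + payload serialization.
def transfer_time(num_rpcs, rtt_s, payload_bytes, bandwidth_bps):
    return num_rpcs * rtt_s + (payload_bytes * 8) / bandwidth_bps

# Hypothetical workload: 1,000 RPCs to sync a tree, 1 MB of actual file data.
lan      = transfer_time(1000, 0.0005, 1_000_000, 1e9)   # LAN:  ~0.51 s
wan      = transfer_time(1000, 0.080,  1_000_000, 10e6)  # WAN:  ~80.8 s
wan_fast = transfer_time(1000, 0.080,  1_000_000, 1e9)   # WAN, 100x bandwidth: ~80.0 s
```

Under these assumed numbers, even a hundred-fold increase in WAN bandwidth leaves the total almost unchanged, because the per-RPC round trips, not the data movement, dominate.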
Therefore, systems called wide area file services (WAFS) have been developed, which combine distributed file systems with caching technology to allow real-time, read-write access to shared file storage from any location, including locations connected across WANs, while also providing interoperability with standard file sharing protocols such as NFS and CIFS.
WAFS systems typically include edge file gateway (EFG) appliances (or servers), which are placed at multiple remote offices, and one or more file server appliances, at a central office or remote data center relative to the EFG appliance, that allow storage resources to be accessed by the EFG appliances. Each EFG appliance appears as a local fileserver to office users at the respective remote offices. Together, the EFG appliances and file server appliance implement a distributed file system and communicate using a WAN-optimized protocol. This protocol is translated back and forth to NFS and CIFS at either end, to communicate with the user applications and the remote storage.
The WAN-optimized protocol typically may include file-aware differencing technology, data compression, streaming, and other technologies designed to enhance performance and efficiency in moving data across the WAN. File-aware differencing technology detects which parts of a file have changed and only moves those parts across the WAN. Furthermore, if pieces of a file have been rearranged, only offset information will be sent, rather than the data itself.
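A minimal sketch of block-level differencing in this spirit appears below. It is an assumption-laden illustration: the block size, the use of SHA-1, and the ('copy'/'data') command tuples are invented for the example, and matching is block-aligned only (production differencing engines such as rsync add rolling checksums to detect data shifted by arbitrary offsets):

```python
import hashlib

BLOCK = 4096  # assumed block size

def block_index(old: bytes) -> dict:
    # Checksum of every aligned block of the old file -> its offset.
    return {hashlib.sha1(old[i:i + BLOCK]).digest(): i
            for i in range(0, len(old), BLOCK)}

def diff(old: bytes, new: bytes):
    # Yield ('copy', old_offset) for blocks the receiver already holds, and
    # ('data', literal_bytes) only for blocks that changed, so that unchanged
    # and merely rearranged content never crosses the WAN.
    index = block_index(old)
    for i in range(0, len(new), BLOCK):
        block = new[i:i + BLOCK]
        off = index.get(hashlib.sha1(block).digest())
        yield ('copy', off) if off is not None else ('data', block)
```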
In WAFS systems, performance during “read” operations is usually governed by the ability of the EFG appliance to cache files and the ability to serve cached data to users while minimizing the overhead of expensive kernel-user communication and context switches, in effect enabling the cache to act just like a high-performance file server. Typically, the cache attempts to mirror the remote data center, so that “read” requests will be satisfied from the local cache with only a few WAN round trips required to check credentials and availability of file updates.
In WAFS systems, “write” operations should maintain data coherency, i.e., file updates (“writes”) from any one office should not conflict with updates from another office. To achieve data coherency, some WAFS systems use file leases. Leases define particular access privileges to a file from a remote office. If a user at an office wants to write to a cached file, the EFG appliance at that office must obtain a “write lease”, i.e., the right to modify the file, before it can do so. The WAFS system ensures that, at any time, only one EFG appliance holds the write lease on a particular file. Also, when a user at another office tries to open the file, the EFG appliance that holds the write lease flushes its data first and can optionally give up the write lease if there are no active writers to the file. In some WAFS systems, a streaming transfer is initiated when a cold or stale file is opened for reads or writes. While the file is being fetched, read requests received at an edge cache are served by passing the request through to the file server. The edge cache, however, blocks write requests until the portion of the file being written has been fetched. Applications that write in a non-sequential manner may face timeout errors due to this blocking.
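The single-writer lease rule described above can be pictured with a small sketch. The class below is a schematic illustration only; the structure and names are hypothetical and not part of any disclosed implementation:

```python
class LeaseManager:
    # Tracks which EFG appliance, if any, holds the write lease on each file,
    # ensuring at most one writer per file across all offices.
    def __init__(self):
        self.write_leases = {}  # file path -> id of the EFG holding the lease

    def acquire_write(self, path, efg_id):
        holder = self.write_leases.get(path)
        if holder is None or holder == efg_id:
            self.write_leases[path] = efg_id
            return True
        return False  # another office holds the lease; it must flush first

    def release(self, path, efg_id):
        if self.write_leases.get(path) == efg_id:
            del self.write_leases[path]
```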
In particular embodiments, the present invention provides methods, apparatuses, and systems directed to write command processing in distributed file caching systems. Implementations of the invention allow for write operations to identified files to proceed, while information regarding the identified file is fetched from a remote host and a locally cached version of the file is constructed. Implementations of the present invention can be configured to improve the performance of wide area network file systems, while preserving file consistency.
Example embodiments are illustrated in referenced figures of the drawings. It is intended that the embodiments and figures disclosed herein are to be considered illustrative rather than limiting.
The following example embodiments are described and illustrated in conjunction with apparatuses, methods, and systems which are meant to be examples and illustrative, not limiting in scope.
As will be apparent from the description below, embodiments of the present invention allow for write command processing with reduced latency and improved performance in a distributed file caching system, such as a wide area network file system.
A. Network Environment
As discussed in the background above, WAFS systems often include one or more EFG appliances 102 (or servers) and one or more remote file server appliances 36 (or servers), typically at a different location, that allow storage resources to be accessed by the EFG appliances 102 on behalf of workstations 42A.
In the embodiment of
Storage caching protocol system 12 in the illustrative network 10 shown in
A communications gateway 26A, 26B, 26C couples the Ethernet 24 of each of the systems 16 to a communications network 28. The network 28, for example, can be a WAN, LAN, the Internet or any like means for providing data communications links between geographically disparate locations. The gateway 26, for example, may implement a VPN Internet connection with remote gateways. The gateway 26 enables data, such as data files accessible in accordance with a distributed file system such as NFS or CIFS, to be transferred between a workstation and a remotely located file server. Furthermore, the functions of gateway 26 may be physically hosted on the same computing device as the storage cache and cache servers.
Referring again to
The cache manager 50 controls routing of data files, file update data, and data file leasing information to and from the cache server 36. The translator 52 stores copies of accessed data files at the storage 56 as a cached data file, makes the cached data file available for reading or writing purposes to an associated workstation that requested access to a data file corresponding to the cached data file, and updates the cached data file based on data file modifications entered by the workstation or update data supplied from the cache server. In addition, the translator 52 can generate a checksum representative of a first data file and determine the difference between another data file and the first data file based on the checksum using techniques that are well known. The leasing module 54, through interactions with the cache server 36, determines whether to grant a request for access to a data file from an associated workstation, where the access request requires that the cached data file be made available to the associated workstation either for read or write purposes. Typically, a storage cache is associated with every remote computer system that can access a data file stored at a file server of a data center system over the network 28.
Referring to
The translator 62, like the translator 52, can generate a checksum representative of a first data file and determine the difference between another data file and the first data file using the checksum. In addition, the leasing module 64, through interactions with the storage caches included in the system 12, determines whether a request for access to a data file from a workstation associated with a specific storage cache should be granted or denied.
It is to be understood that each of the modules of each of the storage caches 30 and the cache server 36, which perform data processing operations, constitutes a software module or, alternatively, a hardware module or a combined hardware/software module. In addition, each of the modules suitably contains a memory storage area, such as RAM, for storage of data and instructions for performing processing operations in accordance with the present invention. Alternatively, instructions for performing processing operations can be stored in hardware in one or more of the modules. Further, it is to be understood that, in some embodiments, the modules within each of the cache server 36 and the storage caches 30 can be combined, as suitable, into composite modules, and that the cache server and storage caches can be combined into a single appliance which can provide both caching for a workstation and real time updating of the data files stored at a file server of a central data center computer system.
The storage caches and the cache server of the storage caching system 12 provide that a data file stored in a file server of a data center, and available for distribution to authorized workstations via a distributed file system, can be accessed for read or write purposes by the workstations, that the workstations experience reduced latency when accessing the file, and that the cached data file supplied to a workstation in response to an access request corresponds to a real-time version of the data file. A storage cache of the system 12 stores in the storage 56 only a current version of the cached data file corresponding to the data file that was the subject of an access request, where the single cached data file incorporates all of the data file modifications entered by a workstation associated with the storage cache while the file was accessed by the workstation.
In a connected mode, file update data associated with the cached data file is automatically, and preferably at predetermined intervals, generated and then transmitted (flushed) to the cache server. Most preferably, the file update data is flushed with sufficient frequency to provide that a real time, updated version of the data file is stored at the file server and can be used by the cache server to respond to an access request from another storage cache or a workstation not associated with a storage cache. In some implementations, the local storage 56 of the storage cache includes only cached data files corresponding to recently accessed data files.
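One way such a periodic flush could be arranged is sketched below; the interval and the cache methods (modified_files, compute_delta, send_update) are hypothetical placeholders, not a disclosed interface:

```python
import threading

def start_flush_loop(cache, interval_s=30.0):
    # Every interval_s seconds, push update data (deltas) for modified cached
    # files to the cache server so the data-center copy stays current.
    def flush():
        for path in cache.modified_files():
            cache.send_update(path, cache.compute_delta(path))
        start_flush_loop(cache, interval_s)  # re-arm the timer
    timer = threading.Timer(interval_s, flush)
    timer.daemon = True
    timer.start()
    return timer
```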
B. System Architecture for EFG Appliance (or Server) and CS (Remote) Appliance (or Server)
In one embodiment, hardware system 200 comprises a processor 202, a cache memory 204, and one or more software applications and drivers directed to the functions described herein. Additionally, hardware system 200 includes a high performance input/output (I/O) bus 206 and a standard I/O bus 208. A host bridge 210 couples processor 202 to high performance I/O bus 206, whereas I/O bus bridge 212 couples the two buses 206 and 208 to each other. A system memory 214 and one or more network/communication interfaces 216 couple to bus 206. Hardware system 200 may further include video memory (not shown) and a display device coupled to the video memory. Mass storage 218 and I/O ports 220 couple to bus 208. In some, but not all, embodiments, hardware system 200 may also include a keyboard and pointing device 222 and a display 224 coupled to bus 208. Collectively, these elements are intended to represent a broad category of computer hardware systems, including but not limited to general purpose computer systems based on the x86-compatible processors manufactured by Intel Corporation of Santa Clara, Calif., and the x86-compatible processors manufactured by Advanced Micro Devices (AMD), Inc., of Sunnyvale, Calif., as well as any other suitable processor.
The elements of hardware system 200 are described in greater detail below. In particular, network interface 216 provides communication between hardware system 200 and any of a wide range of networks, such as an Ethernet (e.g., IEEE 802.3) network, etc. Mass storage 218 provides permanent storage for the data and programming instructions to perform the above described functions, whereas system memory 214 (e.g., DRAM) provides temporary storage for the data and programming instructions when executed by processor 202. I/O ports 220 are one or more serial and/or parallel communication ports that provide communication between additional peripheral devices, which may be coupled to hardware system 200.
Hardware system 200 may include a variety of system architectures; and various components of hardware system 200 may be rearranged. For example, cache 204 may be on-chip with processor 202. Alternatively, cache 204 and processor 202 may be packaged together as a “processor module,” with processor 202 being referred to as the “processor core.” Furthermore, certain embodiments of the present invention may neither require nor include all of the above components. For example, the peripheral devices shown coupled to standard I/O bus 208 may couple to high performance I/O bus 206. In addition, in some embodiments only a single bus may exist, with the components of hardware system 200 being coupled to the single bus. Furthermore, hardware system 200 may include additional components, such as additional processors, storage devices, or memories.
In particular embodiments, the processes described herein may be implemented as a series of software routines run by hardware system 200. These software routines comprise a plurality or series of instructions to be executed by a processor in a hardware system, such as processor 202. Initially, the series of instructions are stored on a storage device, such as mass storage 218. However, the series of instructions can be stored on any suitable storage medium, such as a diskette, CD-ROM, ROM, EEPROM, etc. Furthermore, the series of instructions need not be stored locally, and could be received from a remote storage device, such as a server on a network, via network/communication interface 216. The instructions are copied from the storage device, such as mass storage 218, into memory 214 and then accessed and executed by processor 202.
An operating system manages and controls the operation of hardware system 200, including the input and output of data to and from software applications (not shown). The operating system provides an interface between the software applications being executed on the system and the hardware components of the system. According to one embodiment of the present invention, the operating system is the Windows® Server 2003 (or other variant) operating system available from Microsoft Corporation of Redmond, Wash. However, the present invention may be used with other suitable operating systems, such as the Windows® 95/98/NT/XP/Vista operating system, available from Microsoft Corporation of Redmond, Wash., the Linux operating system, the Apple Macintosh Operating System, available from Apple Computer Inc. of Cupertino, Calif., UNIX operating systems, and the like.
C. Enhanced Write Command Processing
Typically, when a workstation 22A, hosting an application, accesses a file stored on file server 38, an open command identifying the file (e.g., path and file name) is transmitted. Responsive to the open command, the EFG storage cache 26A fetches a current copy of the file from the remote file server 38. In a particular implementation, the EFG storage cache 26A receives the open command (802), as
The remote cache server 36, responsive to the fetch command, retrieves a copy of the identified file from file server 38. As described above, leases for the file may also be obtained. If the fetch command includes checksums, the cache server 36 computes checksums for the retrieved file at block boundaries and compares its computed checksums to the checksums received from the EFG storage cache 26A. Based on these comparisons, the cache server 36 generates a set of commands that the EFG storage cache 26A can use to construct a current version of the file. The set of commands generated by the cache server 36 can include commands that instruct the EFG storage cache 26A to copy an identified block (if the checksums computed by both ends match), as well as data and commands instructing that the data be inserted at identified offsets in the constructed file. In a particular implementation, the set of commands is generated (or at least processed by the EFG storage cache 26A) in a sequential or streaming manner, in that the current cached copy of a file is constructed starting at the beginning of the file and proceeding to the end. In a particular implementation, the cache server 36 transmits the set of commands in one or more command packets, where the commands are sequentially ordered based on file offsets. As discussed below, during a fetch process, the EFG storage cache 26A sequentially reconstructs the file as the command packets are received.
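The client side of this streaming reconstruction might look like the sketch below. The command tuples and cache methods are assumptions (the actual command-packet encoding is not specified here), and the handling of regions covered by the write history, described further below, is omitted:

```python
def apply_command_packet(cache, commands):
    # Apply file-construction commands in offset order, advancing
    # LastFetchedOffset as each region of the new local copy is completed.
    for cmd in commands:
        if cmd[0] == 'copy':                   # checksums matched on both ends:
            _, offset = cmd                    # reuse the locally cached block
            data = cache.read_stale_block(offset)
        else:
            _, offset, data = cmd              # ('data', offset, bytes)
        cache.write_local(offset, data)
        cache.last_fetched_offset = offset + len(data)
```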
Of course, while the fetch and reconstruction process is being executed relative to a file, an application that initially opened the file may transmit one or more commands, such as read commands and write commands. As to read commands, the EFG storage cache 26A may satisfy the command with a cached version of the file, if the fetch process has reached the data segment identified in the read command, or pass the read command on to the file server 38. Both read and write commands identify an offset (a byte location in a file) and a length. Write commands also include the data to be written starting at the offset.
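Read handling during a fetch thus reduces to a progress check, as in the sketch below (the method names are illustrative assumptions):

```python
def handle_read(cache, offset, length):
    # Serve the read locally once the fetch has covered the requested range;
    # otherwise pass the read through to the origin file server.
    if offset + length <= cache.last_fetched_offset:
        return cache.read_local(offset, length)
    return cache.passthrough_read(offset, length)
```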
As to write commands, if a fetch of the file is currently in progress (604), the EFG storage cache 26A, rather than holding the write command until the fetch completes, passes the write command through to the cache server 36, which passes the command through to the target file server 38 for execution (606). In addition, the EFG storage cache 26A executes the write command on the cached version of the file, and returns an acknowledgement to the requesting application (608). Notably, the cached version of the file is not marked as dirty, since the write command is passed through to the target file server 38. In this manner, an acknowledgement can be transmitted to the requesting application with lower latency (e.g., without having to wait for an acknowledgement from the target file server 38 or for the fetch process to complete), thereby improving performance and reducing the possibility of timeouts or other errors.
The EFG storage cache 26A also selectively stores the write command information in a write history associated with the cached version of the file, depending on the progress of the fetch process. In a particular implementation, the EFG storage cache 26A computes the ending location (End_of_Write) in the file associated with the write command by adding the length to the offset identified in the write command (610). As discussed above, as the fetch process executes for the file, the file is sequentially reconstructed. A LastFetchedOffset variable is incremented as the file re-construction commands generated by the cache server 36 are processed by the EFG storage cache 26A, to indicate the progress of the fetch. The EFG storage cache 26A compares the ending location of the write command to the offset that the fetch process has reached (LastFetchedOffset) (612). If the fetch process has not reached the ending location (End_of_Write) of the write command, the EFG storage cache 26A stores the offset and length of the write command in a write history associated with a cache record for the file (614). This stored write command information is used when command packets including file construction commands are received at the EFG storage cache 26A, as described below. In one implementation, write commands stored in the write history that correspond to overlapping or directly adjacent data segments can be collapsed into one write command entry. For example, a first write command (offset=8192 bytes, length=100 bytes) and a second write command (offset=8292, length=500 bytes) can be collapsed into a single entry (offset=8192, length=600 bytes). In addition, the write commands may be stored in order of increasing offset values, which facilitates fetch processing and determining overlaps between reconstruction commands and stored write commands (see below).
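The write path of steps 606-614, including the coalescing of overlapping or adjacent history entries, could be sketched as follows; the cache methods and the list-of-pairs representation are assumptions made for illustration:

```python
from bisect import insort

class WriteHistory:
    # Write segments captured while a fetch is in progress, kept in order of
    # increasing offset and coalesced when entries overlap or are adjacent.
    def __init__(self):
        self.entries = []  # [offset, length] pairs, ascending by offset

    def record(self, offset, length):
        insort(self.entries, [offset, length])
        merged = []
        for off, ln in self.entries:
            if merged and off <= merged[-1][0] + merged[-1][1]:
                end = max(merged[-1][0] + merged[-1][1], off + ln)
                merged[-1][1] = end - merged[-1][0]
            else:
                merged.append([off, ln])
        self.entries = merged

def handle_write(cache, offset, data):
    # Pass the write through, apply it locally, acknowledge at once, and
    # record it in the history if the fetch has not yet reached its end.
    cache.passthrough_write(offset, data)              # step 606
    cache.write_local(offset, data)                    # step 608
    cache.ack_requestor()
    end_of_write = offset + len(data)                  # step 610
    if end_of_write > cache.last_fetched_offset:       # step 612
        cache.write_history.record(offset, len(data))  # step 614

# The coalescing example from the text:
h = WriteHistory()
h.record(8192, 100)
h.record(8292, 500)
assert h.entries == [[8192, 600]]
```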
As
Still further, in a particular implementation, the EFG storage cache 26A may also selectively delete one or more entries in the write history associated with the cached version of the file. In a particular implementation, if one or more captured write commands in the write history (relative to segment location, i.e., offset and length) lie entirely within the fetched region indicated by the LastFetchedOffset value (714), the EFG storage cache 26A deletes the one or more identified entries from the write history (716). The remaining write command entries, if any, can be flushed when the fetch process is completed. As one will appreciate, the foregoing allows write commands issued by requesting applications to be processed and acknowledged to improve performance, while also maintaining file consistency between the cached copy of the file and the master copy on the data center file server.
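The pruning of step 716 is then a simple filter against the fetch progress, as in the illustrative sketch below (names again hypothetical):

```python
def prune_write_history(cache):
    # Drop history entries whose segments now lie entirely within the fetched
    # region; any entries that remain are dealt with when the fetch completes.
    cache.write_history.entries = [
        [off, ln] for off, ln in cache.write_history.entries
        if off + ln > cache.last_fetched_offset
    ]
```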
Particular embodiments of the above-described process might be comprised of instructions that are stored on storage media. The instructions might be retrieved and executed by a processing system. The instructions are operational when executed by the processing system to direct the processing system to operate in accord with the present invention. Some examples of instructions are software, program code, firmware, and microcode. Some examples of storage media are memory devices, tape, disks, integrated circuits, and servers. The term “processing system” refers to a single processing device or a group of inter-operational processing devices. Some examples of processing devices are integrated circuits and logic circuitry. Those skilled in the art are familiar with instructions, storage media, and processing systems.
Those skilled in the art will appreciate variations of the above-described embodiments that fall within the scope of the invention. In this regard, it will be appreciated that there are many possible orderings of the steps in the process described above and many possible modularizations of those orderings. Further, in embodiments where processing speed is not determinative, the process might run in the control plane rather than the data plane. As a result, the invention is not limited to the specific examples and illustrations discussed above, but only by the following claims and their equivalents.