Embodiments of the invention are defined by the claims below, not this summary. For that reason, a high-level overview of embodiments of the invention is provided here to give a general sense of the disclosure.
In a first aspect, a set of computer-useable instructions provides a method of allocating file server resources. A user initiates an operation, which causes the user's client computing device to communicate requests to a file server. The file server identifies the type of operation being initiated by monitoring the requests and allocates file server resources accordingly.
In a second aspect, a set of computer-useable instructions provides an exemplary method of allocating file cache buffer resources for uploading a file from a client computing device. An illustrative step includes receiving a preliminary input/output (I/O) request that indicates that the client is initiating a write operation. A file cache buffer is allocated and prepared for receiving data directly from the client. After the file cache buffer is allocated, a write request is received and the data is written directly into the file cache buffer that was prepared.
In another aspect, a set of computer-useable instructions provides an illustrative method of allocating read queue resources for downloading a file to a client computing device. An indication that a user associated with the client device is initiating a download operation is received. The file associated with the resource allocation is identified and a read queue is allocated and prepared such that read data can be received directly into the read queue.
In a fourth exemplary aspect, a set of computer-useable instructions provides an illustrative method for allocating a directory queue for receiving pre-fetched directory information such as metadata. An indication is received that a user is initiating a directory browse operation. The particular directory is identified and a portion of a directory queue is allocated and prepared for receiving metadata from the directory. A directory browse request is received and a portion of the directory is enumerated within the prepared directory queue.
Illustrative embodiments of the present invention are described in detail below with reference to the attached drawing figures, which are incorporated by reference herein and wherein:
Embodiments of the present invention provide systems and methods for allocating file server resources for predicted operations based on previously monitored network traffic.
Throughout the description of the present invention, several acronyms and shorthand notations are used to aid the understanding of certain concepts pertaining to the associated system and services. These acronyms and shorthand notations are intended to help provide an easy methodology of communicating the ideas expressed herein and are not meant to limit the scope of the present invention. The following is a list of these acronyms:
The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal digital assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The invention may be practiced in a variety of system configurations, including hand-held devices, consumer electronics, general-purpose computers, more specialty computing devices, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
Computer-readable media include both volatile and nonvolatile media, removable and nonremovable media, and contemplate media readable by a database, a server, and various other network devices. By way of example, and not limitation, computer-readable media comprise media implemented in any method or technology for storing information. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. Media examples include, but are not limited to, information-delivery media, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices. These technologies can store data momentarily, temporarily, or permanently.
An exemplary operating environment in which various aspects of the present invention may be implemented is described below in order to provide a general context for various aspects of the present invention. Referring initially to
Computing device 100 includes a bus 110 that directly or indirectly couples the following devices: memory 112, one or more processors 114, one or more presentation components 116, I/O ports 118, I/O components 120, and an illustrative power supply 122. Bus 110 represents what may be one or more busses (such as an address bus, data bus, or combination thereof). Although the various blocks of
Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, nonremovable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 100 includes one or more processors that read data from various entities such as memory 112 or I/O components 120. Presentation component(s) 116 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, keyboard, pen, voice input device, touch input device, touch-screen device, interactive display device, or a mouse.
Turning to
Clients 210 include computing devices such as, for example, the exemplary computing device 100 described above with reference to
File server 212 includes a server that provides storage for files, shared files and other data. As used herein, files can include data files, documents, pictures, images, databases, movies, audio files, video files, and the like. File server 212 can also manage access permissions and rights to stored files. In an embodiment, file server 212 is a dedicated file server. In another embodiment, file server 212 is a non-dedicated file server, and in further embodiments, file server 212 can be integrated with a client 210 or other computing device. File server 212 can include an internet file server, particularly where network 215 is the internet or other wide area network (WAN). In some embodiments, where network 215 is a local area network (LAN), file server 212 can be accessed using File Transfer Protocol (FTP). In other embodiments, file server 212 can be accessed using other protocols such as, for example, Hyper Text Transfer Protocol (HTTP) or Server Message Block (SMB) protocol. In a further embodiment, file server 212 can include a distributed file system such as the Distributed File System (DFS) technologies available from Microsoft Corporation of Redmond, Wash.
As further illustrated in
Each of these elements of the networking environment 200 is also scalable. That is, for example, file server 212 can actually include a number of file servers, operating in parallel with a load balancer such that large amounts of traffic may be managed. In some embodiments, file server 212 includes other servers that provide various types of services and functionality. File server 212 can be implemented using any number of server modules, devices, machines, and the like. In some embodiments, there is only one client 210, whereas in other embodiments, there are several clients 210. In a further embodiment, there are a large number of clients 210. Nothing illustrated in
Turning now to
According to embodiments of the present invention, storage component 320 includes a file system that facilitates the maintenance and organization of stored files 322. Nothing in this description is intended to limit the type of file system utilized in embodiments of the present invention; however, examples of such file systems include the file allocation table (FAT) file system, the New Technology File System (NTFS), and the Distributed File System (DFS), each of which is available in various products from Microsoft Corporation of Redmond, Wash.
As illustrated in
With continued reference to
To perform operations such as these, clients communicate I/O requests that include syntax that file server 300 recognizes. File server 300 performs operations in response to the recognized syntax. Examples of I/O requests include write requests, read requests, disk allocation requests, and the like. I/O requests can also include preliminary communications that typically occur before write requests, read requests, and the like. For example, I/O requests may instruct file server 300 to allocate a particular amount of space on a disk for writing files. For example, I/O requests can include SetEndOfFile requests and SetFileAllocationInformation requests. In other examples, I/O requests can instruct file server 300 to read ahead files, data, or metadata into read queue 316 or directory queue 318 so that the files or data are available immediately when the client communicates a read request.
The syntax associated with I/O requests can be recognized by prediction module 310, which determines the type of operation a user is attempting to initiate by causing the client to communicate the I/O requests. For example, when a SetEndOfFile request is received from a client, a disk manager or file system manager may allocate a portion of disk memory for storing the file. In addition, the SetEndOfFile request can also be received by prediction module 310, which recognizes, based on the type of request, that the client is initiating an upload sequence. Other information provided simultaneously with or subsequent to the SetEndOfFile request can be used by the prediction module 310 to identify the file and/or other data that will be operated on during the operation.
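The recognition logic described above can be sketched as a simple mapping from preliminary request types to predicted operations. Only the SetEndOfFile and SetFileAllocationInformation request names appear in this disclosure; the mapping structure and function below are illustrative assumptions rather than a definitive implementation.

```python
# Illustrative sketch of a prediction module. The mapping from request
# syntax to a predicted operation is an assumption for demonstration;
# only SetEndOfFile and SetFileAllocationInformation are named above.
PRELIMINARY_REQUEST_OPERATIONS = {
    "SetEndOfFile": "upload",
    "SetFileAllocationInformation": "upload",
}

def predict_operation(request_type):
    """Return the operation a client is likely initiating, or None if
    the request syntax is not recognized as a preliminary request."""
    return PRELIMINARY_REQUEST_OPERATIONS.get(request_type)
```

A table lookup of this kind lets the prediction step run on every incoming request without measurable overhead.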
As illustrated in
In an embodiment, allocation manager 312 allocates file cache buffers 314 in response to receiving preliminary I/O requests corresponding to upload operations. While cache managers may allocate disk space or file cache buffers 314 (e.g., virtual disk space, a memory descriptor list (MDL), etc.) incident to receiving a write request, allocation manager 312 allocates file cache buffers in response to receiving I/O requests that are preliminary to a write request. Preliminary requests can include, for example, SetEndOfFile requests and SetFileAllocationInformation requests. Once the write request is received, the data can be written directly into the allocated file cache buffer 314. This functionality optimizes data buffering: data is received into a buffer 314 in one step and then lazily written to disk 320 in a next step, rather than waiting for the write request to be received, storing the data in an intermediate buffer while a file cache buffer 314 is allocated, and later copying the data into that buffer. Consequently, the amount of time that file server 300 is engaged in copying data between buffers is reduced, which improves responsiveness and throughput.
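The direct-write behavior just described might look like the following minimal sketch, in which a preliminary request triggers buffer allocation so that a later write lands in the cache buffers with no intermediate copy. The class and method names are hypothetical, not taken from the disclosure.

```python
class AllocationManager:
    """Hypothetical sketch: file cache buffers are allocated on a
    preliminary request, before any write request arrives."""

    def __init__(self, buffer_size=4096):
        self.buffer_size = buffer_size
        self.buffers = {}  # file_id -> list of pre-allocated bytearrays

    def on_preliminary_request(self, file_id, end_of_file):
        # Allocate enough fixed-size buffers to hold the announced size.
        count = max(1, -(-end_of_file // self.buffer_size))  # ceiling division
        self.buffers[file_id] = [bytearray(self.buffer_size)
                                 for _ in range(count)]

    def on_write_request(self, file_id, offset, data):
        # Write directly into the pre-allocated buffers: no intermediate
        # receive buffer and no later copy step.
        for i, byte in enumerate(data):
            pos = offset + i
            buf = self.buffers[file_id][pos // self.buffer_size]
            buf[pos % self.buffer_size] = byte
```

Because the buffers already exist when the write request arrives, the server's only per-write work is placing bytes at their final location.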
According to another embodiment of the present invention, allocation manager 312 allocates read queue 316 incident to receiving I/O requests corresponding to download operations from a client. Read queue 316 can include an asynchronous work item queue, a buffer, a cache, virtual memory, or some other portion of memory (e.g., RAM) in which data can be maintained in preparation for a read request. Although read queues may be populated with data by cache managers, disk managers, and the like, this typically only occurs in response to an actual read request or a pattern of read requests. In an embodiment of the present invention, allocation manager 312 allocates read queue 316 in response to preliminary I/O requests that are received prior to receiving read requests. Accordingly, when a read request is received in the present invention, the requested data can be read directly into read queue 316, which has already been prepared by allocation manager 312.
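A minimal sketch of this read path, under assumed names, prepares the queue when the preliminary request predicts a download and then reads file data straight into it once the read request arrives:

```python
from collections import deque

class ReadQueueManager:
    """Hypothetical sketch of allocating a read queue before the read
    request so data can be read directly into it."""

    def __init__(self):
        self.queues = {}  # file_id -> prepared deque

    def prepare(self, file_id):
        # Called when a preliminary I/O request predicts a download.
        self.queues[file_id] = deque()

    def on_read_request(self, file_id, storage, chunk_size=4096):
        # The queue already exists, so chunks go straight into it.
        queue = self.queues[file_id]
        data = storage[file_id]
        for i in range(0, len(data), chunk_size):
            queue.append(data[i:i + chunk_size])
        return queue
```

Serving the client then amounts to draining the prepared queue rather than waiting on storage after the read request is seen.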
In a further embodiment, allocation manager 312 allocates directory queue 318. Directory queue 318 can include an asynchronous work item queue, a buffer, a cache, virtual memory, or some other portion of memory (e.g., RAM) in which data and/or metadata can be maintained in preparation for enumerating a directory for browsing by a user. In an embodiment, allocation manager 312 allocates and prepares directory queue 318 in response to preliminary I/O requests received prior to receiving a directory browse request. Then, when a request is received for browsing a directory, allocation manager 312 can populate directory queue 318 with data or metadata from the file index 324 and the client can read the directory directly from directory queue 318.
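The enumeration step can be sketched as follows; the flat path-keyed index shape and the function name are simplifying assumptions, not the file index structure claimed here.

```python
def enumerate_directory(file_index, directory, directory_queue):
    """Hypothetical sketch: populate a prepared directory queue with
    metadata for entries under one directory. file_index maps each
    path to a metadata dict (an assumed, simplified index shape)."""
    prefix = directory.rstrip("/") + "/"
    for path, metadata in sorted(file_index.items()):
        if path.startswith(prefix):
            directory_queue.append({"name": path, **metadata})
    return directory_queue
```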
Although several specific embodiments of allocation manager 312 are described in detail above, the descriptions herein are not intended to limit the functions that allocation manager 312 can perform to only those described in detail herein. Allocation manager 312 can be configured to allocate any number of types of file server resources so that when operation requests are received, the operations can be performed without intermediate caching or buffering.
Turning now to
Allocation manager 414, incident to receiving the indication that an upload operation is being initiated, allocates, as shown at 430, a first file cache buffer 418, preparing the first file cache buffer 418 for receiving data. Additionally, if the file to be uploaded is larger than the capacity of the first file cache buffer 418, allocation manager 414 allocates, as shown at 432, a second file cache buffer 419. As indicated at 436 in
Depending on the size of the file to be uploaded, allocation manager 414 may allocate a third file cache buffer 420, as indicated at 434. In an embodiment, the third file cache buffer 420 is allocated before write data 422 is received. In another embodiment, the third file cache buffer 420 is allocated after the first file cache buffer is full. In other embodiments, the third file cache buffer 420 is allocated only when necessary, which may be determined by allocation manager 414 at any point in the process illustrated in
Turning now to
Prediction module 512 recognizes, based on preliminary I/O request 522, that the user is initiating a read operation with respect to a particular stored file 520. Prediction module 512 provides allocation manager 514 with an indication, as shown at 524, that the user is initiating a read operation with respect to the stored file 520. Allocation manager 514 can, in an embodiment, determine the size of the stored file 520 and allocate resources accordingly. As illustrated at 526 in
With reference to
As illustrated at 624, prediction module 612 provides information corresponding to that indication to allocation manager 614. Incident to receiving the indication that the user has initiated a directory browse operation, allocation manager 614 allocates, as shown at 626, a directory queue 616 and prepares the directory queue 616 for receiving an enumerated directory based on metadata included in a file index 620 maintained on a disk 618. Responsive to a directory browse request (not shown), directory output 630 is read into the directory queue 616, as indicated at 628. The directory output 630 can then be accessed directly, as shown at 632, by a client, which reads the directory output 630 from the directory queue 616.
To recapitulate, we have described systems and methods for allocating file server resources in response to predicting operations requested by users based on previous network traffic data (e.g., preliminary I/O requests). Turning now to
Additionally, according to one embodiment, the preliminary request includes a single request that is operable to initiate an operation. In another embodiment, the preliminary request includes a number of requests associated with initialization of an operation. In still a further embodiment, the request includes one of a number of requests associated with initialization of an operation. It should be understood that, although this communication is referred to as a request herein, the request can include, in various embodiments, a command, an instruction, or any other type of communication from a client computing device that corresponds to initiation of an operation on a file server.
At step 712, the operation is identified based on the preliminary request. In an embodiment, the operation comprises a data transfer that utilizes file server resources. Each of the exemplary operations described above can be characterized as a data transfer operation, as each one of the operations includes a transfer of file data or directory data between a client computing device and a file server. In other embodiments, further operations that involve data transfer between a client computing device and a file server can also be included within the ambit of the illustrative methods described herein, so long as the operation involves the use of file server resources. For example, the operation can include modifying a file, modifying an attribute associated with a file, returning query results, and the like.
With continued reference to
At step 716, a request for the operation is received from the client. In an embodiment, the request for the operation is a write request. In another embodiment, the request for the operation is a read request. In still a further embodiment, the request for the operation is a directory browse request. In a final illustrative step 718, the file server performs the operation. As indicated above, performing the operation includes, in various embodiments, causing data to be transferred to a client computing device. In other embodiments, performing the operation includes receiving data transferred from a client computing device. In still further embodiments, performing the operation can include manipulating data maintained on the file server, copying files maintained on the file server, providing content for display on a display device associated with the client computing device, or a number of other operations.
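The steps just described (receive a preliminary request, identify the operation, allocate a resource, receive the operation request, and perform the operation) can be sketched end to end. The handler table, request shapes, and function names below are assumptions for illustration only.

```python
def handle_session(preliminary_request, operation_request, handlers):
    """Hypothetical end-to-end sketch of the illustrated method:
    identify the operation from the preliminary request, allocate a
    resource for it, then perform the operation when its request
    arrives. handlers maps operation name -> (allocate, perform)."""
    operation = preliminary_request["type"]        # identify the operation
    allocate, perform = handlers[operation]
    resource = allocate(preliminary_request)       # allocate resources early
    return perform(operation_request, resource)    # perform on the request

def allocate_write(preliminary):
    # Pre-allocate a cache buffer sized from the preliminary request.
    return bytearray(preliminary["size"])

def perform_write(request, buffer):
    # Write the request data directly into the pre-allocated buffer.
    buffer[:len(request["data"])] = request["data"]
    return buffer
```

The same dispatch shape accommodates read and directory browse operations by registering additional (allocate, perform) pairs.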
Turning to
Turning to
At step 814, a destination is determined for the file. In an embodiment, determining a destination for the file includes allocating space on a disk. At step 816, which may happen simultaneously with, or very close in time to, step 814, a first file cache buffer is prepared. In an embodiment, the first file cache buffer is prepared by allocating a first portion of memory (e.g., RAM) associated with the first file cache buffer. In various embodiments of the invention, the first file cache buffer is prepared before a write request is received, and is therefore ready to receive data directly upon receipt of a write request. At step 818, a second file cache buffer is prepared.
In one embodiment, the second file cache buffer is prepared before any data is written into the first file cache buffer. In another embodiment, the second file cache buffer is prepared only after the first file cache buffer begins to fill up with data. In various embodiments, the second file cache buffer is prepared before the first file cache buffer is full, allowing a seamless transition from writing data into the first file cache buffer to writing data into the second file cache buffer. At step 820, a write request is received from the client and, incident to receiving the write request, data is written directly into the first file cache buffer, as shown at step 822. When the first file cache buffer is full, data is written into the second file cache buffer, as shown at a final illustrative step 824.
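The seamless transition between buffers can be sketched as a small spill routine; both buffers are assumed to have been pre-allocated, and the function name is hypothetical.

```python
def write_with_spill(data, first_buffer, second_buffer):
    """Hypothetical sketch of the seamless transition described above:
    write data directly into the first pre-allocated buffer, then
    spill the remainder into the second once the first is full."""
    n = min(len(data), len(first_buffer))
    first_buffer[:n] = data[:n]              # fill the first buffer
    rest = data[n:n + len(second_buffer)]
    second_buffer[:len(rest)] = rest         # continue in the second
    return n + len(rest)                     # total bytes written
```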
Because data is written into the first file cache buffer directly, an intermediate cache or buffer such as, for example, a receiving buffer, a network buffer, an output cache or an input cache, is not necessary. Accordingly, this process also does not require copying the data from an intermediate buffer into the file cache buffer.
Turning now to
As shown at step 844, a second determination is made whether the file cache buffer is available. If the file cache buffer is available, the data is copied from the intermediate buffer into the file cache buffer, as shown at step 846. If the file cache buffer is not available, a file cache buffer must first be allocated, as shown at step 848, before the data is copied into the file cache buffer at step 846.
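For contrast, the conventional path just described can be sketched as follows: data first lands in an intermediate buffer, a file cache buffer is allocated only if none is available, and the data is then copied across. The function and parameter names are assumptions.

```python
def receive_write_conventional(data, file_cache_buffer, allocate):
    """Hypothetical sketch of the intermediate-buffer path: data is
    held in an intermediate buffer, a file cache buffer is allocated
    if none is available, and the data is then copied across."""
    intermediate = bytearray(data)                # intermediate receive buffer
    if file_cache_buffer is None:                 # buffer not yet available
        file_cache_buffer = allocate(len(data))   # allocate one now
    file_cache_buffer[:len(data)] = intermediate  # the extra copy step
    return file_cache_buffer
```

Comparing this with the pre-allocated path makes the saved work concrete: the allocation and the copy both move off the critical path of the write request.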
Turning now to
Similarly, at step 916, a second portion of memory in the read queue is allocated. As shown at step 918, a read request is received from the client. Incident to receiving the read request, as shown at step 920, a first portion of the file is read into the first portion of memory such that the first portion of the file can be provided directly to the client device. At a final illustrative step 922, a second portion of the file is read into the second portion of memory after the first portion of memory is full. In some embodiments, all of the read data may fit within the first portion of memory. In that case, it would not be necessary to write into a second buffer.
With reference now to
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the spirit and scope of the present invention. Embodiments of the present invention have been described with the intent to be illustrative rather than restrictive. Alternative embodiments that do not depart from its scope will become apparent to those skilled in the art. A skilled artisan may develop alternative means of implementing the aforementioned improvements without departing from the scope of the present invention.
For example, in one embodiment, a file server may include several allocation managers, each configured to allocate one type of resource. In another embodiment, a file server may be implemented that includes a single allocation manager that is configured to allocate each type of resource available. In further embodiments, various combinations of file server resources may be handled by an allocation manager, while other combinations of resources may be handled by an additional allocation manager.
Further, in an embodiment, the amount of file cache memory that can be used in accordance with this invention can be limited to prevent denial of service attacks. In one embodiment, the memory is limited at a global level and in other embodiments, the memory can be limited on a per-connection or per-file level. In still a further embodiment, denial of service attacks can be prevented by releasing buffers that have not been written to within a specified amount of time.
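The safeguards described in this paragraph might be sketched as follows, combining a global memory cap with release of buffers never written to within a timeout; the class and its interface are hypothetical.

```python
class LimitedBufferPool:
    """Hypothetical sketch of the safeguards described above: a global
    cap on pre-allocated cache memory plus release of buffers that
    were never written to within a timeout."""

    def __init__(self, global_limit, timeout):
        self.global_limit = global_limit  # bytes allowed across all buffers
        self.timeout = timeout            # seconds before an unused buffer expires
        self.allocated = {}               # key -> (buffer, allocation_time)

    def used(self):
        return sum(len(buf) for buf, _ in self.allocated.values())

    def allocate(self, key, size, now):
        if self.used() + size > self.global_limit:
            return False  # refuse: the global cap would be exceeded
        self.allocated[key] = (bytearray(size), now)
        return True

    def release_stale(self, now):
        # Release buffers that sat unwritten past the timeout.
        for key, (_, created) in list(self.allocated.items()):
            if now - created > self.timeout:
                del self.allocated[key]
```

Per-connection or per-file limits would follow the same pattern, keyed on connection or file identifiers instead of a single global total.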
It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. Not all steps listed in the various figures need be carried out in the specific order described.