The present invention relates to client-server systems and, more particularly, to cache nodes in client-server systems.
Packet networks enable the exchange of data between remote computing systems. In end computing systems, data are organized, stored, and handled by computing applications as data files. Network elements transfer data files across the network using data blocks called packets. A network path is a sequence of network links between two remote network elements that supports the transfer of data files between those network elements. Many network applications rely on a client-server model, where one server end system (SES) operates as the source of data files for a plurality of client end system (CES) instances.
In typical network settings, the transfer of a data file from an SES to a CES can occur only at times when the following two conditions are simultaneously satisfied: (i) Network Path Integrity (NPI) condition, wherein a working network path exists between the SES and the CES, and (ii) Client End System Availability (CESA) condition, wherein the CES is powered on and capable of operation.
In many cases, the requirement that both conditions be simultaneously satisfied in order to conduct the data-file transfer may lead to inconvenient delays in the delivery of critical data and interfere with the normal operation of the CES, causing dissatisfaction to the CES user and posing security threats to the organization that uses the network application and relies on the quick distribution of data for its safe operation. Specifically, in the case of enterprise information technology (IT) applications, the severity of the security threat grows with the time that elapses between the arrival of a new data file at the SES and the completion of its transfer to the last CES.
Caching is a method broadly used in packet networks to stage the distribution of data files from a server end system to a plurality of client end systems. With caching, a data file is moved from the server to a cache node in the network in a single data-file transfer, independently of the number of clients. If the cache node is properly located along the network paths from the server to the many clients, temporarily storing the data file in the cache node can dramatically reduce the consumption of network resources needed to complete the distribution of the data file to all clients. Traditional caching works well in networks where violations of the NPI and CESA conditions are infrequent, but may be inadequate when deployed in networks where such violations are common. If the cache node does not reside in the portion of the network path between the CES and the network segment that most frequently violates the NPI condition, called the sporadic network segment (SNS), then the cache node cannot remove the requirement that both the NPI and the CESA conditions be satisfied at the same time in order for a data-file transfer to take place.
In view of the foregoing, an embodiment herein provides a method for expediting the transfer of data files between a Server End System (SES) and a plurality of Client End Systems (CES), the method comprising steps of the SES transferring one data file or a plurality of data files from the SES to a Server Cache Node (SCN) connected to the SES; sending a list of data files cached in the SCN to a plurality of Client Cache Nodes (CCN), wherein each of the plurality of CCNs is connected to a respective one of the plurality of CESs; at least one of the CCNs sending a request to the SCN for data files cached in the SCN, based on a comparison of the list of data files cached in the CCN with the list of data files cached in the SCN; the SCN sending the requested data files to the CCN that sent the request to the SCN; and the CCN transferring the received data files to the CES. The CCN may be connected to the SCN through a Sporadic Network Segment (SNS). The SCN and the plurality of CCNs remain in an operational state independently of the availability state of the SES, the CES, or the SNS. The method further comprises steps of each of the plurality of CCNs maintaining a list of data files cached in the SCN; and each of the plurality of CCNs requesting the latest version of the list from the SCN at pre-determined intervals of time. The SCN may be connected to the SES by one of a Universal Serial Bus (USB) connection; a Personal Computer Memory Card International Association (PCMCIA) connection; an ExpressCard connection; or a highly available network path. The CCN may be connected to the CES by one of a Universal Serial Bus (USB) connection; a Personal Computer Memory Card International Association (PCMCIA) connection; an ExpressCard connection; or a highly available network path. Each of the plurality of CCNs checks for availability of the CES to which the CCN is connected, before transferring the data files to the CES. Each of the plurality of CCNs further performs steps of storing the data files if the CES to which the CCN is connected is not available and delivering data files to the CES when the CES requests them.
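By way of illustration only, the following sketch shows one way the comparison step described above might be expressed; the function name and the example file names are hypothetical assumptions and are not part of the embodiments herein.

```python
# Hypothetical sketch of the list-comparison step: the CCN requests only
# the files that appear in the SCN's data file list but are not yet cached
# locally. Names and structures are illustrative assumptions.

def files_to_request(scn_dfl, ccn_dfl):
    """Return the file identifiers present in the SCN's list but absent
    from the CCN's local list, i.e. the files the CCN should request."""
    return sorted(set(scn_dfl) - set(ccn_dfl))

if __name__ == "__main__":
    scn_dfl = ["patch-001.bin", "patch-002.bin", "policy.cfg"]
    ccn_dfl = ["patch-001.bin"]
    print(files_to_request(scn_dfl, ccn_dfl))
    # -> ['patch-002.bin', 'policy.cfg']
```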
Embodiments further disclose a Server Cache Node (SCN) for expediting the transfer of data files between a Server End System (SES) and a plurality of Client End Systems (CES), the SCN comprising at least one means adapted for caching data files received from the SES, wherein the data files have to be sent to at least one of the CESs; sending a notification to a plurality of Client Cache Nodes (CCNs) when new data files are cached in the SCN, wherein the notification indicates the presence of new data files cached in the SCN; sending a list of data files cached in the SCN to the CCN on receiving a request for the list of data files from the CCN; and sending a data file to the CCN on receiving a request for the data file from the CCN. The SCN is adapted to maintain a list of data files cached in the SCN, wherein the list has details of the data files cached in the SCN and the list is updated when new data files are cached in the SCN. The SCN is adapted to retrieve a list of CCNs in a sleep state; and send a message to each of the CCN instances in a sleep state to bring them to an operational state. The SCN may cache the data files in a data storage means. The SCN may be connected to the SES by one of a Universal Serial Bus (USB) connection; a Personal Computer Memory Card International Association (PCMCIA) connection; an ExpressCard connection; or a highly available network path. The SCN may be connected to the plurality of CCNs through a Sporadic Network Segment (SNS). The SCN remains in an operational state independently of the availability state of the SES or the SNS.
Embodiments herein also disclose a Client Cache Node (CCN) for expediting the transfer of data files between a Server End System (SES) and a Client End System (CES), the CCN comprising at least one means adapted for sending, at pre-determined intervals of time, a request for a latest list of data files to a Server Cache Node (SCN); receiving the list of data files from the SCN; sending a request to the SCN for selected data files cached in the SCN, based on the list of data files; and transferring the data files to the CES. The CCN may be connected to the SCN through a Sporadic Network Segment (SNS). The CCN may remain in an operational state independently of the availability state of the CES or the SNS. The CCN is adapted to maintain a list of data files cached in the CCN, wherein the list has details of the data files cached in the CCN and the list is updated when new data files received from the SCN are cached in the CCN. The CCN sends the request to the SCN by comparing the list of data files cached in the SCN with the list of data files cached in the CCN. The CCN caches the data files in a data storage means. The CCN may enter a low-power sleep state when the CES has not been available for a first pre-determined period of time and the CCN has been idle for a second pre-determined period of time, and come back to operational state on receiving a message from the SCN or when the CES becomes available again. The CCN may be connected to the CES by one of a Universal Serial Bus (USB) connection; a Personal Computer Memory Card International Association (PCMCIA) connection; an ExpressCard connection; or a highly available network path. The CCN further stores the data files, and transfers the data files to the CES when the CES requests them.
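The sleep-state behaviour described for the CCN may be illustrated by the following minimal sketch; the class, the method names and the threshold values are hypothetical assumptions chosen only for clarity, not elements of the embodiments.

```python
import time

# Hypothetical sketch of the CCN sleep policy: enter a low-power sleep state
# when the CES has been unavailable for a first pre-determined period and the
# CCN has been idle for a second pre-determined period; wake on a message from
# the SCN or when the CES becomes available again. Thresholds are assumptions.

CES_UNAVAILABLE_THRESHOLD = 600.0   # seconds the CES may be unreachable
IDLE_THRESHOLD = 300.0              # seconds without CCN activity

class CcnSleepPolicy:
    def __init__(self):
        now = time.monotonic()
        self.last_ces_seen = now
        self.last_activity = now
        self.sleeping = False

    def note_ces_available(self):
        self.last_ces_seen = time.monotonic()
        self.sleeping = False          # CES came back: return to operation

    def note_activity(self):
        self.last_activity = time.monotonic()

    def note_scn_message(self):
        self.sleeping = False          # wake-up message (e.g. SMS) from the SCN

    def should_sleep(self):
        now = time.monotonic()
        return (not self.sleeping
                and now - self.last_ces_seen > CES_UNAVAILABLE_THRESHOLD
                and now - self.last_activity > IDLE_THRESHOLD)

    def enter_sleep(self):
        self.sleeping = True
```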
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings.
The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
FIGS. 7a and 7b are a flowchart depicting a method of operation of a client cache node, according to an embodiment herein;
FIGS. 8a, 8b and 8c are a flowchart depicting a method of operation of a server cache node, according to an embodiment herein; and
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The embodiments herein disclose a method and system for expediting the transfer of data files between a Server End System (SES) and a Client End System (CES) by proactively transferring data files between a Server Cache Node (SCN) and a Client Cache Node (CCN), wherein a highly available network path exists between the SES and the SCN, a highly available network path exists between the CCN and the CES, and a highly available network path may not exist between the SCN and the CCN. The CES then retrieves the transferred data files from the CCN. Referring now to the drawings and more particularly to
When a CES 101 requires a data file from the SES 105, the CES 101 sends a request for the data file to the CCN 102. The requested data file may be any data file which may be stored in a computing network. If the CCN 102 has the requested data file, the CCN 102 delivers the data file to the CES 101. If the CCN 102 does not have the requested data file, the CCN 102 checks if the SNS connecting the CCN 102 to the SCN 104 is available. If the SNS is available, the CCN 102 forwards the request to the SCN 104 through the SNS. If the data file is present in the SCN 104, the SCN 104 delivers the data file to the CCN 102. If the SCN 104 does not have the requested data file, the SCN 104 forwards the request to the SES 105. On receiving the request, the SES 105 sends the data file to the SCN 104. The SCN 104 then sends the data file to the CCN 102 through the SNS. On receiving the data file from the SCN 104, the CCN 102 checks if the CES 101 is available. The CES 101 may not be available as it may be in a sleep state, or a hibernation state, or in some other state where the CES 101 is not capable of communication. If the CES 101 is available, the CCN 102 transfers the data file to the CES 101. If the CES 101 is not available, the CCN 102 stores the data file, waiting for the next request from the CES 101. When the CCN 102 receives a request from the CES 101 for a data file that the CCN 102 has previously stored, the CCN 102 fetches the data file from the storage means and sends the data file to the CES 101. If the CES 101 has not been available for a pre-determined period of time and the CCN 102 has been idle for a pre-determined period of time, the CCN 102 may enter a low-power sleep state. If there are CCN 102 instances in sleep state, the SCN 104 retrieves the list of CCN 102 instances in the sleep state. When the SCN 104 receives data files from the SES 105 to be sent to a CES 101, the SCN 104 may send a message to the CCN 102 instances that are in sleep state to bring the CCNs 102 to the operational state. For example, the SCN 104 may send a Short Message Service (SMS) message to bring the CCN 102 to the operational state. The CCN 102 may also return to operational state when the CCN 102 receives a data packet from the SCN 104.
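By way of a non-limiting illustration, the request-forwarding path described above (CES to CCN, CCN to SCN over the SNS, SCN to SES) might be sketched as follows; all class and method names are hypothetical assumptions, and the availability flags stand in for the actual network and end-system state.

```python
# Illustrative sketch of the request path: the CES asks its CCN for a file; on
# a cache miss the CCN forwards the request over the SNS to the SCN, which in
# turn falls back to the SES. If the CES has become unavailable in the
# meantime, the CCN stores the file until the CES asks again. All names are
# hypothetical assumptions, not elements of the embodiments.

class Scn:
    def __init__(self, ses_files):
        self.cache = {}
        self.ses_files = ses_files            # stands in for the SES contents

    def get_file(self, name):
        if name not in self.cache:
            # SCN does not have the file: fetch it from the SES and cache it
            # (the sketch assumes the SES holds the requested file).
            self.cache[name] = self.ses_files[name]
        return self.cache[name]

class Ccn:
    def __init__(self, scn):
        self.scn = scn
        self.cache = {}
        self.pending_for_ces = {}             # stored while the CES is away
        self.sns_available = True
        self.ces_available = True

    def handle_ces_request(self, name):
        if name in self.cache:
            return self.cache[name]           # cache hit: deliver at once
        if not self.sns_available:
            return None                       # cannot reach the SCN right now
        data = self.scn.get_file(name)        # forward the request over the SNS
        self.cache[name] = data
        if self.ces_available:
            return data                       # deliver to the CES
        self.pending_for_ces[name] = data     # CES asleep: hold until it asks
        return None
```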
If there are instances of CCN 102 that may be in sleep state, the SCN 104 may retrieve the list of CCN 102 instances that are in the sleep state. When the SCN 104 receives new data files from the SES 105 to be sent to the CES 101 instances and a new version of the DFL has been created in the SCN 104, the SCN 104 sends messages to the CCN 102 instances that are in sleep state to bring them to the operational state.
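As an illustration only, the wake-up step might be sketched as follows; the registry structure and the send_wakeup_sms helper are hypothetical assumptions standing in for whatever out-of-band transport a deployment uses to reach a sleeping CCN.

```python
# Hypothetical sketch of the wake-up step: when a new version of the DFL is
# created, the SCN looks up the CCN instances recorded as sleeping and sends
# each one a wake-up message (for example an SMS). The registry and the
# send_wakeup_sms callable are illustrative assumptions.

def wake_sleeping_ccns(ccn_registry, send_wakeup_sms):
    """ccn_registry maps a CCN identifier to its state ('sleeping' or
    'operational'); send_wakeup_sms delivers the out-of-band wake-up."""
    for ccn_id, state in ccn_registry.items():
        if state == "sleeping":
            send_wakeup_sms(ccn_id)

if __name__ == "__main__":
    registry = {"ccn-a": "sleeping", "ccn-b": "operational"}
    wake_sleeping_ccns(registry, lambda ccn_id: print("wake", ccn_id))
```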
Only data files that are received from a selected set of sources are entered in the DFL 502. The SCN 104 maintains in an SCN DM policy 504 a list of supported sources such that data files that originate from them may have associated entries in the DFL 502. If any source not included in the SCN DM policy 504 sends data files to the SCN 104, those data files may be stored in the memory 403. However, no entry is added to the DFL 502 for those data files. If the SCN 104 receives data files from a source listed in the SCN DM policy 504, then the SCN DM 401 adds entries for those data files to the DFL 502. A Data File List Manager (DFLM) 505 maintains the DFL 502 based on information received from the data storage interface 501 and contained in the SCN DM policy 504. Data files are stored in the SCN 104 for a pre-determined duration of time. If there is any data file that has been cached in the memory 403 for a period of time longer than the pre-determined duration of time, then the SCN DM 401 removes the data file from the memory 403 through the data storage interface 501.
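The source-filtering and retention behaviour of the DFL 502 described above might be illustrated by the following sketch; the class, the retention value and the method names are hypothetical assumptions, not the actual structures of the embodiments.

```python
import time

# Illustrative sketch of DFL maintenance: only files received from sources
# listed in the SCN DM policy get a DFL entry, and entries older than a
# pre-determined retention period are expired together with the cached file.
# The retention value and all names are assumptions.

RETENTION_SECONDS = 7 * 24 * 3600     # hypothetical pre-determined duration

class DataFileList:
    def __init__(self, allowed_sources):
        self.allowed_sources = set(allowed_sources)   # stands in for the policy
        self.entries = {}                              # file name -> time cached

    def on_file_cached(self, name, source):
        # The file itself is always stored in memory; the DFL entry is added
        # only when the source appears in the policy.
        if source in self.allowed_sources:
            self.entries[name] = time.monotonic()

    def expired_files(self):
        now = time.monotonic()
        return [n for n, t in self.entries.items()
                if now - t > RETENTION_SECONDS]

    def remove(self, name):
        self.entries.pop(name, None)
```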
FIGS. 7a and 7b are a flowchart depicting a method of operation of a client cache node. A CCN 102 is a network cache node included in the network path between the CES 101 and the SNS. The CCN 102 remains in operational state independently of the availability state of the CES 101 and of the SNS. The CCN 102 establishes connectivity with the SCN 104 when the SNS is available and independently of the availability state of the CES 101. The CCN 102 maintains connectivity with the CES 101 as long as the CES 101 remains available and independently of the availability state of the SNS. The DFLM 306 maintains the DFL 302 based on information received from the data storage interface 301 and from the SCN DM interface 303. The DFLM 306 verifies (701) with the data storage interface 301 if the memory 203 has any data files that have already been downloaded by the CES 101. If there are files in the memory 203 that have been downloaded by the CES 101, then the DFLM 306 instructs the data storage interface 301 to delete (702) the files from the memory 203. Then the DFLM 306 marks (703) the entries in the DFL 302 as “delivered” for the deleted files. If there are no files in the memory 203 that have been downloaded by the CES 101, then the DFLM 306 verifies with the SCN DM interface 303 if a new version of the DFL 502 has been received (704) from the SCN 104. If a new version of the data file list has not been received, then the DFLM 306 verifies (707) with the SCN DM interface 303 if a new DFL notification has been received. If a new DFL notification has not been received by the SCN DM interface 303, the DFLM 306 checks (708) to determine if the request timer 305 has expired. The request timer 305 keeps track of the time elapsed since the generation of the last request to the SCN 104 for the DFL 502. If a new DFL notification has been received, or if the request timer 305 has expired, the DFLM 306 sets (709) the request timer 305 and then instructs the SCN DM interface 303 to send (710) a request to the SCN 104 for the latest version of the DFL 502.
If a new version of the DFL 502 has been received, the DFLM 306 checks to determine (705) if there are any entries in the DFL 302 that are not included in the new version of the DFL 502 received from the SCN 104. If there are any entries in the DFL 302 that are not included in the new version of the DFL 502 received from the SCN 104, then the DFLM 306 removes (706) those entries from the DFL 302. The DFLM 306 then checks (711) to determine if there are any entries in the latest version of the DFL 502 received from the SCN 104 that are not included in the local version of the DFL 302. If there are any entries in the latest version of the DFL 502 received from the SCN 104 that are not included in the local version of the DFL 302, then the DFLM 306 adds (712) the new entries to the DFL 302. The DFLM 306 instructs the SCN DM interface 303 to generate (713) a request to the SCN 104 for the data files corresponding to the new entries added to the DFL 302. Then the DFLM 306 verifies (714) with the data storage interface 301 if there is sufficient memory 203 to accommodate the new data files. If the memory space is not sufficient to accommodate the new data files, then the DFLM 306 communicates to the data storage interface 301 the need for deleting some data files from the memory 203. The data storage interface 301 instructs the CCN DM 201 to identify and remove data files from the memory 203. If there is sufficient space to accommodate the new data files, then the data files received from the SCN 104 are stored in the memory 203. The various actions in method 700 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIGS. 7a and 7b may be omitted.
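By way of illustration, steps 705 to 714 might be condensed as in the following sketch; the function signature and the request_files and free_space helpers are hypothetical assumptions, not elements of the embodiments.

```python
# Rough sketch of the DFL synchronization in steps 705-714: entries that no
# longer appear in the DFL received from the SCN are pruned, new entries are
# added and the corresponding files are requested, and space is freed before
# the new files are stored. All names are hypothetical assumptions.

def sync_local_dfl(local_dfl, server_dfl, free_bytes, new_files_bytes,
                   request_files, free_space):
    """local_dfl, server_dfl: sets of file names; returns the updated local DFL.
    request_files(names) asks the SCN for the missing files (step 713);
    free_space(needed) deletes old files when memory is insufficient (714)."""
    pruned = local_dfl & server_dfl          # drop entries the SCN removed (705, 706)
    missing = server_dfl - pruned            # entries not yet cached locally (711)
    if missing:
        request_files(sorted(missing))       # add entries and request files (712, 713)
    if new_files_bytes > free_bytes:
        free_space(new_files_bytes - free_bytes)   # make room for new files (714)
    return pruned | missing

if __name__ == "__main__":
    updated = sync_local_dfl(
        local_dfl={"a.bin", "old.bin"},
        server_dfl={"a.bin", "b.bin"},
        free_bytes=10,
        new_files_bytes=4,
        request_files=lambda names: print("request", names),
        free_space=lambda n: print("free", n, "bytes"),
    )
    print(sorted(updated))   # -> ['a.bin', 'b.bin']
```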
FIGS. 8a, 8b and 8c are flowcharts depicting a method of operation of a server cache node. There is no overlap between the SNS and the network path between the SES 105 and the SCN 104. The SCN 104 is operational with high availability and has persistent network connectivity with the SES 105. The SCN 104 is included in the network paths between the SES 105 and the CES 101 instances connected to the SES 105. The SCN 104 obtains data files from the SES 105, stores the obtained data files and sends the data files to the CES 101 instances through the CCN 102 instances and the respective SNS instances. If the CES 101 wants to send some data files to the SES 105, the SCN 104 obtains the data files, stores the obtained data files and transmits the data files to the SES 105. If the used storage space in the memory 403 is above (801) a pre-determined storage threshold, then the SCN DM 401 arranges for the removal (802) of some data files from the memory 403 in order to bring the used storage space in the memory 403 below the pre-determined storage threshold. A data storage interface 501 interfaces with the memory 403 and obtains information about the current memory usage. If any data files have to be deleted from the memory 403, the data storage interface 501 removes the data files from the memory 403. After deleting the data files and bringing the used storage space below the pre-determined storage threshold, the SCN DM 401 updates the DFL 502 and removes (803) the entries corresponding to the deleted files from the DFL 502. The DFLM 505 verifies with the data storage interface 501 to determine (804) if new data files have been stored in the memory 403. If new data files have been received by the SCN 104, then the DFLM 505 verifies (805) if the data files were received from sources listed in the SCN DM policy 504. If the data files were received from a source listed in the SCN DM policy 504, the DFLM 505 updates the DFL 502 by adding (806) entries to the DFL 502 corresponding to each new data file received. The DFLM 505 starts (807) a timer for each new entry added to the DFL 502. The DFLM 505 then checks to determine (808) if the timer has expired for any of the entries in the DFL 502. If the timer has expired for any of the entries in the DFL 502, then the DFLM 505 removes (809) the entries from the DFL 502 and instructs the data storage interface 501 to delete the corresponding data files from the memory 403.
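Steps 801 to 809 might be illustrated by the following sketch; the threshold, the timer value, the oldest-first eviction order and all names are hypothetical assumptions (the embodiments do not specify which files the SCN DM 401 selects for removal).

```python
import time

# Rough sketch of the storage management in steps 801-809: if the memory used
# by cached files exceeds a pre-determined threshold, files are removed (here,
# oldest first, as an assumption) until usage drops below it, and any entry
# whose timer has expired is removed together with its file.

STORAGE_THRESHOLD_BYTES = 512 * 1024 * 1024   # hypothetical threshold
ENTRY_TIMER_SECONDS = 24 * 3600               # hypothetical per-entry timer

def enforce_storage_threshold(files):
    """files: dict name -> (size_bytes, time_cached). Remove files, oldest
    first, until total size is below the threshold; return removed names."""
    removed = []
    total = sum(size for size, _ in files.values())
    for name in sorted(files, key=lambda n: files[n][1]):   # oldest first
        if total <= STORAGE_THRESHOLD_BYTES:
            break
        total -= files[name][0]
        removed.append(name)
        del files[name]
    return removed

def expire_entries(files):
    """Remove files whose per-entry timer has expired; return their names."""
    now = time.monotonic()
    expired = [n for n, (_, t) in files.items()
               if now - t > ENTRY_TIMER_SECONDS]
    for name in expired:
        del files[name]
    return expired
```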
The DFLM 505 checks to determine (810) if any new entry has been added to the DFL 502. If a new entry has been added to the DFL 502, then the DFLM 505 requests the CCN DM interface 503 to send (811) a notification to the CCN 102, indicating the addition of a new entry to the DFL 502. On receiving the notification, the CCN 102 may request the latest version of the DFL 502 from the SCN 104. The DFLM 505 verifies (812) with the CCN DM interface 503 if a request has been received from the CCN 102. If a request was received from the CCN 102, the DFLM 505 instructs the CCN DM interface 503 to send (813) the latest version of the DFL 502 to the requesting CCN 102. On receiving the latest version of the DFL 502 from the SCN 104, the DFLM 306 in the CCN 102 checks to determine if there are any entries in the latest version of the DFL 502 received from the SCN 104 that are not included in the local version of the DFL 302. If there are any entries in the latest version of the DFL 502 received from the SCN 104 that are not included in the local version of the DFL 302, then the CCN 102 sends a request to the SCN 104, requesting the data files corresponding to the new entries added to the DFL 302. On receiving (814) the request from the CCN 102, the SCN 104 sends (815) the requested data files to the CCN 102. The various actions in method 800 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIGS. 8a, 8b and 8c may be omitted.
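As a non-limiting illustration, the exchange in steps 810 to 815 might be sketched as follows; the message format and all names are hypothetical assumptions rather than the actual protocol of the embodiments.

```python
# Hypothetical sketch of steps 810-815: the SCN notifies the CCNs that the DFL
# changed, answers a DFL request with its latest list, and answers a file
# request with the cached contents. Message formats are assumptions.

class ScnFrontEnd:
    def __init__(self):
        self.dfl = {}                 # file name -> cached bytes
        self.ccn_notifiers = []       # callables used to reach each CCN

    def add_entry(self, name, data):
        self.dfl[name] = data
        for notify in self.ccn_notifiers:     # step 811: new-DFL notification
            notify("dfl-updated")

    def handle_request(self, request):
        kind = request.get("type")
        if kind == "get-dfl":                 # steps 812-813: serve the list
            return {"type": "dfl", "names": sorted(self.dfl)}
        if kind == "get-files":               # steps 814-815: serve the files
            names = request.get("names", [])
            return {"type": "files",
                    "files": {n: self.dfl[n] for n in names if n in self.dfl}}
        return {"type": "error", "reason": "unknown request"}
```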
The embodiments described allow the transfer of data files between the source and destination systems without requiring that both the NPI and CESA conditions be satisfied simultaneously. Security updates can be completed quickly, dramatically reducing the reaction time of an organization to new security threats, especially when many employees in the organization are equipped with mobile laptops.
The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in
The embodiments disclosed herein specify a method and system for expediting the transfer of data files between an SES and a plurality of CES instances. The mechanism allows data files to be transferred between network cache nodes, and provides a system thereof. Therefore, it is understood that the scope of protection extends to such a program and, in addition, to a computer-readable means having a message therein, such computer-readable storage means containing program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. In a preferred embodiment, the method is implemented through, or together with, code written in, e.g., a Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL modules or several software modules executed on at least one hardware device. The hardware device can be any kind of device that can be programmed, including, e.g., any kind of computer, such as a server or a personal computer, or the like, or any combination thereof, e.g., one processor and two FPGAs. The device may also include means that could be, e.g., hardware means such as an ASIC, or a combination of hardware and software means, or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.
Number | Date | Country | Kind
2867CHE2009 | Nov 2009 | IN | national

Filing Document | Filing Date | Country | Kind | 371c Date
PCT/EP2010/067856 | 11/19/2010 | WO | 00 | 10/25/2012