This application claims the priority benefit of Taiwan application serial no. 102100597, filed on Jan. 8, 2013. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
1. Field of the Invention
The present invention generally relates to a method and a system for managing files, in particular, to a method and a system for managing cache files according to a utility rate of a file.
2. Description of Related Art
For convenience in file management and to save storage space at a local end, one preferred approach is to store files together in a single device that is accessible to a plurality of devices. Hence, many services provide free or paid network storage space so that users are able to upload files such as music, pictures, and videos to a network server. When such files are needed, they may be downloaded from the network server and displayed or played at the local end.
As network bandwidth increases, the download speed of multimedia files is able to keep up with the playback speed. Thus, while a local end apparatus is downloading a file, the downloaded portion may be played simultaneously, so that an online video stream may be played in real time. In such a playback approach, a browser connects to the uniform resource locator (URL) of a video stream, and an application program interface (API) at a lower layer, such as DirectX, is called via the browser to assist in decoding the downloaded video stream data and recovering it into a video file to be played at the local end.
Nonetheless, the network download speed is sometimes affected by factors such as the number of users on the network, the performance of the local end apparatus, or the performance of the network server. With the online playback approach, network congestion may therefore cause a playback delay and degrade the user's viewing quality. Therefore, a better file download mechanism needs to be provided, such that multimedia files the user is likely to play are prefetched so as to improve the playback efficiency of the multimedia files.
Accordingly, the present invention is directed to a method and a system for managing cache files to save storage space at a local end and reduce time for downloading files from a service end.
The present invention is directed to a method for managing cache files, adapted to a local end apparatus that manages a file cached from a service end apparatus. In the method, a file is divided into a plurality of segments, and a part of the segments is downloaded from the service end apparatus and stored. Then, the number of segments of the file to be downloaded is increased or decreased according to a utility rate of the file.
According to an embodiment of the present invention, the step of downloading and storing the part of the segments further includes downloading an original header file of the file from the service end apparatus and generating a new header file according to the downloaded original header file and the part of the segments. The new header file records a size of the file as well as position and length information of the downloaded part of the segments with respect to the file.
According to an embodiment of the present invention, the step of downloading and storing the part of the segments further includes analyzing a type of the file so as to obtain necessary information required by the type of the file, and downloading and storing the necessary information.
According to an embodiment of the present invention, after the step of downloading and storing the part of the segments, the method further includes receiving a usage request for the file and accordingly accessing the stored part of the segments and downloading the other segments of the file so as to provide the file requested by the usage request.
According to an embodiment of the present invention, the step of increasing or decreasing the plurality of segments of the downloaded file according to the utility rate of the file includes calculating a cache weight for downloading the file according to the number of times of the file being accessed and a time length from a last accessed time point of the file until present, and accordingly increasing or decreasing the segments of the file to be downloaded.
According to an embodiment of the present invention, the step of calculating the cache weight for downloading the file according to the number of times of the file being accessed and the time length from the last accessed time point of the file until present includes accumulating the number of times of the file being accessed within a predetermined time and accordingly increasing the cache weight for downloading the file.
According to an embodiment of the present invention, the step of increasing or decreasing the plurality of segments of the file to be downloaded according to the utility rate of the file includes accumulating the number of times of the file being accessed within a predetermined time, accumulating the time length from the last accessed time point of the file until present and accordingly obtaining a weight adjustment value, and calculating the product of the number of times and the weight adjustment value and using the product as the cache weight for downloading the file.
According to an embodiment of the present invention, the step of downloading and storing the part of the segments further includes determining whether cache space for storing the downloaded part of the segments is sufficient. If the cache space is not sufficient, a cache list of a plurality of files stored in the cache space is obtained. Then, the files are ordered according to the cache weight of each of the files, and the files in the later portion of the list are deleted to make room for storing the downloaded part of the segments.
Furthermore, the present invention is directed to a system for managing cache files. The management system includes a download module and a cache module. The download module connects to a service end apparatus via a network so as to download and store a part of the segments of a file. The cache module increases or decreases the number of segments of the file to be downloaded according to a utility rate of the file.
According to an embodiment of the present invention, the download module further downloads an original header file of the file from the service end apparatus. The cache module further generates a new header file according to the downloaded original header file and the part of the segments, wherein the new header file records a size of the file as well as position and length information of the downloaded part of the segments with respect to the file.
According to an embodiment of the present invention, the cache module further analyzes a type of the file so as to obtain necessary information required by the type of the file, and downloads and stores the necessary information from the service end apparatus.
According to an embodiment of the present invention, the management system further includes a database module for storing the downloaded part of the segments in a database.
According to an embodiment of the present invention, the management system further includes a management module for receiving a usage request for the file, accordingly accessing the part of the segments stored in the database module, and downloading the other segments of the file from the service end apparatus by using the download module so as to provide the file requested by the usage request.
According to an embodiment of the present invention, the cache module further calculates a cache weight for downloading the file according to the number of times of the file being accessed and a time length from a last accessed time point of the file until present, and accordingly increases or decreases the segments of the file to be downloaded.
According to an embodiment of the present invention, the cache module further accumulates the number of times of the file being accessed within a predetermined time and accordingly increases the cache weight for downloading the file.
According to an embodiment of the present invention, the cache module further accumulates the number of times of the file being accessed within a predetermined time, accumulates the time length from the last accessed time point of the file until present, accordingly obtains a weight adjustment value, and calculates the product of the number of times and the weight adjustment value and uses the product as the cache weight for downloading the file.
According to an embodiment of the present invention, the download module further determines whether cache space for storing the downloaded part of the segments is sufficient. If the cache space is not sufficient, the download module obtains a cache list of a plurality of files stored in the cache space, orders the files according to the cache weight of each of the files, and deletes the files in the later portion of the list to make room for storing the downloaded part of the segments.
To sum up, in the method and the system for managing cache files of the present invention, a file is divided into a plurality of segments, and the number of the file segments to be preserved at a local end is determined by the number of times of the file being accessed and the time of the file not being accessed. Hence, storage space at the local end may be saved effectively, and the time for downloading files from a service end may be reduced.
Several exemplary embodiments accompanied with figures are described in detail below to further describe the disclosure.
To save storage space in a local end apparatus as well as to prevent file download or playback delays due to network congestion, in the present invention a file stored in a service end apparatus is divided into a plurality of segments, and a part of the segments is pre-cached in the local end apparatus so that file data may be provided in real time in response to an access demand from the local end apparatus. Moreover, in the present invention a cache weight of each file is calculated based on its utility rate, and the number of segments cached in the local end apparatus is adjusted accordingly so as to optimize the management of cache files.
First, the download module 12 connects to a service end apparatus via a network and divides a file stored in the service end apparatus into a plurality of segments so as to download and store a part of the segments (Step S202). To be specific, the download module 12 may, for example, connect to the service end apparatus on the network via a network interface card disposed in the local end apparatus so as to download a required file from the service end apparatus. The download module 12 may determine a segment size based on factors such as the size of the cache space in the local end apparatus, the size of the downloaded file, and the download bandwidth, and divide the file into a plurality of segments based on the segment size, so that a part of the segments may be prefetched based on requests.
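By way of a non-limiting illustration, the following Python sketch shows one way such a segmentation policy could be realized: a segment size is derived from the cache size, the file size, and the download bandwidth, and the file is split into byte ranges. The specific heuristic, function names, and numeric bounds are assumptions made for illustration only and are not prescribed by the embodiment.

```python
# Hypothetical sketch: choosing a segment size and splitting a file into
# (offset, length) byte ranges. The heuristic below is illustrative only.

def choose_segment_size(cache_bytes, file_bytes, bandwidth_bps,
                        min_size=256 * 1024, max_size=8 * 1024 * 1024):
    """Pick a segment size bounded by simple heuristics."""
    by_bandwidth = bandwidth_bps // 8          # roughly one second of download
    by_cache = cache_bytes // 64 or min_size   # a segment should not dominate the cache
    by_file = file_bytes // 16 or min_size     # nor dominate the file itself
    return max(min_size, min(max_size, by_bandwidth, by_cache, by_file))


def split_into_segments(file_bytes, segment_size):
    """Return a list of (offset, length) pairs covering the whole file."""
    segments = []
    offset = 0
    while offset < file_bytes:
        length = min(segment_size, file_bytes - offset)
        segments.append((offset, length))
        offset += length
    return segments


if __name__ == "__main__":
    size = choose_segment_size(cache_bytes=512 * 1024 * 1024,
                               file_bytes=100 * 1024 * 1024,
                               bandwidth_bps=20_000_000)
    print(len(split_into_segments(100 * 1024 * 1024, size)), "segments of", size, "bytes")
```

In practice the download module could further align segment boundaries with the transport's preferred request size; the sketch only shows that a segment size can be derived from the three factors mentioned above.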
It is noted that, while downloading the file segments, the download module 12 also downloads an original header file simultaneously so as to obtain related information of the original file. While the downloaded segments are being stored, the download module 12 may generate a new header file according to the original header file and the downloaded part of the segments. The new header file may, for example, record the size of the file as well as position and length information of the downloaded part of the segments with respect to the original file, so as to allow the local end apparatus to access the file.
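By way of a non-limiting illustration, the new header file could take the form of a small metadata record kept alongside the cached segments, as in the following sketch. The field names and the JSON serialization are assumptions; the embodiment only requires that the file size and the position and length of each downloaded segment be recorded.

```python
# Hypothetical layout for the "new header file" kept at the local end.
# The JSON encoding and field names are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class CachedSegment:
    offset: int   # position of the segment within the original file
    length: int   # number of bytes actually downloaded


@dataclass
class CacheHeader:
    file_name: str
    file_size: int                        # size of the original file
    segments: list = field(default_factory=list)

    def add_segment(self, offset, length):
        self.segments.append(CachedSegment(offset, length))

    def covers(self, offset, length):
        """Check whether a requested byte range is fully cached."""
        return any(s.offset <= offset and offset + length <= s.offset + s.length
                   for s in self.segments)

    def to_json(self):
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    header = CacheHeader("movie.mp4", file_size=734_003_200)
    header.add_segment(0, 8_388_608)           # first prefetched segment
    header.add_segment(16_777_216, 8_388_608)  # a later segment
    print(header.covers(0, 1_000_000))         # True: the range is cached
    print(header.to_json())
```

The covers check illustrates how the local end apparatus can decide, from the new header file alone, whether a requested byte range is already cached.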
On the other hand, different types of files call for different methods of segmenting files and parsing header files. Also, for certain files, the file contents cannot be read properly until particular necessary information is read. Therefore, before downloading a file, the download module 12 may, for example, first analyze the type of the file so as to identify the necessary information required by that type of file. While downloading the segment data, the download module 12 also downloads the necessary information to the local end apparatus simultaneously.
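By way of a non-limiting illustration, the mapping from file type to necessary information could be as simple as a lookup table keyed by file extension, as sketched below; the entries are assumptions about common container formats and are not taken from the embodiment.

```python
# Hypothetical mapping from file type to the "necessary information" that
# should be downloaded together with the first segments. The entries are
# illustrative assumptions, not an exhaustive specification.
import os

NECESSARY_INFO = {
    ".mp4": "index/metadata box, which may be located at the end of the file",
    ".avi": "main header and index chunk",
    ".mp3": "tag and header frames at the start of the file",
}


def necessary_info_for(file_name):
    """Return a description of the extra data to prefetch for this file type."""
    _, ext = os.path.splitext(file_name.lower())
    return NECESSARY_INFO.get(ext, "no extra information beyond the header file")


if __name__ == "__main__":
    print(necessary_info_for("movie.mp4"))
```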
Referring back to the flow of the method, the cache module 14 then increases or decreases the number of segments of the file to be downloaded according to the utility rate of the file.
To be specific, in an embodiment, the cache module 14 may, for example, accumulate the number of times of the file being accessed within a predetermined time and accordingly increase the cache weight of the downloaded file. In other words, a higher number of accesses indicates that the file is used more frequently. To save the bandwidth otherwise occupied by repeatedly downloading the file, the cache module 14 may appropriately increase the cache weight of the file so as to store more segment data in the local end apparatus.
In another embodiment, besides accumulating the number of times of the file being accessed, the cache module 14 may accumulate the time length from the last accessed time point of the file until present, obtain a weight adjustment value accordingly, and calculate the product of the number of times of the file being accessed and the weight adjustment value and use the product as the cache weight for downloading the file. With respect to different time lengths, the cache module 14 may set a plurality of weight adjustment values with a nonlinear relationship therebetween, so as to drastically decrease the cache weights of files that have not been accessed within a long period of time, thereby saving the cache space.
For example, if a file has not been accessed within 24 hours, a weight adjustment value thereof may be set to 100. If a file has not been accessed for 2 days, 3 days, or 4 days, the weight adjustment value may be reduced to 10, 9, or 8 respectively so as to reduce the cache weight drastically. Moreover, if a file has not been accessed for over 7 days, the weight adjustment value thereof may be set to 0, so that the cache weight becomes 0 and all data of the file is deleted from the local end apparatus. Therefore, after the weight adjustment value is multiplied by the number of times of the file being accessed, the cache module 14 obtains a cache weight reflecting the utility rate of the file, and may accordingly adjust the proportion of the file downloaded by the download module 12 so as to save the cache space.
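By way of a non-limiting illustration, the weight calculation with the example adjustment values above could be sketched as follows. The handling of ranges the text does not mention (files accessed within the last day and files idle for 5 to 7 days) is an assumption.

```python
# Cache weight sketch using the example adjustment values from the text:
# not accessed for 24 hours -> 100; 2, 3, 4 days -> 10, 9, 8;
# more than 7 days -> 0. Other ranges below are assumptions.

def weight_adjustment(days_since_last_access):
    if days_since_last_access < 2:
        return 100        # includes the "not accessed within 24 hours" case
    if days_since_last_access < 3:
        return 10
    if days_since_last_access < 4:
        return 9
    if days_since_last_access < 5:
        return 8
    if days_since_last_access <= 7:
        return 7          # assumed value for 5-7 days (not given in the text)
    return 0              # over 7 days: weight becomes 0, file is deleted


def cache_weight(access_count, days_since_last_access):
    """Cache weight = number of accesses x weight adjustment value."""
    return access_count * weight_adjustment(days_since_last_access)


if __name__ == "__main__":
    print(cache_weight(access_count=12, days_since_last_access=1))  # 1200
    print(cache_weight(access_count=12, days_since_last_access=3))  # 108
    print(cache_weight(access_count=12, days_since_last_access=8))  # 0
```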
Through the mechanism of prefetching a part of segment data, the local end apparatus may provide file data based on user demands without occupying excessive cache space as well as reduce time for downloading files from the service end apparatus.
The difference from the aforementioned embodiment is that, in the present embodiment, the management system 50 further includes a database module 56, which may be used for connecting to a database 60 at the local end or in the local end apparatus itself and storing the segments downloaded by the download module 52 in the database 60. On the other hand, the management system 50 further includes a management module 58, which may receive a usage request for the file from the local end apparatus, accordingly access the part of the segments stored in the database module 56, download the other segments from the service end apparatus 70 by using the download module 52, and provide the file requested by the usage request.
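By way of a non-limiting illustration, the following sketch shows how such a management module could serve a usage request by reading cached segments locally and downloading the missing ranges from the service end. The function names and the assumption that cached segments do not overlap are illustrative, and the sketch reuses the header layout assumed earlier.

```python
# Hypothetical sketch: serving a usage request by combining locally cached
# segments with segments downloaded on demand. Function names are assumptions,
# and cached segments are assumed to be non-overlapping.

def serve_request(header, read_cached_segment, download_segment):
    """Yield the file's bytes in order.

    header               -- metadata listing (offset, length) of cached segments
    read_cached_segment  -- callable(offset, length) -> bytes from the local cache
    download_segment     -- callable(offset, length) -> bytes from the service end
    """
    cached = sorted((s.offset, s.length) for s in header.segments)
    offset = 0
    while offset < header.file_size:
        hit = next(((o, l) for (o, l) in cached if o == offset), None)
        if hit is not None:
            _, length = hit
            yield read_cached_segment(offset, length)
        else:
            # Download up to the next cached segment, or to the end of the file.
            next_cached = min((o for (o, _) in cached if o > offset),
                              default=header.file_size)
            length = next_cached - offset
            yield download_segment(offset, length)
        offset += length
```

A caller would concatenate or stream the yielded chunks, so that the cached part and the freshly downloaded part together form the file requested by the usage request.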
It is noted that, since the cache space is limited, storing a new file in the local end apparatus may result in insufficient cache space. In that case, a part of the files in the local end apparatus must be deleted to make room for the new file. Therefore, in the present invention, the cache weight of each of the files calculated in the aforementioned embodiment may be used to order the files stored in the cache space, so as to find and delete less frequently accessed files and store the new file data. Another embodiment will be described in detail hereinafter.
First, before downloading a part of segments of the file, the download module 12 may first, for example, check cache space for storing the downloaded segments in the local end apparatus (Step S602) and determine if the cache space is sufficient for storing the downloaded segment data (Step S604). If the download module 12 determines that the cache space is sufficient, the download module 12 may store the downloaded segment data directly in the cache space (Step S612).
On the other hand, if the download module 12 determines that the cache space is insufficient, the download module 12 may obtain a cache list of a plurality of files stored in the cache space (Step S606). Then, the download module 12 may, for example, order the files according to a cache weight of each of the files (Step S608) and delete the files in the later portion of the list (Step S610) so as to store the downloaded segment data in the vacated cache space. The cache weight of each of the files is, for example, calculated based on factors such as the number of times of each of the files being accessed and the time length from the last accessed time point of the file until present. The detailed calculation method has been disclosed in the aforementioned embodiment and will not be repeated herein.
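By way of a non-limiting illustration, this eviction flow could be sketched as follows, with the step numbers of the embodiment noted in comments; the function names and the list representation are assumptions.

```python
# Hypothetical eviction sketch: when the cache is too small for a new
# download, files are ordered by cache weight and the lowest-weighted
# entries are deleted until enough space is free.

def make_room(cache_list, needed_bytes, free_bytes, delete_file):
    """cache_list: list of (file_name, cache_weight, size_bytes) tuples.

    Returns the names of the deleted files. delete_file is a callable that
    actually removes a file's cached segments.
    """
    if free_bytes >= needed_bytes:
        return []                      # Step S612: store the data directly
    # Step S608: order by cache weight, highest first.
    ordered = sorted(cache_list, key=lambda entry: entry[1], reverse=True)
    deleted = []
    # Step S610: delete from the later (lowest-weight) portion of the list.
    while free_bytes < needed_bytes and ordered:
        name, _weight, size = ordered.pop()
        delete_file(name)
        free_bytes += size
        deleted.append(name)
    return deleted


if __name__ == "__main__":
    cache = [("a.mp4", 1200, 40), ("b.mp4", 108, 25), ("c.mp4", 0, 35)]
    removed = make_room(cache, needed_bytes=50, free_bytes=10,
                        delete_file=lambda name: None)
    print(removed)   # ['c.mp4', 'b.mp4'] -- lowest weights evicted first
```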
Through the aforementioned method, the local end apparatus may flexibly adjust the arrangement of the files in the cache space so as to optimize the management of the cache files.
Moreover, in terms of the method for increasing or decreasing the segments to be downloaded, a plurality of stages are further defined in the present invention, wherein each of the stages corresponds to a different number of segments, so that the local end apparatus may determine which stage to use based on requirements.
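By way of a non-limiting illustration, such stages could be realized as a table mapping each stage to a number of cached segments, with the cache weight selecting the stage, as sketched below; every numeric value here is an assumption, since the embodiment leaves the concrete stages open.

```python
# Purely illustrative stage table: each stage caches a different number of
# segments of a file. The stage values and thresholds are assumptions; the
# embodiment only states that a plurality of stages correspond to different
# numbers of segments.
STAGES = {
    0: 0,    # no segments cached (file effectively evicted)
    1: 1,    # only the first segment (plus header and necessary information)
    2: 4,    # a small prefix of the file
    3: 16,   # a larger portion for frequently used files
}


def stage_for_weight(cache_weight, thresholds=(0, 100, 500)):
    """Map a cache weight to a stage; the thresholds are assumed values."""
    stage = 0
    for limit in thresholds:
        if cache_weight > limit:
            stage += 1
    return stage


if __name__ == "__main__":
    for w in (0, 50, 300, 1200):
        s = stage_for_weight(w)
        print(f"weight {w} -> stage {s} -> cache {STAGES[s]} segments")
```

The download module could then download or discard segments until the number of cached segments of the file matches its stage.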
To sum up, in the method and the system for managing cache files of the present invention, a file is divided into a plurality of segments, and a cache weight is calculated based on the number of times of the file being accessed within a period of time and the length of time the file has not been accessed. Accordingly, when a local end apparatus downloads the file data, it may properly adjust the proportion of the file data to be downloaded. Hence, cache space at the local end may be saved effectively, and the time for downloading files from a service end may be reduced.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.