This application claims the priority benefit of Taiwan application serial no. 112142745, filed on Nov. 7, 2023. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The present disclosure relates to a data reading technology, and in particular, to a data system and a data reading method.
Generally, data reading refers to reading a file from a storage device (e.g., a hard disk) by an operating system (OS). However, conventional data reading methods take a relatively long time for data transfer. In addition, when a plurality of data reading requests correspond to data of a same file, a conventional system repeatedly reads the data of this file from the storage device, thereby increasing the time spent on data reading.
The present disclosure provides a data system and a data reading method allowing highly efficient data reading operations.
The data system according to the present disclosure comprises a processor, a memory, and a storage device. The processor comprises a file system in user space. The memory comprises a memory buffer and is electrically coupled to the processor. The storage device is electrically coupled to the processor and the memory. The file system in user space receives a reading request sent by an application end, and reads, based on the reading request, prefetch data of a target file which is pre-stored in at least one buffer block of the memory buffer. In response to the file system in user space determining that a total reading amount for a last buffer block with stored data, among the at least one buffer block currently being read, exceeds a preset total amount, the file system in user space prefetches a next batch of prefetch data of the target file from the storage device and stores it in another buffer block.
The data reading method according to the present disclosure comprises: receiving, by a file system in user space, a reading request sent by an application end; reading, based on the reading request and by the file system in user space, prefetch data of a target file pre-stored in at least one buffer block of a memory buffer; and in response to the file system in user space determining that a total reading amount for a last buffer block with stored data, among the at least one buffer block currently being read, exceeds a preset total amount, prefetching, by the file system in user space, a next batch of prefetch data of the target file from a storage device, and storing it in another buffer block.
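The behavior summarized above may be sketched as follows. This is a minimal, hypothetical illustration only (the class and attribute names, and the 256 KB threshold, are assumptions for illustration, not the claimed implementation): a user-space file system serves a reading request from prefetched buffer blocks and, once the total amount read from the last filled block exceeds a preset total amount, prefetches the next batch of the target file into another buffer block.

```python
# Illustrative sketch (hypothetical names and values; not the claimed design).

class BufferBlock:
    def __init__(self, offset, data):
        self.offset = offset      # file offset covered by this block
        self.data = data          # prefetched bytes
        self.bytes_read = 0       # total reading amount for this block

class UserSpaceFS:
    PRESET_TOTAL = 256 * 1024    # assumed preset total amount (bytes)

    def __init__(self, storage):
        self.storage = storage    # backing file content (bytes)
        self.blocks = []          # buffer blocks in the memory buffer

    def prefetch(self, offset, size):
        data = self.storage[offset:offset + size]
        self.blocks.append(BufferBlock(offset, data))

    def read(self, offset, size):
        # serve the request from the prefetched buffer blocks
        out = bytearray()
        for blk in self.blocks:
            lo = max(offset, blk.offset)
            hi = min(offset + size, blk.offset + len(blk.data))
            if lo < hi:
                out += blk.data[lo - blk.offset:hi - blk.offset]
                blk.bytes_read += hi - lo
        # if the last filled block was read past the threshold,
        # prefetch the next batch into another buffer block
        last = self.blocks[-1]
        if last.bytes_read > self.PRESET_TOTAL:
            self.prefetch(last.offset + len(last.data), len(last.data))
        return bytes(out)
```

For example, after prefetching the first 512 KB of a file, a 300 KB read exceeds the assumed 256 KB threshold and triggers prefetching of the next 512 KB batch.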
On this basis, by the data system and the data reading method of the present disclosure, a file system in user space can establish buffer blocks in a memory buffer based on a reading request to prefetch data of a target file, thereby effectively improving the data reading speed.
To make the above features and advantages of the present disclosure more apparent and comprehensible, exemplary embodiments are described in detail below with reference to the drawings.
Numerical references in the drawings are briefly explained below:
100: data system; 110: processor; 111: file system in user space; 120: memory; 121: memory buffer; 130: storage device; 200: application end; 301, 401, 601, 701, 801, 901: range for reading; 310 to 330, 410 to 440, 510 to 580, 610 to 640, 710 to 730, 810 to 830, 910 to 940: buffer blocks; S210 to S230: steps.
In order to make the contents of the present disclosure easier to understand, embodiments are provided below as practical examples of implementing the present disclosure. In addition, wherever possible, the elements/components/steps denoted by the same numerical references in the drawings and the embodiments represent the same or similar components.
In this embodiment, the processor 110 may receive the reading request from the application end 200, and the file system in user space 111 may configure the memory 120 based on the reading request to build, in the memory buffer 121, at least one buffer block corresponding to a target file of the reading request. Correspondingly, data of the target file (a portion corresponding to the current reading request) stored in the storage device 130 can be read and stored in the at least one buffer block, allowing the file system in user space 111 to read data corresponding to the reading request from the at least one buffer block.
For instance, if the data amount of the reading request is less than 512 KB, the file system in user space 111 may build a first buffer block (with a buffer size of, for example, 512 KB) in the memory buffer 121 based on a read offset, and read the corresponding data from the storage device 130 and store it in the first buffer block.
For another instance, if the data amount of the reading request is less than 1.5 MB, the file system in user space 111 may build a first buffer block (with a buffer size of, for example, 512 KB) and a second buffer block (with a buffer size of, for example, 1 MB) in the memory buffer 121 based on a read offset, and read sequential data from the storage device 130 and store it in the first buffer block and the second buffer block. After the reading of data is finished, the file system in user space 111 may return the data in the buffer blocks to the application end 200 together with the corresponding space locations in the memory buffer 121.
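The two examples above can be summarized in a short sketch. The function name is hypothetical, and the thresholds (512 KB and 1.5 MB) are taken from the examples in this embodiment; larger requests are assumed, for illustration, to continue the same size progression.

```python
# Illustrative sketch: choose initial buffer-block sizes from the
# data amount of the reading request (thresholds from the examples above).

KB, MB = 1024, 1024 * 1024

def initial_blocks(request_amount):
    """Return the buffer sizes to build for a reading request."""
    if request_amount < 512 * KB:
        return [512 * KB]                # first buffer block only
    if request_amount < 1536 * KB:       # i.e., less than 1.5 MB
        return [512 * KB, 1 * MB]        # first and second buffer blocks
    # assumed continuation of the progression for larger requests
    return [512 * KB, 1 * MB, int(1.5 * MB)]
```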
In this embodiment, the buffer sizes of at least an initially built portion of the multiple buffer blocks built by the file system in user space 111 for the target file may increase sequentially. In an embodiment, the buffer sizes may increase sequentially, for example, in an arithmetic progression (e.g., 512 KB, 1 MB, 1.5 MB, 2 MB, 2.5 MB, 3 MB, 3.5 MB, etc.). In this embodiment, the number of buffer blocks built by the file system in user space 111 for the target file may have an upper limit, e.g., 7. Furthermore, in an embodiment, the total buffer size of the buffer blocks built by the file system in user space 111 for the target file may also have an upper limit, e.g., 15 MB. The maximum buffer size of a single buffer block may be, for example, 3.5 MB.
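The size schedule described above can be computed as a small sketch. The helper name is illustrative; the numeric values (512 KB step, 3.5 MB cap, at most 7 blocks, 15 MB total) are those given in this embodiment.

```python
# Illustrative sketch of the buffer-size schedule: sizes grow in an
# arithmetic progression from 512 KB, capped at 3.5 MB per block,
# with at most 7 blocks and a 15 MB total buffer budget.

KB, MB = 1024, 1024 * 1024

def next_buffer_size(sizes, step=512 * KB, cap=int(3.5 * MB),
                     max_blocks=7, total_limit=15 * MB):
    """Return the size of the next buffer block to build, or None if
    the block-count or total-size upper limit would be exceeded."""
    if len(sizes) >= max_blocks:
        return None
    size = min((len(sizes) + 1) * step, cap)
    if sum(sizes) + size > total_limit:
        return None
    return size

sizes = []
while (s := next_buffer_size(sizes)) is not None:
    sizes.append(s)
# sizes: 512 KB, 1 MB, 1.5 MB, 2 MB, 2.5 MB, 3 MB, 3.5 MB (14 MB in total)
```

Note that the full progression of 7 blocks totals 14 MB, which stays within the 15 MB upper limit.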
Furthermore, in an embodiment, in response to the file system in user space 111 receiving the first reading request sent by the application end 200, the file system in user space 111 may pre-store file data of an initial part of the target file in the first buffer block of the memory buffer 121 based on the first reading request, and pre-store (prefetch) file data of a final part of the target file in an additional buffer block of the memory buffer 121. As such, when reading certain target files (e.g., video files), even if the file system in user space 111 unexpectedly reads file data of the final part of the target file during the data reading, the method can still maintain the data reading efficiency (without the need to rebuild the buffer blocks). In an embodiment, in a case where the file data of the final part of the target file needs to be read repeatedly during the data reading (for example, metadata at the beginning and the end of a video file may need to be read repeatedly during the execution of video playing or editing software), the additional buffer block of the memory buffer 121 will not be recycled by the file system in user space 111 due to the number of buffer blocks reaching the upper limit (described subsequently).
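The head-and-tail prefetch on the first reading request may be sketched as follows. The function name and the 512 KB head/tail sizes are assumptions for illustration; the disclosure does not fix these values.

```python
# Illustrative sketch: on the first reading request, prefetch both the
# initial part of the target file and its final part, so a jump to
# trailing metadata (common in video files) is served from the buffer.

KB = 1024
HEAD = 512 * KB    # assumed size of the initial-part prefetch
TAIL = 512 * KB    # assumed size of the final-part (additional) block

def first_request_blocks(file_size):
    """Return (offset, size) pairs for the blocks built on the first
    request: one for the initial part, one additional block for the
    final part of the target file."""
    head = (0, min(HEAD, file_size))
    tail_offset = max(file_size - TAIL, 0)
    tail = (tail_offset, file_size - tail_offset)
    return [head, tail]
```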
In step S230, in response to the file system in user space 111 determining that a total reading amount for the last buffer block with stored data, among the at least one buffer block currently being read, exceeds a preset total amount, the file system in user space 111 may prefetch a next batch of prefetch data of the target file from the storage device 130 and store it in another buffer block.
For instance, referring to
For another instance, referring to
In response to the file system in user space 111 having prefetched the next batch of prefetch data of the target file from the storage device 130 while the number of buffer blocks has reached the upper limit, the file system in user space 111 may add a new buffer block 580 in the memory buffer 121 and start to recycle from the first buffer block built (e.g., recycle the buffer blocks 510 to 540). For instance, assuming that the upper limit for the number of buffer blocks built by the file system in user space 111 for the target file is 7, when the file system in user space 111 is about to add the next buffer block 580, since the maximum buffer size is, for example, 3.5 MB, the buffer size of the newly added buffer block 580 and of each subsequently added buffer block may also be kept at 3.5 MB. Meanwhile, the file system in user space 111 starts to recycle, in sequence, from the initially built first buffer block 510. Furthermore, if the upper limit for the total buffer size is 15 MB, the number of buffer blocks will eventually be kept at 4 (each having a buffer size of 3.5 MB).
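The recycling behavior described above can be illustrated with a short sketch (the function name is hypothetical; the limits are those given in this embodiment). Starting from the full arithmetic schedule of 7 blocks, each newly added 3.5 MB block recycles the oldest blocks until both the block-count and total-size limits are respected, converging to 4 blocks of 3.5 MB each.

```python
# Illustrative sketch: recycle buffer blocks in sequence from the
# oldest when the count limit (7) or total-size limit (15 MB) is hit.

MB = 1024 * 1024
MAX_BLOCKS, TOTAL_LIMIT, MAX_SIZE = 7, 15 * MB, int(3.5 * MB)

def add_block(blocks, size):
    """Append a block of `size` bytes, recycling from the first block
    built while the count or total-size limit would be exceeded."""
    blocks.append(size)
    while len(blocks) > MAX_BLOCKS or sum(blocks) > TOTAL_LIMIT:
        blocks.pop(0)   # recycle in sequence from the oldest block

# initial arithmetic schedule: 512 KB .. 3.5 MB (7 blocks, 14 MB)
blocks = [min((i + 1) * 512 * 1024, MAX_SIZE) for i in range(7)]
for _ in range(10):             # keep prefetching at the maximum size
    add_block(blocks, MAX_SIZE)
# steady state: 4 blocks of 3.5 MB each (14 MB, within the 15 MB limit)
```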
However, since the file system in user space 111 may receive another reading request sent by another application end, and may use at least one of the buffer blocks 510 to 570 in the same manner based on that reading request, each of the buffer blocks 510 to 570 may be provided with a counter to calculate a count value. The count value for a buffer block may be determined based on the number of reading operations currently being performed on that buffer block. In other words, if data in the buffer block 510 is currently being read based on one reading request or by one application end, the count value may be 1. If data in the buffer block 510 is currently being read based on three reading requests or by three application ends, the count value may be 3. Accordingly, in response to the count value for a buffer block to be recycled being 0, the file system in user space 111 recycles this buffer block. Otherwise, in response to the count value for the buffer block to be recycled not being 0, the file system in user space 111 suspends the recycling of this buffer block.
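The per-block counter can be sketched as follows (class and method names are illustrative assumptions): the count value tracks how many reading operations currently use the block, and recycling is suspended until the count returns to zero.

```python
# Illustrative sketch of the per-block reference counter described above.

class CountedBlock:
    def __init__(self):
        self.count = 0        # reading operations currently using the block

    def begin_read(self):
        self.count += 1       # a reading request starts using this block

    def end_read(self):
        self.count -= 1       # the reading request has finished

def try_recycle(block):
    """Recycle the block only when its count value is 0; otherwise the
    recycling is suspended."""
    return block.count == 0   # True: recycled; False: suspended
```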
In sum, by the data system and the data reading method of the present disclosure, buffer blocks can be built in the memory buffer by the file system in user space based on a reading request; the buffer blocks can be added and recycled dynamically based on the reading conditions, and multiple reading requests can share the same buffer blocks. Therefore, the data system and the data reading method of the present disclosure can effectively prefetch data of a target file into the memory buffer, so that the file system in user space can rapidly return the corresponding sequential data to the application end based on the reading request.
The above descriptions are only preferred embodiments of the present disclosure, but are not intended to limit the scope of the present disclosure. Anyone skilled in the art can make further improvements and modifications on this basis without departing from the spirit and scope of the present disclosure. Therefore, the scope of protection of the present disclosure shall be interpreted based on the scope defined by the claims of the present application.
Number | Date | Country | Kind |
---|---|---|---|
112142745 | Nov 2023 | TW | national |