The present application relates to the field of storage, in particular to a method and system for processing a file read-write service, a device, and a storage medium.
For a distributed file system (object storage), HDFS protocol access is stateless (a client does not send open and close requests to the storage end as the standard POSIX protocol does), so the distributed file system needs to open a file handle each time it receives a read-write request in order to perform the read-write service, and then close the file handle after the service is completed. As a result, a large number of requests for opening and closing file handles are generated, which imposes a heavy load on the system and increases the latency of each read-write IO.
Embodiments of the present application provide a method for processing a file read-write service. The method includes the following steps:
In some embodiments of the present application, the method further includes:
In some embodiments of the present application, the method further includes:
In some embodiments of the present application, the method further includes:
In some embodiments of the present application, the method further includes:
In some embodiments of the present application, moving the cache handle of the file from the first queue to the second queue further includes:
In some embodiments of the present application, the method further includes:
Based on the same inventive concept, according to another aspect of the present application, an embodiment of the present application further provides a system for processing a file read-write service. The system includes:
Based on the same inventive concept, according to still another aspect of the present application, an embodiment of the present application further provides a computer device. The computer device includes:
Based on the same inventive concept, according to yet still another aspect of the present application, an embodiment of the present application further provides a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium has a computer program stored thereon, and the computer program, when executed by a processor, performs the steps of any one of the above methods for processing the file read-write service.
In order to more clearly describe the technical solutions of the embodiments of the present application or in the prior art, the drawings to be referred to in the description of the embodiments or the prior art are briefly described hereinafter. Apparently, the drawings described hereinafter merely illustrate some embodiments of the present application, and a person of ordinary skill in the art may also derive other embodiments based on the drawings described herein without any creative effort.
In order to make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are further described in detail below with reference to specific embodiments and the accompanying drawings.
It is to be noted that all expressions using “first” and “second” in the embodiments of the present application are intended to distinguish two different entities or parameters with the same name. It may be seen that “first” and “second” are merely for convenience of expression and should not be construed as limiting the embodiments of the present application, and this will not be repeated in subsequent embodiments.
According to one aspect of the present application, an embodiment of the present application provides a method for processing a file read-write service. As shown in
In some embodiments of the present application, in step S1, in response to receiving the read-write service of the file, whether the cache handle of the file is present in the index container is determined based on the file serial number. The index container may be a container from the Standard Template Library (STL), such that, after the cache handle of the file is added into the index container, the corresponding cache handle may be searched for and determined based on the file serial number.
In some embodiments of the present application, when the read-write service of the file is received, firstly, the index container may be searched based on the file serial number (for example, the ino number of the file) to determine whether the cache handle is cached in the index container. In response to that the cache handle is not cached in the index container, it means that the handle of the file is not opened, and then corresponding handles of a distributed file system need to be opened based on the read-write service, that is, a read service opens a read handle, and a write service opens a write handle. Then, the flag and the pointer of the handle of the file and the file serial number are encapsulated to obtain the cache handle of the file, and the cache handle is saved to the index container and the first queue. Next, the read-write service may be implemented by using the opened handle. Finally, after the read-write service is completed, the cache handle is moved from the first queue to the second queue.
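The flow above can be sketched as follows. This is a minimal illustrative sketch, not the application's implementation: the names `CacheHandle`, `HandleCache`, `acquire`, `release`, and the `open_fn` callback are all hypothetical, and `open_fn` stands in for the distributed file system's own handle-open operation.

```python
import time
from dataclasses import dataclass


@dataclass
class CacheHandle:
    ino: int              # file serial number
    flag: str             # handle flag, e.g. "r" or "rw"
    ptr: object           # pointer/reference to the opened handle
    use_count: int = 0    # threads currently using the handle
    use_time: float = 0.0 # last time the handle became idle


class HandleCache:
    def __init__(self):
        self.index = {}         # index container: ino -> CacheHandle
        self.first_queue = []   # handles currently serving a read-write service
        self.second_queue = []  # idle handles, most recently released at the head

    def acquire(self, ino, flag, open_fn):
        """Look up the cache handle by file serial number; if absent, open
        the underlying handle, encapsulate flag/pointer/ino, and record the
        new cache handle in the index container and the first queue."""
        handle = self.index.get(ino)
        if handle is None:
            handle = CacheHandle(ino=ino, flag=flag, ptr=open_fn(ino, flag))
            self.index[ino] = handle
            self.first_queue.append(handle)
        return handle

    def release(self, handle):
        """After the read-write service completes, move the cache handle
        from the first queue to the head of the second queue and refresh
        its usage time."""
        self.first_queue.remove(handle)
        handle.use_time = time.monotonic()
        self.second_queue.insert(0, handle)
```

A subsequent `acquire` for the same serial number finds the cached handle in the index container instead of opening a new one, which is what removes the per-IO open/close traffic.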
Thus, when the cache handle is present in the first queue, it is determined that the cache handle is being used to perform the read-write service; when the cache handle is present in the second queue, it is determined that the cache handle is not in use.
By means of the solutions provided in the embodiments of the present application, the index container is used for quick query mapping from files to handles, the first queue is used for handle protection, and the second queue is used for efficient detection of invalid handles, such that the file handle processing pressure and frequency during reading and writing by the distributed file system are effectively reduced, and the latency of each read-write IO is thereby reduced.
In some embodiments of the present application, the method further includes:
When a cache handle is present in the second queue, it means that the corresponding handle is not in use, such that a usage time may be recorded for each cache handle in the second queue, and a time threshold may be set. The usage time is updated when the cache handle is moved from the first queue to the second queue. In response to that the usage time of a cache handle in the second queue has not been updated for a long time, that is, the elapsed time exceeds the set time threshold, the cache handle may be removed from the second queue; the same cache handle in the index container is then found based on the file serial number and deleted. Finally, the corresponding handle in the distributed file system is closed based on the handle pointer.
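The time-based eviction described above can be sketched as a single sweep over the idle queue. This is an illustrative sketch: the function name `evict_stale`, the dict-based handle representation, and the `close_fn` callback (standing in for the distributed file system's close operation) are assumptions, not the application's actual code.

```python
def evict_stale(index, second_queue, time_threshold, close_fn, now):
    """index: dict mapping ino -> handle; second_queue: list of idle handle
    dicts carrying 'ino', 'ptr', and 'use_time'. Any handle whose usage
    time has not been refreshed within `time_threshold` is removed from the
    queue, deleted from the index by serial number, and its underlying
    handle is closed via its pointer."""
    survivors = []
    for handle in second_queue:
        if now - handle["use_time"] > time_threshold:
            del index[handle["ino"]]   # delete the same entry found by serial number
            close_fn(handle["ptr"])    # close the handle via its pointer
        else:
            survivors.append(handle)
    second_queue[:] = survivors
```

Only handles in the second queue are examined, so handles actively serving IO (in the first queue) are never closed by the sweep.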
In some embodiments of the present application, the method further includes:
The quantity of cache handles in the second queue may also be limited: when the quantity of cache handles in the second queue reaches a preset quantity, a plurality of cache handles may be deleted starting from the tail of the second queue. Similarly, a cache handle is first removed from the second queue, then the same cache handle in the index container is found based on the file serial number and deleted, and finally the corresponding handle in the distributed file system is closed based on the handle pointer.
It is to be noted that when a cache handle is moved from the first queue to the second queue, the cache handle may be placed at the head of the second queue, such that the tail of the second queue holds the cache handle that has been idle for the longest time. Therefore, when the quantity of cache handles in the second queue exceeds the threshold, cache handles may be removed starting from the tail of the second queue.
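The quantity-based trimming can be sketched with a double-ended queue whose head holds the most recently released handle. The names `trim_second_queue` and `close_fn`, and the dict-based handle shape, are illustrative assumptions.

```python
from collections import deque


def trim_second_queue(index, second_queue, preset_quantity, close_fn):
    """second_queue: deque with the most recently released handle at the
    head, so the tail holds the handle that has been idle the longest.
    While the queue exceeds the preset quantity, pop from the tail, delete
    the matching index entry by serial number, and close the underlying
    handle via its pointer."""
    while len(second_queue) > preset_quantity:
        handle = second_queue.pop()  # tail: longest-idle handle
        del index[handle["ino"]]
        close_fn(handle["ptr"])
```

Because releases always insert at the head and trimming always pops from the tail, the structure behaves as an LRU list over idle handles.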
In some embodiments of the present application, the method further includes:
When the read-write service of the file is received, the corresponding cache handle may be found in the index container based on the file serial number, and whether the handle flag in the cache handle corresponds to the read-write service needs to be determined, that is, handle flag detection is performed. Due to the difference between reads and writes, different flags are required for IO: write operations require the rw flag, and read operations require the r flag. In response to that the cache handle does not include the required flag, a file handle needs to be reopened based on the flag required by the read-write service.
Therefore, in response to that the handle flag in the cache handle of the file corresponds to the read-write service and the cache handle of the file is in the second queue, the cache handle in the second queue is moved to the first queue, and the read-write service is processed by using the opened corresponding handle of the file.
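The flag detection can be expressed as a small predicate. This is a sketch under the stated convention that an rw handle covers both services while an r handle covers reads only; the function name `needs_reopen` and the string-valued arguments are illustrative assumptions.

```python
def needs_reopen(cached_flag, service):
    """service is 'read' or 'write'. A write service requires an 'rw'
    handle; a read service is satisfied by either an 'r' or an 'rw'
    handle. Returns True when the cached flag does not cover the service,
    in which case the file handle must be reopened."""
    if service == "write":
        return cached_flag != "rw"
    return cached_flag not in ("r", "rw")
```

When the predicate returns False and the handle sits in the second queue, the handle is simply moved back to the first queue and reused without any open call.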
In some embodiments of the present application, the method further includes:
In response to that the handle flag in the cache handle of the file corresponds to the read-write service and the cache handle of the file is in the first queue, it means that another thread is using the corresponding handle at this moment. Therefore, the usage count may be set, and when another thread uses the opened corresponding handle of the file for the read-write service, the usage count of the cache handle of the file may be increased. When the read-write service of a thread is completed, the usage count of the cache handle of the file may be decreased.
In some embodiments of the present application, the method further includes:
In response to that the handle flag in the cache handle of the file does not correspond to the read-write service, a handle needs to be reopened. In this case, in response to that the cache handle of the file is in the second queue, it means that no thread is using the current handle, such that the cache handle may be directly removed from the second queue and the index container. Then, the corresponding handle is closed based on the handle pointer, then the corresponding handle of the file is reopened according to the read-write service, the flag and the pointer of the corresponding handle and the file serial number are encapsulated to obtain a new cache handle of the file, and the new cache handle is saved to the first queue and the index container.
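The reopen path for an idle handle with a mismatched flag can be sketched as follows. The names `reopen_idle_handle`, `open_fn`, and `close_fn` are illustrative assumptions standing in for the distributed file system's own open/close operations.

```python
def reopen_idle_handle(index, second_queue, first_queue, ino, new_flag,
                       open_fn, close_fn):
    """The cached handle is idle (in the second queue) but its flag does
    not cover the requested service: remove it from the second queue and
    the index container, close the old handle via its pointer, reopen with
    the flag the service requires, and cache the new handle as in use by
    placing it in the first queue and the index container."""
    old = index.pop(ino)
    second_queue.remove(old)
    close_fn(old["ptr"])
    new = {"ino": ino, "flag": new_flag,
           "ptr": open_fn(ino, new_flag), "count": 0}
    index[ino] = new
    first_queue.append(new)
    return new
```

This path is only safe because membership in the second queue guarantees no thread is using the old handle at the moment it is closed.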
In some embodiments of the present application, moving the cache handle of the file from the first queue to the second queue further includes:
When the usage count is 0, it means that no thread is using the handle at this time, such that the corresponding cache handle may be moved to the second queue, and the usage time of the cache handle is updated.
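The usage counting described in the preceding paragraphs can be sketched as a pair of functions bracketing each thread's IO. The names `begin_io` and `end_io` and the dict-based handle shape are illustrative assumptions.

```python
import time


def begin_io(handle, first_queue, second_queue):
    """A thread starts a read-write service on an already-open handle:
    if the handle was idle, move it from the second queue to the first
    queue; in all cases, increase the usage count."""
    if handle in second_queue:
        second_queue.remove(handle)
        first_queue.append(handle)
    handle["count"] += 1


def end_io(handle, first_queue, second_queue):
    """A thread finishes its read-write service: decrease the usage count.
    Only when the count drops to 0 (no thread is using the handle) is the
    handle moved back to the head of the second queue, with its usage time
    refreshed."""
    handle["count"] -= 1
    if handle["count"] == 0:
        first_queue.remove(handle)
        handle["use_time"] = time.monotonic()
        second_queue.insert(0, handle)
```

Keeping the handle in the first queue while the count is nonzero is what protects it from the time-based and quantity-based eviction of the second queue.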
By means of the solutions provided in the embodiments of the present application, the index container is used for quick query mapping from files to handles, the first queue is used for handle protection, and the second queue is used for efficient detection of invalid handles, such that the file handle processing pressure and frequency during reading and writing by a distributed file system are effectively reduced, and the latency of each read-write IO is thereby reduced.
Based on the same inventive concept, according to another aspect of the present application, an embodiment of the present application further provides a system 400 for processing a file read-write service. As shown in
In some embodiments of the present application, the system further includes:
In some embodiments of the present application, the system further includes:
In some embodiments of the present application, the system further includes:
In some embodiments of the present application, the system further includes:
In some embodiments of the present application, moving the cache handle of the file from the first queue to the second queue further includes:
In some embodiments of the present application, the system further includes:
By means of the solutions provided in the embodiments of the present application, the index container is used for quick query mapping from files to handles, the first queue is used for handle protection, and the second queue is used for efficient detection of invalid handles, such that the file handle processing pressure and frequency during reading and writing by a distributed file system are effectively reduced, and the latency of each read-write IO is thereby reduced.
Based on the same inventive concept, according to still another aspect of the present application, as shown in
In some embodiments of the present application, the following steps are further included:
In some embodiments of the present application, the following steps are further included:
In some embodiments of the present application, the following steps are further included:
In some embodiments of the present application, the following step is further included:
In some embodiments of the present application, moving the cache handle of the file from the first queue to the second queue further includes:
In some embodiments of the present application, the following steps are further included:
By means of the solutions provided in the embodiments of the present application, the index container is used for quick query mapping from files to handles, the first queue is used for handle protection, and the second queue is used for efficient detection of invalid handles, such that the file handle processing pressure and frequency during reading and writing by a distributed file system are effectively reduced, and the latency of each read-write IO is thereby reduced.
The processor 520 may include one or more processing cores, such as a 4-core or 8-core processor, and the processor 520 may also be a controller, microcontroller, microprocessor, or other data processing chip. The processor 520 may be implemented in at least one hardware form of Digital Signal Processing (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 520 may also include a main processor and a coprocessor. The main processor is a processor configured to process data in an awake state, also known as a Central Processing Unit (CPU), and the coprocessor is a low-power processor configured to process data in a standby state. In some embodiments of the present disclosure, the processor 520 may be integrated with a Graphics Processing Unit (GPU), and the GPU is responsible for rendering and drawing the content that a display screen needs to display. In some embodiments of the present disclosure, the processor 520 may also include an Artificial Intelligence (AI) processor, and the AI processor is configured to process computational operations related to machine learning.
The memory 510 may include one or more non-transitory computer-readable storage media. The memory 510 may include a high-speed Random Access Memory (RAM) and a non-volatile memory such as one or more disk storage apparatuses and a flash memory. In some embodiments of the present disclosure, the memory 510 may be an internal storage unit of the electronic device, such as a hard disk of a server. In other embodiments of the present disclosure, the memory 510 may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, etc., equipped on the server. Further, the memory 510 may include both the internal storage unit of the computer device and the external storage device. The memory 510 may be configured not only to store application software installed in the electronic device and various data, such as the code of a program that performs the method for processing a file read-write service, but also to temporarily store data that has been output or will be output. In some embodiments of the present disclosure, the memory 510 is configured to store at least the following computer program 511. The computer program 511, after being loaded and executed by the processor 520, is capable of implementing the relevant steps of the method for processing a file read-write service disclosed in any of the foregoing embodiments. In addition, the resources stored by the memory 510 may also include an operating system and data, etc., and the storage may be transient or permanent. The operating system may include Windows, Unix, Linux, etc.
In some embodiments of the present disclosure, the computer device may also include a display screen, an input-output interface, a communication interface or a network interface, a power supply, and a communication bus. The display screen and the input-output interface, such as a keyboard, are user interfaces, and the user interfaces may also include standard wired interfaces, wireless interfaces, etc. In some embodiments of the present disclosure, the display may be a Light-Emitting Diode (LED) display, a liquid crystal display, a touch liquid crystal display, an Organic Light-Emitting Diode (OLED) touch device, etc. The display, which may also be referred to as a display screen or a display unit, is configured to display information processed in the electronic device and to display a visualized user interface. In some embodiments of the present disclosure, the communication interface may include wired interfaces and/or wireless interfaces, such as WI-FI interfaces, Bluetooth interfaces, etc., which are commonly configured to establish communication connections between the electronic device and other electronic devices. The communication bus may be either a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The bus may be classified into an address bus, a data bus, a control bus, etc.
Based on the same inventive concept, according to yet still another aspect of the present application, as shown in
In some embodiments of the present application, the following steps are further included:
In some embodiments of the present application, the following steps are further included:
In some embodiments of the present application, the following steps are further included:
In some embodiments of the present application, the following step is further included:
In some embodiments of the present application, moving the cache handle of the file from the first queue to a second queue further includes:
In some embodiments of the present application, the following steps are further included:
By means of the solutions provided in the embodiments of the present application, the index container is used for quick query mapping from files to handles, the first queue is used for handle protection, and the second queue is used for efficient detection of invalid handles, such that the file handle processing pressure and frequency during reading and writing by a distributed file system are effectively reduced, and the latency of each read-write IO is thereby reduced.
Finally, it is to be noted that a person of ordinary skill in the art may appreciate that all or part of the flow of the above method embodiment may be implemented by a computer program instructing associated hardware. The program may be stored on a non-transitory computer-readable storage medium, and, when executed, may include the flow of the above method embodiments.
In addition, it should be understood that the non-transitory computer-readable storage medium (for example, a memory) herein may be a volatile memory or a non-volatile memory, or may include both a volatile memory and a non-volatile memory.
The non-transitory computer-readable storage medium includes various media capable of storing program codes, such as a USB flash disk, a mobile hard disk, a Read-Only Memory (ROM), a RAM, an electrically erasable programmable ROM, a register, a hard disk, a multimedia card, a card-type memory (such as an SD or DX memory), a magnetic memory, a removable disk, a CD-ROM, a magnetic disk, or an optical disk.
It will also be appreciated by a person skilled in the art that the various exemplary logic blocks, components, circuits, and algorithmic steps described in conjunction with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both. In order to clearly illustrate this interchangeability of hardware and software, a general description of the various illustrative components, blocks, circuits, and steps has been provided with respect to their functionality. Whether such functionality is implemented as software or as hardware depends on the specific application and the design constraints imposed on the overall system. The functionality may be implemented in various ways by a person skilled in the art for each specific application, but such implementation decisions should not be construed to cause a departure from the scope of the disclosure of the embodiments of the present application.
The above are exemplary embodiments of the present application, but it should be noted that various changes and modifications may be made without deviating from the scope of disclosure of the embodiments of the present application as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements according to the embodiments of the present application may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that the term “and/or” as used herein refers to any or all possible combinations including one or more associated listed items.
The serial numbers of the embodiments of the present application are for description only and do not represent the merits of the embodiments.
A person of ordinary skill in the art may appreciate that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be completed by a program instructing relevant hardware. The program may be stored in a non-transitory computer-readable storage medium which may be a read-only memory, a magnetic disk or a compact disk, etc.
A person of ordinary skill in the art may appreciate that the above discussion of any embodiments is intended to be exemplary only, and is not intended to suggest that the scope (including the claims) of the embodiments of the present application is limited to these examples; and combinations of features in the above embodiments or in different embodiments are also possible within the framework of the embodiments of the present application, and many other variations of different aspects of the embodiments of the present application as described above are possible, which are not provided in detail for the sake of clarity. Therefore, any omission, modification, equivalent substitution, improvement, etc. made within the spirit and principles of the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
Number | Date | Country | Kind |
---|---|---|---|
202110853962.1 | Jul 2021 | CN | national |
This application is a National Stage Application of International Application No. PCT/CN2021/121898, filed 29 Sep. 2021, which claims the benefit of Serial No. 202110853962.1 filed on Jul. 28, 2021 in China, and which applications are incorporated herein by reference. To the extent appropriate, a claim of priority is made to each of the above disclosed applications.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2021/121898 | 9/29/2021 | WO |