The present application relates to, but is not limited to, a field of storage technology, and in particular, to a flash file system and a data management method thereof.
A flash memory is an electrically erasable programmable memory which, compared with conventional disk media, has the characteristics of high read/write bandwidth, low access latency, low power consumption and high stability. Currently, flash memory is increasingly popular in data centers, personal computers, and mobile devices. A flash memory conducts read and write operations in units of pages, and a page needs to be erased before being rewritten. Erasure in the flash memory is conducted in units of blocks, where a flash block contains hundreds of flash pages. Each unit of the flash memory can withstand a limited number of erase operations, i.e., each flash unit has a limited lifetime.
In a file system, a page cache is used for caching the most recently manipulated data to speed up the read and write process. When data needs to be read, it is first determined whether the content resides in the page cache in memory. If so, the data is directly returned; if not, the data is read from the flash memory. When a write operation is required, the data is no longer written directly into the device; instead, it is written into a page of the page cache, which is then marked as a dirty page, and the operation returns directly. A dirty page of the page cache is written into the flash memory device when a user issues a synchronous call or an operating system background thread initiates a synchronous operation.
The following is a summary of the subject matter described in detail in the disclosure. This summary is not intended to limit the scope of claims.
Embodiments of the disclosure provide a flash file system and a data management method thereof that can avoid unnecessary data writing.
Technical solutions in the embodiments of the present disclosure are implemented as follows.
In an embodiment of the present disclosure, there is provided a flash file system, including: a creation module, a marking module, a synchronization module and a backfilling module, wherein the creation module is configured to divide a flash memory into a file system region and a flash buffer region when a file system is created; the marking module is configured to mark written data as dirty data in a memory buffer when the data are written and an amount of the written data is less than or equal to a preset marking threshold, wherein the marking threshold is used to indicate an amount of data that are written into the memory buffer and need to be marked according to data granularity; the synchronization module is configured to write, when data synchronization is required, the dirty data into the flash buffer region after merging all the dirty data or the dirty data of a file to be synchronized in the memory buffer, and notify the backfilling module when the flash buffer region is full; and the backfilling module is configured to read the dirty data in the flash buffer region when a notification is received from the synchronization module, write the dirty data into the file system region, and erase the flash buffer region.
In an embodiment, the flash buffer region includes a first flash buffer region and a second flash buffer region, wherein the synchronization module is configured to: write the dirty data into the first flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required; send a first notification to the backfilling module when the first flash buffer region is full, and write the dirty data into the second flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required; and send a second notification to the backfilling module when the second flash buffer region is full, and write the dirty data into the first flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required; the backfilling module is configured to: read the dirty data in the first flash buffer region when the first notification is received from the synchronization module, write the dirty data into the file system region, and erase the first flash buffer region; and read the dirty data in the second flash buffer region when the second notification is received from the synchronization module, write the dirty data into the file system region, and erase the second flash buffer region.
In an embodiment, the marking module is configured to: encapsulate, when written data is present and an amount of the written data is less than or equal to the marking threshold, an inode number of a file corresponding to the written data, a page number of a data segment, a page offset, a length of the data segment, and data of the data segment as records, and add the records to a preset dirty data list; and increase a reference count of a memory buffer page corresponding to the written data by one.
In an embodiment, the synchronization module is configured to: search for all records of the file corresponding to the written data according to the inode number of the file, request a new memory page, sequentially copy contents of a plurality of records to the new memory page, and sequentially write the contents in the new memory page into the flash buffer region.
In an embodiment, the system further includes: a recovery module configured to detect whether dirty data is present in the flash buffer region when the flash file system is restarted; and read all the dirty data in the flash buffer region if dirty data is present in the flash buffer region, and update content of the memory buffer according to each piece of the dirty data.
In an embodiment of the present disclosure, there is further provided a data management method of a flash file system, including: dividing a flash memory into a file system region and a flash buffer region when a file system is created; marking written data as dirty data in a memory buffer when the data are written and an amount of the written data is less than or equal to a preset marking threshold, wherein the marking threshold is used to indicate an amount of data that are written into the memory buffer and need to be marked according to data granularity; writing, when data synchronization is required, the dirty data into the flash buffer region after merging all the dirty data or the dirty data of a file to be synchronized in the memory buffer; and reading the dirty data in the flash buffer region when the flash buffer region is full, writing the dirty data into the file system region, and erasing the flash buffer region.
In an embodiment, the flash buffer region includes a first flash buffer region and a second flash buffer region, wherein the dirty data is written into the first flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required; the second flash buffer region is configured, when the first flash buffer region is full, as a current buffer used for writing data when data synchronization is required, while the dirty data in the first flash buffer region is read and written into the file system region, and the first flash buffer region is erased; and the first flash buffer region is configured, when the second flash buffer region is full, as the current buffer used for writing data when data synchronization is required, while the dirty data in the second flash buffer region is read and written into the file system region, and the second flash buffer region is erased.
In an embodiment, marking the written data as dirty data in the memory buffer includes: encapsulating an inode number of a file corresponding to the written data, a page number of a data segment, a page offset, a length of the data segment, and data of the data segment as records, and adding the records to a preset dirty data list; and increasing a reference count of a memory buffer page corresponding to the written data by one.
In an embodiment, writing the dirty data in the dirty data list into the flash buffer region after merging the dirty data includes: searching for all records of the file corresponding to the written data according to the inode number of the file, requesting a new memory page, sequentially copying contents of a plurality of records to the new memory page, and sequentially writing the contents in the new memory page into the flash buffer region.
In an embodiment, the data management method further includes: detecting whether dirty data is present in the flash buffer region when the flash file system is restarted; and reading all the dirty data in the flash buffer region if dirty data is present in the flash buffer region, and updating content of the memory buffer according to each piece of the dirty data.
The flash file system and the data management method thereof according to the embodiments of the present disclosure avoid unnecessary data writing by marking the dirty data and writing the dirty data into the flash memory after merging the dirty data, thereby reducing the latency of synchronous operations and extending the lifetime of the flash memory.
Further, by providing the first and the second flash buffer regions, when the system backfills one of the flash buffer regions, the other acts as the current buffer into which the synchronous operations during this period are sequentially written, thereby avoiding a situation where the whole system stalls waiting for the backfill to complete. Alternating use of the two buffer regions ensures normal operation of the system.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
It should be understood that in the description of embodiments of the present disclosure, orientations or positions referred to by terms “central”, “longitudinal”, “lateral”, “upper”, “lower”, “front”, “back”, “left”, “right”, “vertical”, “horizontal”, “top”, “bottom”, “inside”, “outside” and the like are based on the orientations or positions shown in the drawings, and are used merely for facilitating and simplifying the description of the embodiments of the disclosure, instead of indicating or implying that the device or component referred to must have a particular orientation or be configured and operated at a particular orientation, and thus cannot be interpreted as limitations to the present disclosure. Moreover, terms “first”, “second”, and the like are used for the purpose of illustration only and cannot be construed as indicating or implying relative importance.
As used in the description of the embodiments of the disclosure, it is to be noted that the terms “install”, “connected to”, and “connect” are to be interpreted broadly, and may refer to, for example, a fixed connection, a removable connection, or an integral connection; a mechanical connection or an electrical connection; or a direct connection, an indirect connection via an intermediary, or internal communication between two elements, unless explicitly stated or defined otherwise. Those of ordinary skill in the art may understand the specific meanings of the above terms in the embodiments of the present disclosure according to the specific context.
These and other aspects of the embodiments of disclosure will become apparent with reference to the following description and drawings. In the description and drawings, some particular implementations of the embodiments of the disclosure are disclosed to show some manners for implementing principles of the present disclosure. However, it should be understood that the embodiments of the present disclosure are not limited thereto. Rather, the embodiments of the present disclosure are intended to cover all variations, modifications and equivalents within the scope of the following claims.
Since a write operation may mark an entire page as a dirty page even if the write operation involves only a small portion of the page, the entire page is written into the flash memory device when a synchronous operation is performed. As a result, the amount of written data is greatly increased, which not only increases the latency of the synchronous operation and reduces system performance, but also increases wear of the flash memory device and greatly reduces its lifetime.
On this basis, as shown in
The creation module 11 is configured to divide a flash memory into a file system region and a flash buffer region when a file system is created.
The marking module 12 is configured to mark written data as dirty data in a memory buffer when the data are written and an amount of the written data is less than or equal to a preset marking threshold, wherein the marking threshold is used to indicate an amount of data that are written into the memory buffer and need to be marked according to data granularity.
The synchronization module 13 is configured to write, when data synchronization is required, the dirty data into the flash buffer region after merging all the dirty data or the dirty data of a file to be synchronized in the memory buffer, and notify the backfilling module when the flash buffer region is full.
The backfilling module 14 is configured to read the dirty data in the flash buffer region when a notification is received from the synchronization module, write the dirty data into the file system region, and erase the flash buffer region.
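The interplay of the four modules can be illustrated with a minimal sketch. All names here (FlashFileSystem, MARK_THRESHOLD, BUFFER_CAPACITY) are illustrative assumptions for exposition, not any real implementation of the claimed system.

```python
MARK_THRESHOLD = 2048   # bytes; an assumed preset marking threshold
BUFFER_CAPACITY = 4     # records; an assumed flash buffer region capacity


class FlashFileSystem:
    def __init__(self):
        # Creation module: divide the flash memory into two regions.
        self.file_system_region = {}   # inode -> data in the file system region
        self.flash_buffer_region = []  # staging area for merged dirty data
        self.dirty_list = []           # marking module's dirty data list

    def write(self, inode, data):
        # Marking module: small writes are only marked as dirty records.
        if len(data) <= MARK_THRESHOLD:
            self.dirty_list.append((inode, data))
        else:
            # Large writes follow the normal IO path (not modeled here).
            self.file_system_region[inode] = data

    def sync(self):
        # Synchronization module: merge dirty records per file, then stage
        # the merged data in the flash buffer region.
        merged = {}
        for inode, data in self.dirty_list:
            merged[inode] = merged.get(inode, b"") + data
        self.dirty_list.clear()
        self.flash_buffer_region.extend(merged.items())
        if len(self.flash_buffer_region) >= BUFFER_CAPACITY:
            self.backfill()

    def backfill(self):
        # Backfilling module: read the staged data, write it into the file
        # system region, then erase the flash buffer region.
        for inode, data in self.flash_buffer_region:
            self.file_system_region[inode] = (
                self.file_system_region.get(inode, b"") + data)
        self.flash_buffer_region.clear()
```

A small write is therefore staged twice (dirty list, then flash buffer region) before landing in the file system region, which is what allows merging before any flash write occurs.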
It is to be noted that the dirty data in embodiments of the present disclosure refers to data in the memory buffer that has been modified by a process. The file system uses pages as units of the memory buffer, and a page is marked as a dirty page when a process modifies the data in that page of the memory buffer. In an embodiment of the present disclosure, the written data is marked as dirty data at byte granularity, thereby avoiding unnecessary data writing.
In an embodiment, a size of the flash buffer region is specified by a user or preset by the system.
In an embodiment, if the size of the flash buffer region is specified by a user, a separate region is divided from the flash memory device as a buffer region when the file system is created and mounted, according to a size parameter of the buffer region passed in by the user. When the file system performs physical space allocation, none of the allocated space is within the flash buffer region. Therefore, the flash buffer region is not indexed by the file system.
In an embodiment, the flash buffer region includes a first flash buffer region and a second flash buffer region.
The synchronization module is configured to: write the dirty data into the first flash buffer region after merging all the dirty data in the memory buffer or the dirty data of a file to be synchronized in the memory buffer when data synchronization is required; send a first notification to the backfilling module when the first flash buffer region is full, and write the dirty data into the second flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required; and send a second notification to the backfilling module when the second flash buffer region is full, and write the dirty data into the first flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required.
The backfilling module is configured to: read the dirty data in the first flash buffer region when the first notification is received from the synchronization module, write the dirty data into the file system region, and erase the first flash buffer region; and read the dirty data in the second flash buffer region when the second notification is received from the synchronization module, write the dirty data into the file system region, and erase the second flash buffer region.
By providing two flash buffer regions, when the system backfills one of the flash buffer regions, the other acts as the current buffer into which the synchronous operations during this period are sequentially written, thereby avoiding a situation where the whole system stalls waiting for the backfill to complete. Alternating use of the two buffer regions ensures normal operation of the system.
In an embodiment, the memory buffer is a page cache.
In an embodiment, the marking module 12 is further configured to perform processing according to a current input/output (IO) path when written data is present and the amount of the written data is greater than the preset marking threshold.
In an embodiment of the disclosure, performing processing according to the current input/output (IO) path includes: writing the written data into a page cache, marking a page corresponding to the data as a dirty page, and returning directly.
In an embodiment, a size of the marking threshold may be set according to a specific accelerated reading process. For example, the size of the marking threshold may be set to half of the size of a memory page (4096 × 50% = 2048 bytes), or to 80% of the size of a memory page (4096 × 80% ≈ 3277 bytes).
In an embodiment, the marking module 12 is configured to: when written data is present and an amount of the written data is less than or equal to the preset marking threshold, encapsulate an inode number of a file corresponding to the written data, a page number of a data segment, a page offset, a length of the data segment, and data of the data segment as records, i.e., in a form of &lt;inode number, page number, page offset, length, data&gt;, and add the records to a preset dirty data list; and increase a reference count of a corresponding page cache page by one.
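The record form above can be sketched as a small data structure. The field and function names (DirtyRecord, mark_dirty) are illustrative assumptions; the dirty list and reference counts stand in for the file system's internal state.

```python
from dataclasses import dataclass


# A sketch of the dirty-data record form
# <inode number, page number, page offset, length, data>.
@dataclass
class DirtyRecord:
    inode: int     # inode number of the file being written
    page: int      # page number of the data segment within the file
    offset: int    # byte offset of the segment within the page
    length: int    # length of the data segment in bytes
    data: bytes    # the segment itself (or a pointer to the cache page)


def mark_dirty(dirty_list, refcounts, record):
    """Add a record to the dirty data list, and pin the corresponding
    page cache page by raising its reference count by one, so the page
    is neither reclaimed nor written back as a whole dirty page."""
    dirty_list.append(record)
    key = (record.inode, record.page)
    refcounts[key] = refcounts.get(key, 0) + 1
```

Raising the reference count, rather than marking the page dirty, is what keeps the page cached for fast reads while preventing a whole-page writeback.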
It is to be noted that the marking module 12 of the embodiment of the present disclosure may mark dirty data using a preset dirty data list, or using other methods. The dirty data list may be in a form of any data structure, such as an array, a tree list, a linked list, or the like.
It is to be noted that when the marking module 12 adds the written data into the dirty data list, the corresponding page cache page is not marked as a dirty page. Instead, the reference count of the corresponding page cache page is forcibly increased by one, so that the written data in the page cache is not written into the flash memory device, and this portion of the page cache is forcibly retained for fast reading.
In an embodiment, the data of the data segment in the records may be specific data of the data segment, or may be a data pointer to a corresponding page of the page cache.
In an embodiment, the marking module 12 uses a radix_tree and a linked list to organize and manage all records of the same file. The radix_tree facilitates retrieval, while the linked list facilitates traversal. The radix_tree is a data structure used as a storage method in Linux file systems. The tree structure mainly contains three data pointers: a root data pointer pointing to the root node of the tree; a free data pointer pointing to a free node linked list; and a start data pointer pointing to a free memory block. The nodes in use are connected to each other using parent, left, and right data pointers, while the free nodes are connected into a linked list by the right data pointer. An inode is a data structure used in many Unix-like file systems. Each inode saves metadata for a file system object in the file system, but contains neither the file data nor the file name.
As shown in
Upon receiving a write request, the marking module 12 is configured to retrieve in the radix_tree as shown in
new offset = min(old offset, current offset)
new length = max(old offset + old length, current offset + current length) − new offset
where new offset indicates a page offset of the new record, old offset indicates a page offset of the original record, current offset indicates a page offset of the current write operation, new length indicates a length of the data segment of the new record, old length indicates a length of the data segment of the original record, and current length indicates a length of the data segment of the current write operation. One or more item values of the new record obtained based on the above conditions are inserted into the radix_tree and the linked list. The memory page in the page cache is then updated, but the page is no longer marked as dirty by the system, so as to prevent the file system from writing the entire memory page into the flash memory device. Since the page is no longer marked as dirty, it risks being reclaimed by the file system at any time. In order to maintain efficient reading and consistency of data, the reference count of the memory page is forcibly increased by one, thereby ensuring that it will not be reclaimed. In this way, subsequent read operations still read through the page in the page cache.
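The merge formulas above transcribe directly into code. The function name is an illustrative assumption; the formulas presuppose that the original and current segments overlap or are adjacent within the same page (for disjoint segments, the merged length would also cover the gap between them).

```python
def merge_segment(old_offset, old_length, cur_offset, cur_length):
    """Merge an existing dirty-data record with the current write to the
    same page, per: new offset = min(old, current) and
    new length = max(old end, current end) - new offset."""
    new_offset = min(old_offset, cur_offset)
    new_length = max(old_offset + old_length,
                     cur_offset + cur_length) - new_offset
    return new_offset, new_length
```

For example, merging a recorded segment covering bytes [100, 150) with a current write covering [120, 220) yields offset 100 and length 120.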
In an embodiment, the synchronization module 13 is configured to: search for all records of the file corresponding to the written data according to the inode number of the file, request a new memory page, sequentially copy contents of a plurality of records to the new memory page, and sequentially write the contents in the new memory page into the flash buffer region.
In an embodiment, as shown in
In an embodiment, when the content written from the new memory page into the flash buffer region occupies less than one page, or does not have a size that is an integer multiple of the memory page size, padding data is appended so that the written content occupies an entire page or an integer multiple of the memory page size.
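The padding step amounts to rounding the byte count up to a page multiple. The function name and the 4096-byte page size are illustrative assumptions.

```python
PAGE_SIZE = 4096  # assumed memory/flash page size in bytes


def pad_to_page_multiple(length):
    """Round a byte count up to an integer multiple of the page size,
    as done when appending padding data before writing merged content
    into the flash buffer region. Even an empty or sub-page write is
    padded to occupy at least one full page."""
    if length % PAGE_SIZE == 0:
        return max(length, PAGE_SIZE)
    return ((length // PAGE_SIZE) + 1) * PAGE_SIZE
```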
In an embodiment, referring to
In an embodiment, referring to
When an unexpected event such as a sudden power failure occurs, a system failure recovery is required. In an embodiment, as shown in
In an embodiment of the present disclosure, as shown in
At step S801, a flash memory is divided into a file system region and a flash buffer region when a file system is created.
At step S802, when data are written and an amount of the written data is less than or equal to a preset marking threshold, the written data is marked as dirty data in a memory buffer, wherein the marking threshold is used to indicate an amount of data that are written into the memory buffer and need to be marked according to data granularity.
At step S803, after merging all the dirty data or the dirty data of a file to be synchronized in the memory buffer when data synchronization is required, the dirty data is written into the flash buffer region.
At step S804, when the flash buffer region is full, the dirty data in the flash buffer region is read, the dirty data is written into the file system region, and the flash buffer region is erased.
It is to be noted that the dirty data in embodiments of the present disclosure refers to data in the memory buffer that has been modified by a process. The file system uses pages as units of the memory buffer, and a page is marked as a dirty page when a process modifies the data in that page of the memory buffer. In an embodiment of the present disclosure, the written data is marked as dirty data at byte granularity, thereby avoiding unnecessary data writing.
In an embodiment, a size of the flash buffer region is specified by a user or preset by the system.
In an embodiment, if the size of the flash buffer region is specified by a user, a separate region is divided from the flash memory device as a buffer region when the file system is created and mounted, according to a size parameter of the buffer region passed in by the user. When the file system performs physical space allocation, none of the allocated space is within the flash buffer region. Therefore, the flash buffer region is not indexed by the file system.
In an embodiment, the flash buffer region includes a first flash buffer region and a second flash buffer region.
The dirty data is written into the first flash buffer region after merging all the dirty data in the memory buffer or the dirty data of the file to be synchronized in the memory buffer when data synchronization is required.
The second flash buffer region is configured, when the first flash buffer region is full, as a current buffer used for writing data when data synchronization is required, while the dirty data in the first flash buffer region is read and written into the file system region, and the first flash buffer region is erased.
The first flash buffer region is configured, when the second flash buffer region is full, as the current buffer used for writing data when data synchronization is required, while the dirty data in the second flash buffer region is read and written into the file system region, and the second flash buffer region is erased. By providing two flash buffer regions, when the system backfills one of the flash buffer regions, the other acts as the current buffer into which the synchronous operations during this period are sequentially written, thereby avoiding a situation where the whole system stalls waiting for the backfill to complete. Alternating use of the two buffer regions ensures normal operation of the system.
In an embodiment, the data management method further includes performing processing according to a current input/output (IO) path when written data is present and the amount of the written data is greater than the marking threshold.
In an embodiment of the disclosure, performing processing according to the current input/output (IO) path includes: writing the written data into a page cache, marking a page corresponding to the data as a dirty page, and returning directly.
In an embodiment, the memory buffer is a page cache.
In an embodiment, a size of the marking threshold may be set according to a specific accelerated reading process. For example, the size of the marking threshold may be set to half of the size of a memory page (4096 × 50% = 2048 bytes), or to 80% of the size of a memory page (4096 × 80% ≈ 3277 bytes).
In an embodiment, marking the written data as dirty data in the memory buffer includes: encapsulating an inode number of a file corresponding to the written data, a page number of a data segment, a page offset, a length of the data segment, and data of the data segment as records, i.e., in a form of &lt;inode number, page number, page offset, length, data&gt;, and adding the records to a preset dirty data list; and increasing a reference count of a corresponding page cache page by one.
It is to be noted that the embodiment of the present disclosure may mark dirty data using a preset dirty data list, or using other methods. The dirty data list may be in a form of any data structure, such as an array, a tree list, a linked list, or the like.
It is to be noted that in the data management method of the embodiment of the present disclosure, when the written data is added into the dirty data list, the corresponding page cache page is not marked as a dirty page. Instead, the reference count of the corresponding page cache page is forcibly increased by one, so that the written data in the page cache is not written into the flash memory device, and this portion of the page cache is forcibly retained for fast reading.
In an embodiment, the data of the data segment in the record may be specific data of the data segment, or may be a data pointer to a corresponding page of the page cache.
In an embodiment, the data management method uses a radix_tree and a linked list to organize and manage all records of the same file. The radix_tree facilitates retrieval, while the linked list facilitates traversal. The radix_tree is a data structure used as a storage method in Linux file systems. The tree structure mainly contains three data pointers: a root data pointer pointing to the root node of the tree; a free data pointer pointing to a free node linked list; and a start data pointer pointing to a free memory block. The nodes in use are connected to each other using parent, left, and right data pointers, while the free nodes are connected into a linked list by the right data pointer. An inode is a data structure used in many Unix-like file systems. Each inode saves metadata for a file system object in the file system, but contains neither the file data nor the file name.
As shown in
Each record contains 5 elements: an inode number, a page number, a page offset, a length of the data segment, and a data pointer to a corresponding page of the page cache, i.e., in the form of <inode number, page number, offset, length, data pointer>. To facilitate traversal, all records of the same file are linked by a linked list. As shown in
Upon data writing, a retrieval in the radix_tree as shown in
new offset = min(old offset, current offset)
new length = max(old offset + old length, current offset + current length) − new offset
where new offset indicates a page offset of the new record, old offset indicates a page offset of the original record, current offset indicates a page offset of the current write operation, new length indicates a length of the data segment of the new record, old length indicates a length of the data segment of the original record, and current length indicates a length of the data segment of the current write operation. One or more item values of the new record obtained based on the above conditions are inserted into the radix_tree and the linked list. The memory page in the page cache is then updated, but the page is no longer marked as dirty by the system, so as to prevent the file system from writing the entire memory page into the flash memory device. Since the page is no longer marked as dirty, it risks being reclaimed by the file system at any time. In order to maintain efficient reading and consistency of data, the reference count of the memory page is forcibly increased by one, thereby ensuring that it will not be reclaimed. In this way, subsequent read operations still read through the page in the page cache.
In an embodiment, writing the dirty data of the file to be synchronized into the flash buffer region after merging the dirty data includes: searching for all records of the file corresponding to the written data according to the inode number of the file, requesting a new memory page, sequentially copying contents of a plurality of records to the new memory page, and sequentially writing the contents in the new memory page into the flash buffer region.
In an embodiment, as shown in
In an embodiment, when the content written from the new memory page into the flash buffer region occupies less than one page, or does not have a size that is an integer multiple of the memory page size, padding data is appended so that the written content occupies an entire page or an integer multiple of the memory page size.
In an embodiment, when the current buffer region is full, as shown in
In an embodiment, the data management method further includes detecting whether dirty data is present in the flash buffer region when the flash file system is restarted; and reading all the dirty data in the flash buffer region if dirty data is present in the flash buffer region, and updating content of the memory buffer according to each piece of the dirty data.
When an unexpected event such as a sudden power failure occurs, a system failure recovery is required. In an embodiment, as shown in
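The recovery step described above, replaying the surviving records from the flash buffer region into the memory buffer after a restart, can be sketched as follows. The record layout follows the &lt;inode number, page number, page offset, length, data&gt; form; the function name and the page-keyed cache are illustrative assumptions.

```python
PAGE_SIZE = 4096  # assumed memory page size in bytes


def recover(flash_buffer_records, page_cache):
    """Failure recovery sketch: apply each dirty record that survived in
    the flash buffer region to the in-memory page cache, updating only
    the byte range each record covers."""
    for inode, page, offset, length, data in flash_buffer_records:
        key = (inode, page)
        # Start from the cached page if present, else a zero-filled page.
        buf = bytearray(page_cache.get(key, bytes(PAGE_SIZE)))
        buf[offset:offset + length] = data[:length]
        page_cache[key] = bytes(buf)
    return page_cache
```

Because each record carries its own offset and length, replay restores exactly the modified byte ranges rather than whole pages.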
The flash file system and the data management method thereof according to the embodiments of the present disclosure avoid unnecessary data writing by marking the dirty data and writing the dirty data into the flash memory after merging the dirty data, thereby reducing the latency of synchronous operations and extending the lifetime of the flash memory.
In an embodiment, by providing the first and the second flash buffer regions, when the system backfills one of the flash buffer regions, the other acts as the current buffer into which the synchronous operations during this period are written sequentially, thereby avoiding a case where the whole system stalls waiting for the backfill to finish. Alternating use of the two buffer regions ensures normal operation of the system.
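A hedged sketch of this double-buffer scheme follows. The `DoubleBuffer` class is a hypothetical name, the regions are modeled as lists, and concurrency details (the backfill would normally run in a background thread) are simplified away.

```python
class DoubleBuffer:
    """Two flash buffer regions that alternate roles: one accepts
    synchronous writes while the other is backfilled and erased."""

    def __init__(self):
        self.regions = [[], []]
        self.current = 0                # index of the region accepting writes

    def write(self, page):
        self.regions[self.current].append(page)

    def swap_and_backfill(self, file_system_region):
        """Make the other region current, then drain the full one into
        the file system region and erase it."""
        full = self.current
        self.current ^= 1                           # writes continue elsewhere
        file_system_region.extend(self.regions[full])
        self.regions[full].clear()                  # erase the backfilled region

db = DoubleBuffer()
db.write(b"page-1")
fs_region = []
db.swap_and_backfill(fs_region)   # backfill runs while...
db.write(b"page-2")               # ...new writes land in the other region
```

The swap-before-drain ordering is what lets synchronization proceed during the backfill instead of stalling.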
In an embodiment of the present disclosure, there is further provided a computer readable storage medium storing computer executable instructions for implementing, when executed by a processor, the method of the embodiment as described above.
Those of ordinary skill in the art will appreciate that all or some steps of the above described method, functional modules/units in the system and apparatus may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical units; for example, a physical component may have multiple functions, or a function or step may be performed cooperatively by several physical components. Some or all components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or implemented as hardware, or implemented as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on a computer readable medium which may include a computer storage medium (or non-transitory medium) and communication medium (or transitory medium). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable medium implemented in any method or technology for storing information, such as computer readable instructions, data structures, program modules or other data. A computer storage medium includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical disc storage, magnetic cartridge, magnetic tape, magnetic disk storage or other magnetic storage devices, or may be any other medium used for storing the desired information and accessible by a computer.
Moreover, it is well known to those skilled in the art that communication medium typically includes a computer readable instruction, a data structure, a program module, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery medium.
The descriptions above are merely optional embodiments of the present disclosure, which are not intended to limit the present disclosure. For those skilled in the art, the present disclosure may have various changes and variations. Any amendments, equivalent substitutions, improvements, etc. within the principle of the disclosure are all included in the scope of the protection of the disclosure. One of ordinary skill in the art would appreciate that all or part of the steps described above may be implemented by a program stored in a computer readable storage medium for instructing the associated hardware, such as a read-only memory, a magnetic or optical disk, and the like. In an embodiment, all or part of the steps in the above embodiments may also be implemented by one or more integrated circuits. Accordingly, respective modules/units in the above embodiments may be implemented in the form of hardware, or in the form of a software function module. The present disclosure is not limited to any particular combination form of hardware and software.
The embodiments of the present disclosure avoid unnecessary data writing, thereby reducing the latency of synchronous operations and extending the lifetime of the flash memory. Further, alternating use of the two buffer regions ensures normal operation of the system.
Number | Date | Country | Kind |
---|---|---|---|
201710066027.4 | Feb 2017 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2018/075376 | 2/6/2018 | WO | 00 |