This application claims priority and benefit to Chinese Application No. 201811043185.9, filed on Sep. 7, 2018, the entire content of which is incorporated herein by reference.
Embodiments of the present disclosure mainly relate to the field of computer technology, and more particularly to a sample processing method, a sample processing device, a related apparatus and a computer readable storage medium.
With the popularity of artificial intelligence, machine learning technologies have become more and more widely used. Machine learning uses statistical techniques to provide a computer system with the ability to "learn" from data (e.g., the ability to incrementally improve performance on specific tasks). Supervised learning is a type of machine learning task that learns, from sample input-output pairs, a function for mapping an input to an output. In supervised learning, the function may be inferred from annotated training data (i.e., annotated samples) consisting of a set of training examples.
According to exemplary embodiments of the present disclosure, there is provided a sample processing method.
In a first aspect, there is provided a sample processing method. The method includes determining a feature representation of samples included in a sample set, each of the samples having a pre-annotated category; performing a clustering on the samples based on the feature representation, to obtain a cluster including one or more of the samples; determining a purity of the cluster based on categories of the samples included in the cluster, the purity indicating a degree of disorder of the categories of the samples included in the cluster; and determining filtered samples from the samples included in the cluster based on the purity.
In a second aspect, there is provided an electronic device. The electronic device includes one or more processors and a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect of the present disclosure.
In a third aspect, there is provided a computer readable storage medium having a computer program stored thereon. When the computer program is executed by a processor, the method of the first aspect of the present disclosure is implemented.
It should be understood that this summary is not intended to identify key or important features of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will be readily understood from the following description.
The above and additional features, aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following description made with reference to the drawings. In the drawings, several embodiments of the present disclosure are illustrated by way of example and not by way of limitation, in which:
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are illustrated in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are illustrative only and are not intended to limit the scope of the present disclosure.
In the description of the embodiments of the present disclosure, the term “comprises” and the like are understood as open-ended, i.e., “including but not limited to”. The term “based on” should be understood as “based at least in part”. The term “one embodiment” or “an embodiment” should be understood to mean “at least one embodiment.” The terms “first,” “second,” and the like may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
In the description of embodiments of the present disclosure, as understood by those skilled in the art, the term "clustering" may refer to a process of attempting to divide samples included in a sample set into subsets that are generally disjoint. Each subset may be referred to as a "cluster," and each cluster may correspond to some potential concept. It should be noted that, although the samples included in the sample set may have pre-annotated categories in the present disclosure, the categories may not be used in the clustering process. In addition, in some cases, a resulting cluster (which may be regarded as another sample set) may be further clustered to obtain subsets (also referred to as sub-clusters) each including one or more samples of the resulting cluster.
In the description of embodiments of the present disclosure, as understood by those skilled in the art, the term "neural network" may refer to a broadly parallel interconnected network composed of adaptive simple units, whose organization simulates the interaction of a biological nervous system with real-world objects. The most basic component of the neural network is the "neuron" model, i.e., the "simple unit" mentioned above.
Traditionally, there are two schemes for "filtering" or screening sample data, namely a scheme of sample filtering based on manual rules and a scheme of sample filtering based on semi-supervised learning. In the scheme based on manual rules, it is necessary to manually find rules describing error samples, to construct templates of the error samples, and to remove the error samples (or noise) by template matching to obtain filtered samples. Constructing the rules may be costly, and the scheme has a limited application scope. Therefore, this scheme is only applicable to samples that follow an explicit rule and can be represented by a template.
In the scheme based on semi-supervised learning, a machine learning model is trained with a small number of high-quality samples selected in advance. The trained model is then used to predict the full sample set, and samples with a high degree of confidence are selected and added into a high-quality sample set. This scheme relies on the quality of the initial training samples on the one hand, and the selected high-quality samples are likely to fit the initial training samples on the other hand, such that it is difficult to cover the entire sample space.
A supervised learning task often requires large-scale, high-precision annotated samples, and the quality of the annotated samples may affect the learning effect of the supervised learning. Due to the high cost and inefficiency of manually annotating samples, automatic (mechanical) annotation may be employed in many applications. Automatic annotation can annotate data at a large scale, but may have lower accuracy than manual annotation, which limits the training effect of the machine learning model. Therefore, an efficient sample filtering and screening method may be needed to improve the quality of annotated samples for machine learning, particularly supervised learning.
According to embodiments of the present disclosure, a sample processing method is proposed to determine high-quality samples from a full set of samples with pre-annotated categories. In the method, the samples included in the sample set are clustered based on feature representations of the samples, and a purity of each cluster obtained by the clustering is determined based on the categories of the samples. For each cluster, different post-processing strategies are employed based on the purity of the cluster to determine the filtered samples. In this way, high-quality samples may be determined from the full, noisy sample set for a subsequent supervised learning task. Thus, the solution of the present disclosure may advantageously achieve automatic, efficient, and low-cost sample filtering.
Embodiments of the present disclosure will be described below with reference to the drawings.
The sample set 101 (also referred to as the initial sample set 101 in the description) may include multiple samples. As illustrated in FIG. 1, the sample set 101 may include samples 110-1 to 110-9 (collectively referred to as the samples 110).
The samples 110 may be of various types including, but not limited to, text, image, video, audio and the like. For example, the samples 110-1 to 110-9 may be one or more articles, one or more segments of text, or one or more statements, respectively. In some examples, the samples 110-1 to 110-9 may be one or more images or one or more segments of video, respectively. Embodiments of the present disclosure are not limited by the type of the samples.
Each sample 110 may have a pre-annotated (or labelled) category, for example, one of categories A, B, and C as illustrated in FIG. 1. In FIG. 1, the samples 110-1 to 110-4 are annotated with the category A, the samples 110-5 to 110-7 are annotated with the category B, and the samples 110-8 and 110-9 are annotated with the category C.
In the description, a category may be used to indicate that samples share a same or similar attribute in a certain aspect. For example, in a case where the sample 110 is an article, the category of the sample may be the type of the article. For example, the samples having the categories A, B, and C may be labeled as news articles, review articles, and popular science articles, respectively. In a case where the sample 110 is an image, the category of the sample may be the type of an object contained in the image. For example, the samples having the categories A, B, and C may be labeled as containing humans, animals, and plants, respectively. The category may indicate various same or similar attributes of samples as needed, and the scope of the present disclosure is not limited in this regard.
The samples 110 may be annotated with the categories A, B, and C in various manners. For example, the samples may be manually labeled. The samples 110 may also be obtained by data mining with predetermined categories. The categories of the samples 110 may also be generated by other models or systems, and the scope of the present disclosure is not limited in this regard.
In general, the pre-annotated categories may be inaccurate, resulting in noise being introduced into the sample set 101. That is, the sample set 101 may include noisy samples. For example, the sample 110-7, which is pre-annotated with the category B as illustrated in FIG. 1, may be such a noisy sample.
The computing device 102 may process the sample set 101 with the method disclosed herein to determine at least some of the samples 110 as filtered samples. For example, as illustrated in FIG. 1, the computing device 102 may determine the samples 110-1, 110-2, 110-5, 110-6, and 110-9 as the filtered samples, which may form a filtered sample set 103.
It should be understood that, although the samples 110-1, 110-2, 110-5, 110-6, and 110-9 are determined as the filtered samples as illustrated in FIG. 1, this is merely illustrative and is not intended to limit the scope of the present disclosure.
In order to more clearly understand the sample processing method according to embodiments of the present disclosure, embodiments of the present disclosure will be further described with reference to FIG. 2.
At block 210, the computing device 102 determines a feature representation of the samples 110 included in the sample set 101. Each of the samples 110 has a pre-annotated category. For example, the samples 110-1 to 110-4 all have the category A, the samples 110-5 to 110-7 all have the category B, and both the samples 110-8 and 110-9 have the category C. The feature representation may be used to indicate a subset of features associated with one or more attributes of the samples 110. The feature representation describes the samples 110 abstractly or mathematically, and may be a multidimensional vector or a matrix. The computing device 102 may determine the feature representation by mapping the initial samples 110 onto a feature space as feature vectors.
In some examples, a predefined feature space may be used. The computing device 102 may determine feature values of the samples 110 of the sample set 101 in the predefined feature space and determine a feature vector formed by the feature values as the feature representation. For example, in a case where the samples 110 are text and the feature space is formed by words included in a dictionary, the feature representation of the samples 110 may be a word vector. In a case where the feature space is formed by expressions included in the dictionary, the feature representation of the samples 110 may be an expression vector.
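As a minimal sketch of this predefined-feature-space approach (the tiny dictionary, the tokenization, and the example text below are hypothetical illustrations, not part of the disclosure), a text sample may be mapped onto a word-count vector as follows:

```python
import re

# Minimal sketch: map a text sample onto a predefined feature space (a small
# word dictionary) to obtain a word-count feature vector. The dictionary,
# the tokenizer and the example text are hypothetical illustrations.
def word_vector(text, dictionary):
    tokens = re.findall(r"\w+", text.lower())
    return [tokens.count(word) for word in dictionary]

dictionary = ["news", "review", "science", "report"]
sample = "Science report: a review of the science news"
print(word_vector(sample, dictionary))  # [1, 1, 2, 1]
```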
In some examples, a machine learning method may be used to learn the feature representation of the samples. The computing device 102 may use a feature extraction model to determine the feature representation. The feature extraction model may be based on any machine learning method. For example, the feature extraction model may include a neural network such as a convolutional neural network (CNN), a recurrent neural network, and the like.
The computing device 102 may input the sample set 101 into the feature extraction model, i.e., the neural network, to obtain neurons of a hidden layer associated with the samples 110 of the sample set 101. The computing device 102 may further determine the feature representation of the samples 110 included in the sample set 101 based on the neurons of the hidden layer. That is, the computing device 102 may determine a generated hidden layer vector as the feature representation of the samples 110. For example, in a case where the samples 110 are text, a convolutional neural network (CNN) classifier may be trained, and the hidden layer vector generated during the training of the model may be output as the feature vector of the sample.
A process of determining the feature representation of the samples 110 by the neural network is described below with reference to FIG. 3.
The computing device 102 may determine the feature representation of the samples 110 based on, for example, the neurons 321, 322, 323 and 324 of the hidden layer. The computing device 102 may determine values outputted from the neurons 321, 322, 323 and 324 of the hidden layer as values of the feature vector in respective dimensions, and determine the feature vector as the feature representation. As illustrated in FIG. 3, the values of the neurons 321, 322, 323 and 324 form the respective dimensions of a four-dimensional feature vector.
It should be understood that the neural network 300 illustrated in FIG. 3 is merely illustrative; the neural network used in practice may have any suitable number of layers and neurons, and embodiments of the present disclosure are not limited in this regard.
In these examples, the feature representation is determined based on hidden-layer data generated by a trainable neural network, rather than being a direct representation of the sample features. Such a feature representation is more closely related to the target task, which facilitates the subsequent clustering. Furthermore, it should be noted that, since the neural network is only used to acquire a hidden layer vector representation of the samples 110, the classification accuracy of the neural network model is not strictly required, and the full, noisy sample set can be used directly for training.
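As a minimal, hedged sketch of this hidden-layer approach (the tiny feed-forward network below is an assumed stand-in for the feature extraction model; the disclosure does not prescribe a specific architecture), the hidden-layer activations of a classifier may be reused as the feature representation:

```python
import numpy as np

# Minimal sketch (assumed toy architecture, not the disclosed model): a tiny
# feed-forward network whose hidden-layer activations serve as the feature
# representation of a sample, while the output layer is only used for the
# (noisy) classification training.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)   # input dim 8 -> hidden dim 4
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)   # hidden dim 4 -> 3 categories (A, B, C)

def hidden_features(x):
    """Return the hidden-layer vector used as the sample's feature representation."""
    return np.tanh(x @ W1 + b1)

def predict(x):
    """Full forward pass, only needed while training the classifier on noisy samples."""
    return hidden_features(x) @ W2 + b2

sample = rng.normal(size=8)        # a raw sample representation (hypothetical)
feature = hidden_features(sample)  # 4-dimensional feature vector (cf. neurons 321-324)
```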
Turning to FIG. 2, at block 220, the computing device 102 performs a clustering on the samples 110 of the sample set 101 based on the feature representation determined at block 210, to obtain clusters each including one or more of the samples 110.
A result of the clustering may typically include n clusters. Each cluster may include a different number of samples 110.
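As an illustrative sketch of block 220 (the choice of k-means and of the number of clusters is an assumption made for this example; the disclosure does not prescribe a particular clustering algorithm), the feature vectors may be clustered as follows:

```python
import numpy as np
from sklearn.cluster import KMeans  # illustrative choice; any clustering algorithm may be used

# Minimal sketch of block 220: cluster the samples by their feature vectors.
# The random feature vectors and the value n_clusters=3 are assumptions.
features = np.random.default_rng(0).normal(size=(9, 4))  # e.g. feature vectors of samples 110-1..110-9
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

clusters = {}
for sample_idx, cluster_id in enumerate(labels):
    clusters.setdefault(cluster_id, []).append(sample_idx)
# clusters maps each cluster id to the indices of its samples, analogous to clusters 401-403
```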
With reference to FIG. 4, the result of the clustering may include clusters 401, 402 and 403. At block 230, the computing device 102 determines a purity of a cluster based on the categories of the samples included in the cluster. The purity indicates a degree of disorder of the categories of the samples included in the cluster.
In some examples, a ratio of the number of samples included in a category having the largest number of samples to the number of samples included in the cluster may be used as the purity of the cluster. Taking the cluster 402 illustrated in FIG. 4 as an example, assume that the cluster 402 includes 100 samples, in which 30 samples have the category A, 60 samples have the category B, and 10 samples have the category C.
The computing device 102 may determine a category having the maximum number of samples based on the number of samples corresponding to each of the categories A, B, and C. For example, the computing device 102 may determine that the number of samples corresponding to the category B is maximal in the cluster 402, and the maximum number is 60. The computing device 102 may determine the purity of the cluster 402 based on the maximum number and the total number of samples included in the cluster 402. For example, in a case where the total number of samples included in the cluster 402 is 100, the purity of the cluster 402 may be determined to be 60/100=0.6. The purities of the other clusters (e.g., the cluster 401 and the cluster 403) may be determined in the same manner.
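A minimal sketch of this ratio-based purity, using the example numbers for the cluster 402 given above:

```python
from collections import Counter

# Ratio-based purity: the share of the most frequent pre-annotated category
# among the samples of a cluster.
def ratio_purity(categories):
    counts = Counter(categories)
    return max(counts.values()) / len(categories)

cluster_402 = ["A"] * 30 + ["B"] * 60 + ["C"] * 10
print(ratio_purity(cluster_402))  # 0.6, as in the example above
```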
In some examples, information entropy may be used to determine the purity of the cluster. Equation (1) may be used to calculate the information entropy H for each cluster 401, 402 and 403.
H = −Σ_{i=1}^{k} p_i log p_i   (1)
where p_i represents the ratio of the number of samples of the i-th category to the total number of samples included in the cluster, and k represents the total number of categories of the samples included in the cluster. For example, for the cluster 402 illustrated in FIG. 4, k = 3, and p_1, p_2, and p_3 are 0.3, 0.6, and 0.1 for the categories A, B, and C, respectively.
It should be noted that the higher the degree of disorder of the categories of the samples included in the cluster, the larger the information entropy H. Therefore, the reciprocal 1/H of the information entropy may be used as the purity of the cluster 402. The purities of the other clusters (e.g., the cluster 401 and the cluster 403) may be determined in the same manner.
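A minimal sketch of the entropy-based purity 1/H of Equation (1) (the handling of a single-category cluster, where H = 0, is an implementation assumption):

```python
import math
from collections import Counter

# Entropy-based purity: compute H from Equation (1) and use its reciprocal 1/H.
def entropy_purity(categories):
    total = len(categories)
    ratios = [count / total for count in Counter(categories).values()]
    h = -sum(p * math.log(p) for p in ratios)
    return float("inf") if h == 0 else 1.0 / h  # a single-category cluster is treated as maximally pure

cluster_402 = ["A"] * 30 + ["B"] * 60 + ["C"] * 10
print(entropy_purity(cluster_402))  # 1 / H with p = (0.3, 0.6, 0.1)
```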
At block 240, the computing device 102 determines filtered samples from the samples included in the cluster based on the purity determined at block 230. The computing device 102 may employ different post-processing strategies based on different purities to obtain high-quality samples from each of the clusters 401, 402 and 403. For example, different post-processing strategies may be adopted in three cases based on the purities. A first case is that the cluster includes samples of a single category (for example, only samples of the category A). A second case is that the cluster includes samples of multiple categories and has a high purity (e.g., higher than a threshold). A third case is that the cluster includes samples of multiple categories and has a low purity (e.g., lower than the threshold). The process of determining the filtered samples based on the purity is described in detail below with reference to FIG. 5.
The sample processing method 200 according to embodiments of the present disclosure is described above. In this way, the entire sample set may be filtered and screened to obtain high-quality annotated samples. Therefore, the solution of the present disclosure may filter and screen samples, in particular a full set of large-scale, noisy samples, thereby effectively removing noisy samples, improving the quality of the samples, and helping improve the effect of large-scale supervised machine learning tasks.
In some examples, the computing device 102 may perform the method 200, or a part of the method 200, illustrated in FIG. 2 iteratively, to further improve the quality of the filtered samples.
In examples in which the feature representation is determined based on the neural network described above, blocks 210-240 illustrated in FIG. 2 may be performed iteratively. For example, the computing device 102 may determine a subset of the sample set 101 (i.e., the filtered sample set 103) based at least on the filtered samples, and input the filtered sample set 103 into the feature extraction model to obtain an updated feature representation of the samples included in the filtered sample set 103.
The computing device 102 may perform a clustering on the filtered sample set 103 based on the updated feature representation, to update the filtered samples based on a new result of the clustering (i.e., the generated clusters and the purities of the generated clusters). That is, the computing device 102 may repeat the blocks 230 and 240 of the method 200 with respect to the filtered sample set 103.
In the examples described above in which the feature representation is determined based on the predefined feature space, the blocks 220-240 of FIG. 2 may be performed iteratively. In this case, the feature representation does not need to be updated, and the computing device 102 may perform the clustering directly on the filtered sample set 103 to update the filtered samples.
Compared with the clustering performed on the initial sample set 101, the clustering performed on the filtered sample set 103 may use a different clustering algorithm, different clustering parameters (e.g., a clustering distance), or a combination thereof. The computing device 102 may further filter the samples based on the clusters obtained in these manners, to update the filtered samples.
In this case, determining the high-quality samples leads to a better result of the clustering, and the better result of the clustering in turn facilitates obtaining higher-quality samples, until a termination condition is met. The termination condition may be designed according to a specific application scenario. For example, the termination condition may be that, in the result of the clustering, the ratio of the number of samples included in clusters having a low purity to the total number of samples included in the sample set 101 is less than a predetermined threshold. The termination condition may also be that the number of filtered samples is less than a predetermined number.
In this way, higher-quality samples may be obtained through iteration, thereby further improving the quality of the samples ultimately obtained and facilitating the effect of a subsequent supervised learning task.
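A minimal, hedged sketch of this iterative variant is given below; the three callables stand in for blocks 210, 220 and 230-240 and are hypothetical placeholders, and the termination thresholds are assumptions following the examples above:

```python
# Sketch of the iterative filtering loop. extract_features, cluster and
# filter_by_purity are hypothetical placeholders for blocks 210, 220 and
# 230-240; min_kept and max_rounds are assumed termination parameters.
def iterative_filtering(samples, extract_features, cluster, filter_by_purity,
                        min_kept=100, max_rounds=5):
    filtered = list(samples)
    for _ in range(max_rounds):
        features = extract_features(filtered)        # block 210 (may retrain the feature model)
        clusters = cluster(filtered, features)       # block 220
        new_filtered = filter_by_purity(clusters)    # blocks 230 and 240
        if len(new_filtered) < min_kept or len(new_filtered) == len(filtered):
            break                                    # a termination condition is met
        filtered = new_filtered
    return filtered
```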
As mentioned above with reference to block 240 of FIG. 2, the computing device 102 may employ different post-processing strategies based on the purity of the cluster to determine the filtered samples. This process is described in detail below with reference to the method 500 of FIG. 5.
Referring to FIG. 5, at block 510, the computing device 102 may determine whether the purity of the cluster is greater than a purity threshold. In a case where the computing device 102 determines at the block 510 that the purity of the cluster (e.g., the cluster 401 or the cluster 402) is greater than the purity threshold, the method 500 may proceed to the block 520.
At block 520, the computing device 102 may determine whether the categories of the samples included in the cluster 401 (or the cluster 402) are the same as each other. In a case where the computing device 102 determines at the block 520 that the categories of the samples included in the cluster are the same as each other, the method 500 may proceed to the block 530. For example, for the cluster 401, the computing device 102 may determine at the block 520 that the categories of the samples included in the cluster 401 are all the category A. At the block 530, the computing device 102 may determine all of the samples included in the cluster 401 as the filtered samples. For example, the samples 110-1, 110-2, etc. included in the cluster 401 may be determined as the filtered samples.
In a case where the computing device 102 determines at the block 520 that the categories of the samples included in the cluster are different from each other (i.e., the samples included in the cluster correspond to multiple categories), the method 500 may proceed to the block 540. For example, for the cluster 402, the computing device 102 may determine that the samples in the cluster 402 correspond to the categories A, B, and C, respectively. In this case, the cluster 402 includes samples of multiple categories and has a purity greater than the purity threshold, which means that samples of a certain category constitute the majority of the cluster 402.
At block 540, the computing device 102 may determine the number of samples corresponding to each category for the cluster 402. For example, as described above, the computing device 102 may determine that, in the cluster 402, the number of samples corresponding to the category A may be 30, the number of samples corresponding to the category B may be 60, and the number of samples corresponding to the category C may be 10. At block 550, the computing device 102 may determine a category having the largest number of samples based on the number of samples for each category. For example, the computing device 102 may determine that, in the cluster 402, the number of samples corresponding to the category B is maximal.
At block 560, the computing device 102 may determine samples of the determined category as the filtered samples. For example, the computing device 102 may determine samples of the category B in cluster 402 (such as the samples 110-5 and 110-6) as the filtered samples.
For samples of categories other than the category B (such as the samples 110-3, 110-8 and the like), different processing may be employed depending on the application scenario, task requirements, and the like. For example, in a case where the number of samples of these other categories is small relative to the total number of samples included in the sample set 101, the samples of these other categories may be discarded directly. If the subsequent supervised learning task has higher requirements on sample quality, the samples of the other categories may be output so that they can be manually annotated.
In some examples, automatic error correction may also be performed on the samples of the other categories in the cluster 402. For example, in a case where more than a certain proportion (such as 95%) of the samples 110 included in the cluster 402 are of the category B, the samples of the categories A and C included in the cluster 402 may be corrected to be samples of the category B. Further, the corrected samples may also be determined as the filtered samples.
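A minimal sketch of this high-purity branch (blocks 520 to 560 together with the optional correction; the 95% correction threshold follows the example above, and the representation of samples as (identifier, category) pairs is an assumption):

```python
from collections import Counter

# High-purity cluster: keep the samples of the dominant category; if the
# dominant category is overwhelming, relabel the remaining samples instead of
# discarding them. Samples are represented as (sample_id, category) pairs.
def filter_high_purity_cluster(samples, correction_threshold=0.95):
    counts = Counter(category for _, category in samples)
    dominant, dominant_count = counts.most_common(1)[0]
    kept = [(sid, cat) for sid, cat in samples if cat == dominant]
    if dominant_count / len(samples) >= correction_threshold:
        # automatic error correction: relabel minority samples as the dominant category
        kept += [(sid, dominant) for sid, cat in samples if cat != dominant]
    return kept
```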
Turning to the block 510 again, in a case where the computing device 102 determines at the block 510 that the purity of the cluster is not greater than the purity threshold, different processing may be employed based on the number of samples included in the cluster. That is, the method 500 may proceed to the block 570. For example, for the cluster 403, the computing device 102 may determine at the block 510 that the purity of the cluster 403 is lower than the purity threshold, which means that the categories of the samples included in the cluster 403 are relatively uniformly distributed.
At block 570, the computing device 102 may determine a ratio of the number of samples included in the cluster 403 to the total number of samples included in the sample set 101. At block 580, the computing device 102 may determine whether the ratio exceeds a ratio threshold, which is also referred to as an upper ratio threshold in the present disclosure. In a case where it is determined that the ratio exceeds the upper ratio threshold, the number of samples included in the cluster 403 is large, and the method 500 may proceed to the block 590.
At block 590, the computing device 102 may perform the clustering on the samples (such as the samples 110-4, 110-7, 110-9, etc.) included in the cluster 403 to obtain a result of the clustering. The result of the clustering may be one or more subsets (also referred to as sub-clusters) including the samples 110-4, 110-7, 110-9, etc. included in the cluster 403. At block 595, the computing device 102 may determine at least part of the samples included in the cluster 403 as the filtered samples based on the result of the clustering. For example, the computing device 102 may apply the blocks 230 and 240 of FIG. 2 to each of the sub-clusters, i.e., determine the purity of each sub-cluster and determine the filtered samples from the sub-cluster based on the purity.
In a case where the computing device 102 determines at the block 580 that the ratio obtained at the block 570 does not exceed the upper ratio threshold, the computing device 102 may employ different processing depending on the number of samples included in the cluster 403. When the number of samples included in the cluster 403 is small, the samples included in the cluster 403 may be discarded. For example, in a case where the ratio determined at the block 570 for the cluster 403 is less than another ratio threshold (also referred to as a lower ratio threshold, for ease of discussion), all samples included in the cluster 403 may be discarded.
In a case where the number of samples included in the cluster 403 is moderate, for example, the ratio determined at the block 570 for the cluster 403 is greater than the lower ratio threshold, different processing may be employed depending on the particular application scenario. For example, in a case where the requirement on sample accuracy is high and the total number of samples is not large enough, the samples included in the cluster 403 may be output and manually annotated. It is also possible to sample the samples included in the cluster 403 and manually determine the subsequent processing. It is also possible to discard all the samples included in the cluster 403, or to reserve all of them for optimization in a next iteration.
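A minimal sketch of this low-purity branch (blocks 570 to 595); the upper and lower ratio thresholds are assumed values, and recluster_and_filter is a hypothetical helper that re-applies the clustering and purity-based filtering (blocks 220-240) to the samples of the cluster:

```python
# Low-purity cluster: re-cluster a large cluster, discard a very small cluster,
# and reserve a moderately sized cluster for the next iteration. The thresholds
# and the recluster_and_filter helper are illustrative assumptions.
def handle_low_purity_cluster(cluster_samples, total_sample_count, recluster_and_filter,
                              upper_ratio=0.2, lower_ratio=0.02):
    ratio = len(cluster_samples) / total_sample_count   # block 570
    if ratio > upper_ratio:                              # block 580: the cluster is large
        return recluster_and_filter(cluster_samples)     # blocks 590 and 595
    if ratio < lower_ratio:
        return []                                        # very small cluster: discard all samples
    return cluster_samples                               # moderate size: reserve for the next iteration
```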
In some examples, the sample filtering module 640 may include a first sample determining module, configured to, in response to determining that the purity is higher than a purity threshold, determine the filtered samples based on the categories of the samples included in the cluster.
In some examples, the first sample determining module may include a second sample determining module, configured to, in response to determining that the categories of the samples included in the cluster are same to each other, determine the samples included in the cluster as the filtered samples.
In some examples, the first sample determining module may include a first number determining module, configured to, in response to determining that the categories of the samples included in the cluster are different, determine the number of samples for each category. In addition, the first sample determining module may include a maximal category determining module, configured to determine a target category having a maximal number of samples for the cluster based on the number of samples for each category. Furthermore, the first sample determining module may include a third sample determining module, configured to determine samples of the target category as the filtered samples.
In some examples, the sample filtering module 640 may include a sample ratio determining module, configured to, in response to determining that the purity is lower than a purity threshold, determine a ratio of the number of samples included in the cluster to the number of samples included in the sample set. In addition, the sample filtering module 640 may include a second clustering module, configured to, in response to determining that the ratio exceeds a ratio threshold, perform the clustering on the samples included in the cluster to obtain a result of the clustering. Furthermore, the sample filtering module 640 may include a fourth sample determining module, configured to determine at least part of the samples included in the cluster as the filtered samples at least based on the result of the clustering.
In some examples, the first representation determining module 610 may include a sample applying module, configured to input the sample set to a feature extraction model, to obtain neurons of a hidden layer related to the sample set. In addition, the first representation determining module 610 may include a second representation determining module, configured to determine the feature representation of the samples included in the sample set based on the neurons of the hidden layer.
In some examples, the device 600 may further include a first subset determining module, configured to determine a subset of the sample set at least based on the filtered samples, the subset including filtered samples obtained from at least one cluster associated with the sample set. In addition, the device 600 may further include a first subset applying module, configured to input the subset into the feature extraction model to obtain an updated feature representation of samples included in the subset. Furthermore, the device 600 may further include a first sample updating module, configured to perform the clustering on the subset based on the updated feature representation, to update the filtered samples based on a result of the clustering.
In some examples, the first representation determining module 610 may include a third representation determining module, configured to determine feature values of the samples included in the sample set in a predefined feature space as the feature representation.
In some examples, the device 600 may further include a second subset determining module, configured to determine a subset of the sample set at least based on the filtered samples, the subset including filtered samples obtained from at least one cluster associated with the sample set. In addition, the device 600 may further include a second sample updating module, configured to perform the clustering on the subset based on the feature representation to update the filtered samples based on a result of the clustering.
In some examples, the first purity determining module 630 may further include a second number determining module, configured to determine the number of samples of each category for the cluster. In addition, the first purity determining module 630 may further include a maximal number determining module, configured to determine a maximal number of samples based on the number of samples of each category. Furthermore, the first purity determining module 630 may further include a second purity determining module, configured to determine the purity based on the maximal number of samples and a total number of samples included in the cluster.
Components of the device 700 are connected to the I/O interface 705, including an input unit 706, such as a keyboard, a mouse, etc.; an output unit 707, such as various types of displays, loudspeakers, etc.; a storage unit 708, such as a magnetic disk, a compact disk, etc.; and a communication unit 709, such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices through a computer network, such as Internet, and/or various telecommunication networks.
The various procedures and processing described above, such as any of the method 200 and the method 500, may be performed by the processing unit 701. For example, in some embodiments, the method 200 can be implemented as a computer software program that is tangibly embodied on a machine readable medium, such as the storage unit 708. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. One or more blocks of the method 200 described above may be performed when the computer program is loaded into the RAM 703 and executed by the CPU 701. In some examples, the CPU 701 can be configured to perform any of the method 200 and the method 500 by any other suitable means (e.g., by means of firmware).
Functions described above in the present disclosure may be performed at least in part by one or more hardware logic components. For example, and without limitations, exemplary types of hardware logic components that may be used include: field programmable gate array (FPGA), application specific integrated circuit (ASIC), application specific standard product (ASSP), system on chip (SOC), complex programmable logic device (CPLD).
Program code for implementing the methods of the present disclosure can be written in any combination of one or more programming languages. The program code may be provided to a general purpose computer, a special purpose computer, or a processor or controller of other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may be executed entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
In the present disclosure, a machine-readable medium can be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of machine-readable storage media may include electrical connections based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations should be performed to achieve the desired results. Multitasking and parallel processing may be advantageous in certain circumstances. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can be implemented in a plurality of implementations, either individually or in any suitable sub-combination.
Although the present disclosure has been described with reference to several specific embodiments, it should be understood, the present disclosure is not limited to the specific embodiments disclosed. The present disclosure is intended to cover various modifications and equivalent arrangements within the spirit and scope of the appended claims.