This disclosure relates to the field of input control for electronic equipment, and in particular to information input on electronic equipment, including a predictive text input method and device.
In recent years, mobile communication terminals such as mobile phones and tablets have become widely available, and the input methods on these terminals are extremely important to users' daily use. At present, most input methods support prediction during typing. A typical prediction capability works as follows: if a user wants to type the word “special”, the user types the first four letters, s-p-e-c, or even more letters one by one, and the input method then predicts the word the user wants to type according to the entered letters. Input methods of this kind can only predict the word currently being typed. Moreover, to achieve acceptable prediction accuracy, a user normally needs to type half or more of the letters before a useful prediction appears, which inevitably reduces input efficiency. Such methods can no longer satisfy users' need for speedy input.
Furthermore, higher prediction accuracy normally requires a larger database, and currently popular prediction methods are often combined with a cloud database. However, when the database is set in the cloud, every prediction made through it is subject to poor connections due to network restrictions, which not only wastes resources but also fails to provide a fluent input experience.
To sum up, it is necessary to provide an input method with higher prediction efficiency and a more fluent prediction input experience.
This disclosure aims to provide efficient prediction techniques so as to report back to users prediction results that better match their expectations, with a more fluent input experience.
In one aspect, this disclosure provides an efficient input prediction method, including: detecting an input by a user; acquiring a prediction basis according to the historical text which the user has input and the current input position; and searching a database according to the prediction basis to obtain a prediction result. The prediction basis is an input text of a preset word length before the current input position. The prediction result includes at least two stages of subsequent prediction candidate words based on the prediction basis.
In another aspect, this disclosure provides an efficient input prediction device, including: a detecting module, adapted to detect and record a current input position and the text which the user is typing; a predicting module, adapted to form a prediction basis according to the input text and the current input position, search a database according to the prediction basis and obtain a prediction result, wherein the prediction basis is an input text of a preset word length before the current input position and each prediction result includes at least two stages of subsequent prediction candidate words based on the prediction basis; and a database, adapted to store words.
By setting a word length, this disclosure selects one or several entered words as a prediction basis and acquires at least two stages of subsequent prediction candidate words based on the prediction basis; prediction input results may therefore be provided more quickly and with higher prediction efficiency.
In a third aspect, this disclosure provides an efficient input prediction method, including: detecting an input by a user; acquiring a prediction basis according to the input text history and the current input position, the prediction basis being an input text of a preset word length before the current input position; searching a database according to the prediction basis to obtain a prediction result, the prediction result including at least two stages of subsequent prediction candidate words based on the prediction basis; storing the prediction result locally; detecting the user's further input; screening the locally saved prediction results according to the user's typing; and reporting back to the user all or part of the prediction results.
In a fourth aspect, this disclosure provides an efficient input prediction device, including: a detecting module, adapted to detect the text which the user is typing and the current input position; a prediction module, adapted to form a prediction basis according to the input text and the current input position, search a database according to the prediction basis and obtain a prediction result, wherein the prediction basis is an input text of a preset word length before the current input position and each prediction result includes at least two stages of subsequent prediction candidate words based on the prediction basis; a database, adapted to store words; a screening module, adapted to record the user's further input and screen the prediction results according to the input recorded by the detecting module; and a feedback module, adapted to report the screened results back to the user.
By predicting at least two stages of subsequent candidate words based on the prediction basis and saving the prediction results, including the two stages of prediction candidate words, locally, the delay caused by network transmission is effectively avoided even when a cloud database is employed, and the user experience is improved.
Other features, purposes and advantages of this disclosure will become more apparent from the following detailed description of non-restrictive examples given with reference to the attached Figures:
The following describes specific embodiments of this disclosure, an efficient input prediction method and device, with reference to the attached Figures.
With reference to
Through the mobile communication terminal 110, the prediction device 120 may record the text input by the input device 101 and take a preset word length of previously entered text as a prediction basis. According to one embodiment, the prediction device 120 may acquire the current input position, for example by detecting the current cursor position or detecting the current characters corresponding to the cursor, and, based on the current input position, acquire a preset word length of text entered before it, i.e. the prediction basis. The preset word length may be adjusted according to the computation capability of the prediction device 120 and the storage capacity of the mobile communication terminal 110. For example, the word length may be set to a natural number larger than 2.
In one embodiment, the preset word length counts fully or partly input words. For example, if the preset word length is 5, the prediction basis shall be five words fully or partly input before the current input position. To be specific, when a user has already input “Fast food typically tends to skew” and the preset word length equals 5, the prediction basis is “food typically tends to skew”; when the user has already input “Fast food typically tends to skew” and is inputting the first two letters “mo” of “more”, the prediction basis is “typically tends to skew mo”. In another embodiment, a begin symbol may also occupy one word length: when the preset word length is 3 and a user inputs “Fast food”, the prediction basis shall be “[begin symbol]+fast+food”.
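The basis extraction described above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the function name is invented, and whitespace tokenization is an assumption suitable for space-delimited languages.

```python
# Hypothetical sketch of extracting a prediction basis: the last
# `word_length` fully or partly entered words before the cursor.
def prediction_basis(text_before_cursor: str, word_length: int) -> str:
    """Return the last `word_length` whitespace-separated tokens."""
    tokens = text_before_cursor.split()
    return " ".join(tokens[-word_length:])

print(prediction_basis("Fast food typically tends to skew", 5))
# food typically tends to skew
print(prediction_basis("Fast food typically tends to skew mo", 5))
# typically tends to skew mo
```

A partly typed word such as “mo” is simply treated as the last token, matching the second example in the description.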
Then, based on the prediction basis, the prediction device 120 may query the database 130 and obtain a prediction result. The prediction result based on the prediction basis includes at least two stages of subsequent prediction candidate words in a context relation with the prediction basis.
According to one embodiment, the prediction device 120 may acquire prediction results by predicting progressively. First, the prediction device 120 obtains the first stage of prediction candidate words.
In an embodiment, the prediction device 120 conducts a further segmentation of the prediction basis and queries the database 130 based on the segmentation result. Take a prediction basis with a word length of three as an example. The prediction device 120 first detects the current cursor position or the characters corresponding to it, and obtains a text sequence of at least three word lengths before the current position. For example, when a user has input the text “I guess you are” and the preset word length is 3, the prediction basis is “guess you are”.
Then, the prediction device 120 conducts a segmentation of the prediction basis and acquires preorder words for querying the database 130, which includes word libraries of multiple stages, such as a first stage word library, a second stage word library, a third stage word library or even higher stage word libraries. The stage of a word library indicates the number of words stored in each storage cell of that library. For instance, in the first stage word library each storage cell includes only one word, while in the second stage word library each storage cell includes two words. The prediction device 120 cuts the prediction basis and thus obtains the preorder word corresponding to each stage word library. There is a relationship between the word number N of a preorder word and the word library stage M: N=M−1. For example, in the segmentation of “guess you are”, the preorder word for the second stage word library is “are”, the preorder word for the third stage word library is “you are”, and the preorder word for the fourth stage word library is “guess you are”. A query result corresponding to a preorder word may be acquired by searching the storage cells of the corresponding stage word library.
According to one embodiment, the first stage word library stores every single word Wi1 which is possibly input by a user and the probability of occurrence P(Wi1) of that word, where ΣiP(Wi1)=1; for example, the word “you” and its probability of occurrence, 0.643%. The second stage word library stores every two words which likely occur together, such as words Wi,12 and Wi,22 (i=1, . . . , N), the ordering of these two words, and the probability of their co-occurrence in that ordering, such as P(Wi,12*Wi,22) or P(Wi,22*Wi,12). The third stage word library stores every three words which likely occur together, such as words Wi,13, Wi,23 and Wi,33 (i=1, . . . , N), the ordering of these three words and the probability of their co-occurrence in that ordering, such as P(Wi,13*Wi,23*Wi,33), P(Wi,13*Wi,33*Wi,23), P(Wi,23*Wi,13*Wi,33), P(Wi,23*Wi,33*Wi,13), P(Wi,33*Wi,13*Wi,23) or P(Wi,33*Wi,23*Wi,13). After acquiring the preorder word corresponding to each stage word library, the prediction device 120 searches the corresponding stage word library according to the preorder word and the ordering, and obtains a query result. The combination of the query result and the preorder word makes up a storage cell in the corresponding stage word library. For example, according to the preorder word “are” in the second stage word library, the prediction device 120 obtains a query result, i.e. a word that might be input after the word “are”, such as “a”, “beaches”, “cold”, “dogs”, “young” and so on; likewise, the prediction device 120 obtains a query result by searching the third stage word library according to the preorder word “you are”, such as “a”, “beautiful”, “correct”, “dreaming”, “young” and so on.
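The N=M−1 relation between preorder words and stage word libraries can be sketched as below. The stage libraries and their contents are illustrative stand-ins for database 130; none of the names or data structures are prescribed by the disclosure.

```python
# Toy stage word libraries: a stage-M library maps an (M-1)-word preorder
# string to the candidate next words (illustrative data only).
stage_libraries = {
    2: {"are": ["a", "beaches", "cold", "dogs", "young"]},
    3: {"you are": ["a", "beautiful", "correct", "dreaming", "young"]},
}

def preorder_word(basis: str, stage: int) -> str:
    """The preorder word for a stage-M library is the last M-1 words
    of the prediction basis (N = M - 1)."""
    return " ".join(basis.split()[-(stage - 1):])

basis = "guess you are"
for stage, library in stage_libraries.items():
    key = preorder_word(basis, stage)
    print(stage, repr(key), library.get(key, []))
```

With the basis “guess you are”, the stage-2 lookup key is “are” and the stage-3 key is “you are”, reproducing the example above.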
Then, the prediction device 120 may further optimize the query results obtained from every stage word library. To be specific, the prediction device 120 may sort the query results by probability in descending order, or screen all query results from every stage word library against a probability threshold; thus, while preserving a given probability coverage, the amount of calculation may be reduced, power consumption saved and reaction speed improved.
According to a specific embodiment, the database 130 only stores all words Wi1 of the first stage word library and the probability of occurrence P(Wi1) of every word, and further forms a second stage, third stage or higher stage word library on the basis of the words in the first stage word library and the probabilities of occurrence of those single words in the storage cells of the corresponding stage word library. Take the ith stage word library as an example. In the ith stage word library, every storage cell stores i words, and every one of those i words can be any word in the first stage word library. Theoretically, therefore, when the first stage word library includes N words, the number of storage cells in the ith stage word library is Ni. As i increases, the number of added storage cells inevitably becomes large. In addition, the probability of occurrence of every word in the first stage word library is independent, and when some words appear together, the ordering of each word relative to the related words affects the probability of occurrence. Considering the above factors, in this embodiment different stage word libraries shall conform to certain conditions. Take the second stage word library as an example. When i=1, . . . , M1, the corresponding storage cells conform to the following condition: they have the same first word, i.e. the first words Wi,12 in these storage cells satisfy Wi,12=Wj,12 (j=2, . . . , M1), while the second words Wi,22 may be different. Similarly, when i=M1+1, . . . , M2, the first words in the corresponding storage cells are the same, that is WM1+1,12=Wj,12 (j=M1+2, . . . , M2), while the second words Wi,22 are different. Thus, in the second stage word library, for the storage cells with the same first word, the stored probability is that of the second word occurring after the first word, i.e. P(Wi,22|Wi,12).
In one embodiment, the words Wi,22 are sorted according to the corresponding probability P(Wi,22|Wi,12). In another embodiment, a probability threshold Pt is set, all words Wi,22 with the same first word Wi,12 are screened according to the threshold, and only the combinations of the first word Wi,12 and part of the second words are stored. Similarly, every first word Wi,12 of each storage cell in the second stage word library is traversed according to its storage order in the first stage word library and the corresponding probability P(Wi,12) there, and the second stage word library is thereby formed.
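A conditional probability such as P(Wi,22|Wi,12) could be derived from co-occurrence counts, as in the following sketch. The counts here are invented toy values; how database 130 actually obtains its probabilities is not specified by the disclosure.

```python
from collections import Counter, defaultdict

# Illustrative counts; in practice these would come from a corpus.
bigram_counts = Counter({("you", "are"): 30, ("you", "can"): 10})
unigram_counts = Counter({"you": 40})

# P(w2 | w1) = count(w1, w2) / count(w1)
cond_prob = defaultdict(dict)
for (w1, w2), c in bigram_counts.items():
    cond_prob[w1][w2] = c / unigram_counts[w1]

print(cond_prob["you"])  # {'are': 0.75, 'can': 0.25}
```

Sorting the second words by these conditional probabilities, or dropping those below a threshold Pt, yields the screened storage cells described above.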
In this embodiment, see
In another embodiment, numbers, letters or other forms of codes may be employed to replace the stored word Wi,jT or to simplify the storage of the probability of occurrence, so that the amount of calculation may be further reduced, power consumption saved and reaction speed improved. For example, following the word storage order of the first stage word library, a word whose probability of occurrence is larger than a probability PT is recorded as 1, and a word whose probability of occurrence is smaller than PT is recorded as 0; the storage of words and the corresponding probabilities is thereby simplified to the storage of 0s and 1s, and the amount of calculation may be largely reduced.
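The 0/1 threshold simplification can be sketched in a few lines; the probabilities and the threshold value below are illustrative assumptions.

```python
# Words in first stage library storage order, with illustrative probabilities.
probabilities = [0.00643, 0.00021, 0.0051, 0.00009]
P_T = 0.001  # hypothetical threshold

# Probability above the threshold -> 1, otherwise -> 0.
bits = [1 if p > P_T else 0 for p in probabilities]
print(bits)  # [1, 0, 1, 0]
```

Because the bit position encodes the word's storage order, only the bit vector needs to be kept, not the words or probabilities themselves.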
Then, the prediction device 120 may acquire the query result of every preorder word in every stage word library and set weights.
In one embodiment, a weight is set according to the stage of every query result. For example, the query results a1, a2 . . . an from the second stage word library are assigned a weight T1; the query results b1, b2 . . . bn from the third stage word library are assigned a weight T2; and the query results c1, c2 . . . cn from the fourth stage word library are assigned a weight T3. In a specific embodiment, query results from higher stage word libraries may be assigned a higher priority. For example, the weight Ti of a query result from the ith stage word library and the weight Tj of a query result from the jth stage word library satisfy Ti>>Tj when i>j.
In another embodiment, different weights may be assigned to the individual query results from a stage word library; based on the assigned weights, a weighted calculation may be conducted, and the query result of every stage word library thereby acquired. For example, the query results a1, a2 . . . ap in the second stage word library may be assigned the weights t1, t2 . . . tp, wherein said weights are associated with the historical input, the input context and the priority of the word.
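One way the stage weighting with Ti>>Tj could combine the per-stage query results is sketched below. The scoring rule (stage weight minus within-stage rank, keeping a word's best score) and all data are illustrative assumptions, not the disclosure's formula.

```python
# Illustrative query results per stage, and stage weights satisfying
# T_i >> T_j for i > j.
results_by_stage = {
    2: ["a", "beaches", "cold"],
    3: ["a", "beautiful", "correct"],
    4: ["right"],
}
stage_weight = {2: 1.0, 3: 10.0, 4: 100.0}

scores: dict[str, float] = {}
for stage, words in results_by_stage.items():
    for rank, word in enumerate(words):
        # A word keeps its best score; earlier rank scores slightly higher.
        scores[word] = max(scores.get(word, float("-inf")),
                           stage_weight[stage] - rank)

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['right', 'a', 'beautiful', 'correct', 'beaches', 'cold']
```

Because the stage weights dominate the rank adjustment, a result from a higher stage library always outranks one found only in a lower stage library.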
When the prediction device 120 has acquired the first stage candidate words, it may further form a new prediction basis from the original prediction basis and each first stage candidate word. Based on the new prediction basis, the prediction device 120 may search the database 130 and obtain a new result, namely the second stage candidate words. For example, refer to
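The progressive two stage prediction can be sketched with a toy lookup. The `query` helper and the next-word table are invented stand-ins for database 130; only the chaining of the two stages reflects the description above.

```python
# Toy next-word table standing in for database 130 (illustrative data).
bigram = {
    "time": ["we", "when", "you"],
    "we": ["spent", "worked", "shared"],
    "when": ["I", "she"],
    "you": ["had"],
}

def query(basis: str) -> list[str]:
    """Candidates following the last word of the basis (toy lookup)."""
    return bigram.get(basis.split()[-1], [])

basis = "forget the time"
results = []
for first in query(basis):                      # first stage candidates
    for second in query(basis + " " + first):   # second stage candidates
        results.append(first + " " + second)

print(results)
# ['we spent', 'we worked', 'we shared', 'when I', 'when she', 'you had']
```

Each first stage candidate extends the prediction basis, and the extended basis is queried again, so every prediction result carries two stages of candidate words.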
In one embodiment, according to the independent order of the candidate words from the second stage prediction, the order of the prediction results may further be acquired and reported to the user. To be specific, the prediction results may be sorted according to the historical input, the current context and the priority of every second stage candidate word. For example, see
Another embodiment further comprises: after acquiring the second stage candidate words, referring to the sorting of the current first stage candidate words and synthetically weighting the candidate words so as to acquire the order of prediction results including both the first stage and the second stage prediction candidate words. For example, see
According to another embodiment, the prediction device 120 may also acquire the prediction results with a multi-level prediction. For example, after acquiring the prediction basis, the prediction device 120 may conduct a segmentation of the prediction basis and acquire preorder words to be searched in the database 130, and then search for the preorder words in every stage word library of the database 130. There is a matching relation between the stage M′ of a word library and the word number N′ of a preorder word: N′=M′−x, wherein x is the number of candidate words. The query may then be conducted in every stage word library in a similar way as described above to obtain the prediction results.
When the prediction results include two or more stages of candidate words, prediction results based on the same prediction basis may be composed of the same words, such as A+B, but in different orders: for example, the prediction result T1 is A+B while the prediction result T2 is B+A. In one embodiment, prediction results with the same words in different orders are regarded as different prediction results and sorted together with the other prediction results. In another embodiment, the prediction results are first examined according to grammar and the user's historical input; when switching the order of the words does not influence the overall meaning, the prediction results comprising the same words and having the same or almost the same meaning despite the changed word order are merged, and one of them is picked, randomly or according to the historical input or the priority, and fed back to the user, so that the prediction accuracy may be improved within a limited feedback area. For example, when two acquired prediction results consist of the same words in different orders and, from the perspective of grammar, the meaning is not largely changed by the change of order, the two prediction results may be merged into one and either of them fed back to the user. According to another embodiment, the prediction device 120 may directly send the acquired prediction basis to the database 130 and match it with the data recorded there, so as to select a matched result as the corresponding prediction result. For example, a prediction basis includes a set word length of words, i.e. 2 or 3 words; the prediction device 120 separates the prediction basis into a combination of several single words, extracts each word in its order in the prediction basis and retrieves them from the database 130 one by one. For example, see
The above search processes in the database 130 or in every stage word library of the database 130 may further include a grammar and semantic analysis based on the prediction basis, and may further include combining the analysis results with the query results from the database 130, or screening the query results according to the analysis results, so as to improve prediction accuracy. According to one embodiment, see
When acquiring the prediction results, the prediction device 120 may send all prediction results together with their corresponding orders to the mobile communication terminal 110 and save them there.
According to one embodiment, see
According to another embodiment, the prediction device 120 continues to detect the user's input from the mobile communication terminal 110 and predicts the incoming action. The prediction device 120 may choose not to display all acquired results, or choose to report the first stage candidate words to the user, as referred to
When the prediction device 120 detects a further input, it records the current input, obtains the current characters, and then updates the acquired prediction results based on the current input text, so as to raise the priority of part of the prediction results, or to screen the acquired predictions and store or feed back only the predictions meeting the screening demands. The prediction results that meet the screening demands or receive a higher priority include those which start with one or more letters that are the same as those input by the user. For example, see
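The prefix screening of locally stored prediction results can be sketched as follows; the result list and the helper name are illustrative.

```python
# Locally stored prediction results (illustrative).
predictions = ["we spent", "we worked", "we shared",
               "when I", "when she", "you had"]

def screen(predictions: list[str], typed: str) -> list[str]:
    """Keep predictions whose first stage word starts with the typed letters."""
    return [p for p in predictions if p.split()[0].startswith(typed)]

print(screen(predictions, "w"))
# ['we spent', 'we worked', 'we shared', 'when I', 'when she']
print(screen(predictions, "when"))
# ['when I', 'when she']
```

Because the screening runs against results already saved on the terminal, no further network round trip to the database is needed while the user types.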
In another embodiment, according to the current input and the original prediction basis, the prediction device 120 may form a new prediction basis and query the database 130 with it to obtain a corresponding prediction result.
When the prediction device 120 detects that the user has selected a candidate word in the candidate bar or confirmed an input word, the prediction device 120 acquires the selected or confirmed word and searches the first stage prediction candidate words according to the acquired word. When there is a prediction result whose first stage candidate word is the same as the acquired word, the prediction device 120 presents the second stage candidate word of that prediction result to the user through the communication terminal 110. For example, after the prediction device 120 acquires the prediction results “we spent”, “we worked”, “we shared”, “when I”, “when she”, “you had”, the communication terminal 110 further detects the user's input. When it is detected that the user selects “we” or confirms the input of “we”, see
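The exact-match step on a confirmed word can be sketched as below; again the prediction list and helper name are illustrative, and only the first-word match plus second-stage extraction follows the description above.

```python
# Locally stored two stage prediction results (illustrative).
predictions = ["we spent", "we worked", "we shared",
               "when I", "when she", "you had"]

def second_stage(predictions: list[str], confirmed: str) -> list[str]:
    """Second stage candidates of predictions whose first word equals
    the word the user selected or confirmed."""
    return [p.split()[1] for p in predictions if p.split()[0] == confirmed]

print(second_stage(predictions, "we"))  # ['spent', 'worked', 'shared']
```

Once “we” is confirmed, the stored second stage candidates “spent”, “worked”, “shared” can be shown immediately without a new database query.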
In another embodiment, the prediction device 120 may continue to detect the user's operation. Every time the user finishes the input of a word, the prediction device 120 may be triggered to conduct a new search. To be specific, the prediction device 120 may form a new prediction basis according to the current input and the original prediction basis, query the database 130 and obtain a prediction result based on the updated prediction basis. For example, according to the prediction basis “forget the time”, the prediction device 120 acquires the prediction results “we spent”, “we worked”, “we shared”, “when I”, “when she”, “you had” and so on. When the user selects “we” or confirms the input of “we”, see
In another embodiment, the disclosure also includes displaying a set number of prediction results to the user and presenting the changes of the prediction results in real time while the user is inputting, selecting or confirming. For example, the characters or words in the prediction results that are the same as those input, selected or confirmed by the user may be highlighted, so as to provide more direct feedback. For instance, the prediction device 120 displays the prediction results “we spent”, “we worked”, “we shared”, “when I”, “when she”, “you had” to the user and then continues to detect the user's input. When the following input is detected as “w”, the prediction device 120 may screen the prediction results or update their priorities according to the detected character, update the display according to the screened or updated results, and highlight the current input character “w”. The prediction device 120 continues to detect the user's input on the keyboard. When the word “when” is further detected, the prediction device 120 may update the display again according to the further input; for instance, the display is updated to “when I”, “when she”, with “when” highlighted, so that a better user experience may be provided.
According to an aspect of the disclosure, based on a prediction basis, the prediction device 120 may acquire a prediction result from a cloud database or a local database 130 and save the prediction result in the local mobile terminal 110. With multiple prediction stages, i.e. prediction results including at least two stages, and with the prediction results stored in the local terminal 110, once the prediction device 120 detects that the current input is the same as part or the whole of a first stage candidate word, it can quickly acquire the associated second stage prediction candidate word from the locally stored prediction results and present it to the user. On one hand, this largely speeds up the prediction; on the other hand, it reduces or even avoids the delay caused by network transmission and provides a better user experience.
In addition, when a cloud database is used, since the prediction may rely on the cloud terminal, updating the cloud database regularly can ensure the accuracy of prediction and error correction while avoiding overly frequent updates of the local database.
See
To be specific, in step S110, when the user continues to input on a keyboard, detecting an input may include detecting the input text, for instance obtaining the historical input by analyzing input data including text, voice and so on. Step S110 may further include detecting a current input position, for instance obtaining the current input position by detecting cursor coordinates, the cursor position, the number of the character corresponding to the cursor and other data. Step S110 may further include obtaining the prediction basis according to the current input position, wherein said prediction basis may be a preset word length of input text before the current input position.
According to one embodiment of the disclosure, the database may further include word libraries of several stages. Accordingly, step S120 may further include dividing the prediction basis to obtain at least one preorder word for the query, wherein each preorder word corresponds to a stage word library in the database, and the sum of the word number of a preorder word and that of the prediction candidate words obtained according to it equals the stage number of the word library, which is also the number of words stored in the minimum storage cell of that stage word library.
According to one embodiment of the disclosure, the prediction result may include further stages of candidate words. Here, step S120 may further include obtaining the prediction candidate words stage by stage based on said prediction basis. Take a prediction result including two stages of candidate words as an example: step S120 may include obtaining the first stage candidate words based on the prediction basis, and obtaining the second stage candidate words based on the first stage candidate words and the prediction basis.
Step S120 may further include analyzing the prediction basis on every retrieval and screening the prediction results based on the analysis result. For example, the analysis may be a single-aspect or multi-aspect analysis of semantics, grammar, context and so on.
According
According to one embodiment of this disclosure, after the prediction results are acquired from the cloud database, the method may further include storing said prediction results in the local database. According to another embodiment, the data in the cloud database may be downloaded locally, so that a prediction result may be obtained through similar steps from the local database.
In step S130, continuing to detect an input may further include: when part of a word is detected as further input, screening the prediction results based on the further input part, so that the first stage candidate words of the screened prediction results include the further input part. For example, when a user further inputs “win”, the prediction results whose first stage candidate words begin with or include “win” may be taken as the screened prediction results. When the selection of a word or the input of a whole word is detected, the first stage candidate words of the prediction results are matched with the selected or input word, and the matched prediction results are taken as the screened prediction results.
In step S130, feeding the screened prediction results back to the user may further include feeding back all screened prediction results. When presenting all the prediction results to the user, no distinction may be made for the entered or selected words; or those words may be highlighted with different colors, capitalization, fonts, bold or italic types and other marking means; or only the candidate words other than the first stage candidate words may be fed back.
In another embodiment, the above efficient predictive text input method may also present the prediction results via multiple media. For example, it may display all acquired results; or mark the prediction candidate words of a prediction result in the candidate word list via tagging; or display candidate words in an area of the screen other than the candidate word list; or report one or more words of one or more obtained prediction results to the user via a loudspeaker or another medium; or feed back the prediction results via other multi-media means.
See
The detecting module 200 further includes a detecting cell 210, adapted to detect the current input position, and a recording cell 220, adapted to record the input.
See
See
See
In one embodiment, according to the prediction basis, the prediction module 300 may conduct a semantic and grammar analysis of the prediction basis and obtain an analysis result, wherein said analysis may include analyzing the prediction basis using semantic rules and grammatical rules. The prediction module 300 may further screen the prediction results according to the analysis result.
See
The screening module 500, which determines the user's further input based on the results from the detecting module 200, may operate as follows: when part of a word is detected as input, it screens the prediction results based on the further input part, so that the first stage candidate words of the screened prediction results include or begin with the further input; when a word is detected as selected or completely input, it matches the first stage candidate words with the selected or input word, so that the first stage candidate words of the screened prediction results are, include or start with the selected or input word.
The feedback module 600 may feed part or all of the prediction results back to the user. In one implementation, the feedback module 600 may include display equipment, which may display all prediction results to the user and identify those input or selected by the user via a certain mark, or may display the remaining part of the prediction results based on the user's input or selection. When presenting the prediction results, they may be displayed in the candidate word bar, or in another area, such as beside the candidate word bar, on top of the candidate word bar, between the candidate word bar and the keyboard, at a preset place in the text display area, or in a corresponding area of the keyboard. The display mode may present the candidate words one by one according to their number, or display all candidate words simultaneously. In another implementation, one or more words of one or more obtained prediction results may be fed back to the user via other media equipment, such as a loudspeaker.
This disclosure may apply to many languages and shall not be limited to the concrete languages given in the examples. It shall be understood by those skilled in the art that the disclosure may apply to Indo-European languages, such as English, French, Italian, German, Dutch, Persian, Afghan, Finnish and so on; or Sino-Tibetan languages, such as Simplified Chinese, Traditional Chinese, Tibetic languages and so on; or Caucasian languages, such as Chechen, Georgian and so on; or Uralic languages, such as Finnish, Hungarian and so on; or North American Indian languages, such as Eskimo, Cherokee, Sioux, Muscogee and so on; or Austro-Asiatic languages, such as Cambodian, Bengalese, Blang and so on; or Dravidian languages, such as Tamil and so on; or Altaic languages, such as East Altaic, West Altaic and so on; or Nilo-Saharan languages, such as languages used in North Africa or West Africa; or Niger-Congo languages, such as Niger language, Congolese, Swahili and so on; or Khoisan languages, such as Hottentot, Bushman, Sandawe and so on; or Semitic languages, such as Hebrew, Arabic, Ancient Egyptian, Hausa and so on; or Austronesian languages, such as Bahasa Indonesia, Malay, Javanese, Fijian, Maori and so on.
For simplicity of description, word libraries or candidate words with a limited number of stages are taken as examples, with the possible candidate words of those limited stages listed. However, those skilled in the art should understand that the disclosure shall not be limited by the above number of stages of candidate words or by the number of candidate words acquired each time. For example, the more prediction stages there are, the more candidate words there are and the higher the accuracy is; however, each transmission may then cost more data traffic, and more storage space is needed as well. In practical use, the number of stages of candidate word libraries and the number of candidate words may be determined based on accuracy, data traffic, storage space and so on.
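The trade-off above can be made concrete with a toy two-stage candidate word library, where per-stage caps bound both the traffic per transmission and the storage needed. The database contents and parameter names are illustrative assumptions, not taken from the disclosure.

```python
# Toy two-stage prediction database: prediction basis (preset word
# length = 2) maps to (first-stage word, second-stage words) entries.
DATABASE = {
    "happy new": [
        ("year", ["to", "and"]),
        ("start", ["of"]),
    ],
}

def predict(prediction_basis, max_first=2, max_second=2):
    """Look up multi-stage candidates, capping each stage so that
    accuracy can be traded against traffic and storage."""
    entries = DATABASE.get(prediction_basis, [])[:max_first]
    return [(first, seconds[:max_second]) for first, seconds in entries]

print(predict("happy new"))
# → [('year', ['to', 'and']), ('start', ['of'])]
```

Raising `max_first`/`max_second` (or adding stages) improves coverage at the cost of a larger database and larger responses, which is exactly the tuning the paragraph describes.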
The “word” described above refers to the minimum composition unit of the input language that contributes meaning to sentences or paragraphs. It may carry an actual meaning, or may merely express a certain semanteme that cooperates with the context. For example, in Chinese, a “word” means an individual Chinese character; in English, a “word” may simply be an English word. The “character” described above means the minimum composition unit that makes up words. A “character” may be a letter composing an English word, or may be a phonetic symbol or stroke composing a Chinese character.
Specific embodiments are described above. It should be understood that the disclosure is not limited to the disclosed embodiments. Transformations or amendments within the scope of the claims do not depart from the spirit of the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
201410345173.7 | Jul 2014 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2015/084484 | 7/20/2015 | WO | 00 |