The present invention relates to document indexing, full-text search, and retrieval, and more particularly to a computer-implemented method of domain-specific full-text document search.
Search in text documents, as an instance of unstructured data, is ubiquitous. Search over very large document collections, such as the internet, is performed using well-known algorithms such as PageRank (Page, 2002). However, for search over smaller collections, such as at the enterprise or public-administration level, where the collection is much smaller and recall (as opposed to precision) plays an important role, different systems are employed, as embodied by many closed-source as well as open-source solutions, such as Lucene and Solr (The Apache Software Foundation, 2017). Many other small-scale and specialized systems are available as well, such as Wolters Kluwer's ASPI (Wolters Kluwer, 2018).
The core technology used in these systems, called "full-text search", is indexing of the text, where each index entry is stored together with a document ID and/or a position in the text. The index is implemented efficiently so that, given a search term in a query, the index can be quickly consulted and the referenced document and/or piece of text can be returned and displayed in some form to the user. Special attention is paid in the implementation to queries containing several search terms/words, so that the "join" operation in the search is performed efficiently. In addition, statistical or other quantitative methods are used to rank the output of the search in case many documents are found and are to be presented to the user. To restrict the number of documents, or to focus the search, the search interface typically presents other choices to the user (faceted search), such as restriction to a certain period of document creation, author, type of the document, etc. In addition, especially in critical applications, documents are mostly manually assigned "keywords" that better represent the topic of the document, and/or the documents are classified according to a predefined list of topics. These topics and/or keywords are then presented to the user and used in the search as well, or alone.
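The positional inverted index and the conjunctive "join" operation described above can be sketched as follows; the miniature document collection and the brute-force set intersection are purely illustrative:

```python
from collections import defaultdict

def build_index(docs):
    """Build a positional inverted index: term -> {doc_id: [positions]}."""
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, term in enumerate(text.lower().split()):
            index[term][doc_id].append(pos)
    return index

def search(index, query):
    """Conjunctive ('join') query: return ids of documents containing all terms."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], {}))
    for term in terms[1:]:
        result &= set(index.get(term, {}))
    return result

docs = {1: "the repair shop fixed the car",
        2: "the service facility was closed"}
idx = build_index(docs)
print(search(idx, "repair shop"))  # -> {1}
print(search(idx, "the closed"))   # -> {2}
```

A production system would store the postings in compressed, sorted form and intersect them by merging, but the data model is the same.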
Similar methods can be and are used for search in audio and video documents, with the additional step that a transcription has to be performed prior to indexing. Such transcription is done manually or automatically. Recently, automatic methods, such as automatic speech recognition, can be applied to transcribe the speech to text with sufficient accuracy. Once transcribed, even if not perfectly, the text can be indexed using the same methods as in the case of documents originally created as text. Keywords and topic classification are often applied even to documents which are not transcribed, allowing at least non-full-text search.
Current methods for small to medium-sized collections of documents, with up to approximately tens of millions of pages, suffer from three major problems, which are related to the properties of the natural languages that convey by far most of the information contained in these documents:
morphology, inflection and derivation,
ambiguity—homonymy,
synonymy.
Morphology, inflection and derivation: many large languages like German, Arabic, the Slavic languages, and especially Finno-Ugric and Turkic languages like Hungarian, Finnish and Turkish use inflection and/or derivation, changing the word from its base form, called the lemma (or root), to other morphological forms to express grammatical or semantic properties. These forms can range from one or two words per single lemma, e.g. for most English words, to tens of thousands of words, e.g. for some Finnish words. During full-text search, either normalization to the base form at indexing time or expansion to all forms at query time is performed. Even though current systems implement some of these approaches and do achieve 97-98% accuracy in lemmatization (Straka, 2016), they suffer from the other two problems mentioned.
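The two strategies, normalization at indexing time versus expansion at query time, can be illustrated with a toy hand-made lemma table; a real system would use a trained lemmatizer such as that of (Straka, 2016), and the word forms below are illustrative:

```python
# Toy mapping from inflected forms to the base form (lemma), and its inverse.
LEMMAS = {"book": "book", "books": "book", "booked": "book", "booking": "book"}
FORMS = {"book": ["book", "books", "booked", "booking"]}

def normalize(tokens):
    """Index-time strategy: replace every word by its lemma before indexing."""
    return [LEMMAS.get(t, t) for t in tokens]

def expand(term):
    """Query-time strategy: expand a query term to all of its known forms."""
    lemma = LEMMAS.get(term, term)
    return FORMS.get(lemma, [term])

print(normalize(["she", "booked", "two", "books"]))
# -> ['she', 'book', 'two', 'book']
print(expand("booking"))
# -> ['book', 'books', 'booked', 'booking']
```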
Ambiguity (homonymy): many words or word forms in a language are homonymous or polysemous, i.e. there is ambiguity in their interpretation, either grammatical or semantic. For example, "book" contained in these documents could mean "to make a reservation" or denote a physical object to be read. Such ambiguity has to be resolved using context, and such a resolution might not be correct. Accuracy in polysemy disambiguation is still well below 80% on average. The common problem of any such method is that its result is categorical, i.e. it has to select one lemma or one meaning in polysemous cases, leaving no space for uncertainty and its handling.
Synonymy, i.e. the fact that the same meaning can be expressed in several ways, is another obstacle, since every user and every domain might use different words to express the same thing, leaving the user with only a subset of the relevant results. In current systems, this is alleviated by techniques of "query expansion", in which the user query is replaced by a disjunction of queries: every search term is multiplied using synonyms, i.e. words or multiword expressions that share the same meaning. These synonym sets have to be pre-defined, and they suffer mainly from incompleteness and from the fact that no proper weighting is applied to them.
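A minimal sketch of such query expansion follows; the synonym sets and their weights are hypothetical and hand-assigned (current systems typically apply no weighting at all, which is exactly the drawback noted above):

```python
# Hypothetical pre-defined synonym sets with weights; a real system would
# need these curated or learned for the given domain.
SYNONYMS = {
    "repair": [("repair", 1.0), ("fix", 0.8), ("mend", 0.6)],
    "shop":   [("shop", 1.0), ("facility", 0.7), ("store", 0.7)],
}

def expand_query(terms):
    """Replace a conjunctive query by a list of weighted disjunctions,
    one disjunction per original search term."""
    return [SYNONYMS.get(t, [(t, 1.0)]) for t in terms]

for disjunction in expand_query(["repair", "shop"]):
    print(disjunction)
```

The expanded query is then evaluated as a conjunction of disjunctions, with the weights usable for ranking the matches.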
The above-mentioned drawbacks are eliminated by the present computer-implemented method of domain-specific full-text document search, comprising an indexing-of-documents set of steps and a querying-of-documents set of steps, whereas
during indexing of documents set of steps:
while during querying documents set of steps,
During the whole method, three main processes are involved: preparation of embeddings, indexing of a set of relevant documents, and querying of the indexed documents.
The novelty of the approach is in the following additions to existing methods of text processing: (a) morphological and dependency syntactic analysis, (b) semantic role labeling, (c) named entity recognition, (d) named entity linking and (e) mapping (conversion and replacement) of words, sentences and text segments (up to document length) to embeddings. These additions will be interlinked with the standard text processing components used previously in connection with full-text search (tokenization, sentence break detection, lemmatization). Indexing will be multidimensional, i.e., performed on the resulting embeddings on top of the words, lemmas and the named and linked entities in the semantically analyzed structure. Search will be performed in this multidimensional space using the embeddings as descriptors together with similarity metrics, producing a weighted ranking of results.
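The interlinking of these components can be sketched as a processing pipeline; every function below is a toy stand-in for the real module named in the description, and the token annotations and URIs are illustrative:

```python
def tokenize(text):
    """Toy tokenizer and normalizer."""
    return text.lower().replace(".", "").split()

def lemmatize(tokens):
    """Toy lemmatizer (stand-in for a trained model)."""
    toy = {"tables": "table", "lists": "list"}
    return [toy.get(t, t) for t in tokens]

def recognize_entities(tokens):
    """Toy named entity recognizer: NE type or None per token."""
    toy = {"wikipedia": "ORG"}
    return [toy.get(t) for t in tokens]

def link_entities(tokens, ne_types):
    """Toy named entity linking (grounding): attach an illustrative URI
    wherever an entity was recognized."""
    return [f"uri:{t}" if ne else None for t, ne in zip(tokens, ne_types)]

def analyze(text):
    """Chain the components; each token ends up with the four attributes
    (word, lemma, NE type, grounded entity) that are later embedded."""
    tokens = tokenize(text)
    lemmas = lemmatize(tokens)
    ne_types = recognize_entities(tokens)
    grounded = link_entities(tokens, ne_types)
    return list(zip(tokens, lemmas, ne_types, grounded))

for row in analyze("Wikipedia lists tables."):
    print(row)
```

Each output row carries the multidimensional annotation from which the embeddings, and therefore the index entries, are derived.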
In general, the invented method uses linguistic information in deep learning, and the main inventive idea is the particular combination of the text analysis modules listed above with embeddings computed in the semantic space of the analyzed text and used as descriptors for similarity search, as an approximation of a continuous measure for soft matching of user queries to the (indexed) text.
The present method performs indexing of text documents, whether created originally as text, transcribed manually or automatically from speech, or scanned and processed by an OCR device, and then allows for searching the text using short and/or long queries in natural language with high semantic accuracy.
As the training corpus T1, for example, a collection such as the collection of documents in the Universal Dependencies collection (Straka, 2016) is used.
Semantic analysis is performed, resulting in a semantic structure of every segment of the text in terms of nested predicate structures based on linguistic properties of words of any origin, such as verbs, nouns and adjectives, with a variable number of arguments. The corpus T2 contains manually prepared data, for example treebanks with semantic annotation, such as (Hajič et al., 2006). Semantic analysis is performed using neural networks such as that of Dozat et al. (2018).
The ontology entry is, for example, based on collections of texts such as Wikipedia, DBpedia, medical ontologies such as the ICD, domain-based ontologies, etc. It uses technology such as (Taufer, 2016), which works universally without the need for supervised training. Predicates are linked to semantic classes consisting of multilingual sets of normalized predicates, reverse-linked to possible lemmas and/or multiword units with their grammatical properties (Urešová et al., 2018).
The corpus R3, as a result of semantic analysis, is fully grounded by using URIs (universal resource identifiers) contained in the nodes of the structure of nested predicates. No training is necessary for the combination of the R1 and R2 corpora; it is performed as a straightforward substitution of entities from R3 into the graphs in R2.
Semantic entity embeddings are vectors of real numbers. They are created from the corpus R4 using known techniques as described in Mikolov et al. (2013). A set of tables E5 is created as a mapping from a text unit to such a vector. The mapping is computed either directly (and stored in the form of a table), or implemented by a trained artificial neural network, which behaves as a mapping function; it will be referred to in the subsequent description simply as the "embeddings (mapping) table". The following combinations of text units are used for creating the embeddings and stored in E5:
All of the above is also computed from a sequence of these units as found in the analyzed document, in which case an artificial neural network is used as the mapping function, formally behaving as a table lookup performing a mapping from a full document to a set of embeddings. These mappings are considered part of E5.
Text positions of embeddings are occurrences of the individual text units referred to.
Multidimensional search methods are, for example, those described in Nalepa et al. (2018). Returned documents are pruned to a predefined number of outputs desired by the user or the user interface.
As an example of the advantage of the method described above, two cases are described below: (a) a case C1 where current methods fail to discover a document (or an occurrence of a term in a document), called a recall error, and (b) a case C2 in which a false positive occurs and a document is found and returned to the user as relevant when in fact it is not, called a precision error.
In the case C1, the cause is the lack of synonym incorporation, which the invented method solves by approximating the semantic distance between words, lemmas and entities. Consider the terms (multiword expressions) "repair shop" and "service facility"; if a document contains one of them, then even the standard method of lemmatized indexing will not find documents containing only or mostly the other term, since their relevance will not rank high or they will not be found at all. However, embeddings computed on large amounts of documents will convert both terms to vectors which will be close together under the similarity measure applied to their descriptors represented by the embeddings; embeddings in general display this property, as (Mikolov et al., 2013) or (Sugathadasa et al., 2018) have shown, but for search the positive effect will be amplified, because the method according to this invention computes them in the semantic context by also using named and grounded entities.
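The effect can be illustrated with hypothetical low-dimensional embeddings (real embeddings have hundreds of dimensions and are learned from large corpora); the multiword expressions are averaged into phrase vectors and compared by cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-dimensional word embeddings, constructed so that
# semantically related words lie close together.
EMB = {
    "repair":   [0.9, 0.2, 0.1],
    "shop":     [0.7, 0.3, 0.2],
    "service":  [0.8, 0.3, 0.1],
    "facility": [0.6, 0.4, 0.2],
    "banana":   [0.1, 0.1, 0.9],
}

def phrase_vec(words):
    """Average the word vectors of a multiword expression."""
    vecs = [EMB[w] for w in words]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

a = phrase_vec(["repair", "shop"])
b = phrase_vec(["service", "facility"])
c = phrase_vec(["banana"])
print(cosine(a, b) > cosine(a, c))  # the synonymous phrases are closer
```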
In the case C2, homonymy is the problem that causes current systems to make precision errors. For example, the word "table" might signify a physical object that is used, e.g., for sitting at, but also an abstract mathematical or computer-science-related object, e.g. a spreadsheet or part of a relational database, or other abstract entities; Wikipedia currently distinguishes nine different meanings of the word. Since the proposed processing pipeline contains the named entity linking component, which distinguishes these meanings, the document will be indexed with an embedding that corresponds to the proper meaning and not to the other ones; supposing the query is disambiguated in the same way, the match will be semantically coherent. Moreover, even if the named entity linking module is not perfect, the embedding, i.e. the vector representation of the entity, is a concatenation that also includes the embeddings of the plain words and lemmas; this ensures that the distance to the other meanings will not prevent relevant documents from being ranked high even if the named entity linking component makes an error. This "soft fail" mechanism is an inherent property of embeddings and will be transferred by the proposed processing pipeline into the similarity search, keeping both precision and recall high (or at least higher than current full-text search methods which only use hard indexing by lemmas or similar, even multiword, entities).
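The soft-fail property of concatenated descriptors can be illustrated numerically; the vectors below are hypothetical, and the point is only that an error confined to the entity part of the concatenation does not destroy the overall similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical per-token embeddings; the descriptor is the concatenation of
# the word, lemma and linked-entity embeddings, so a wrong entity link only
# perturbs one third of the vector.
word           = [0.9, 0.1]
lemma          = [0.8, 0.2]
entity_correct = [0.7, 0.3]
entity_wrong   = [0.2, 0.8]      # the linker picked the wrong meaning
unrelated      = [0.1, 0.9] * 3  # descriptor of an unrelated token

good  = word + lemma + entity_correct  # correctly linked descriptor
erred = word + lemma + entity_wrong    # descriptor with a linking error

# Despite the linking error, the erred descriptor stays closer to the
# correct one than to the unrelated one.
print(cosine(erred, good) > cosine(erred, unrelated))
```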
The device embodying the invention will consist of a computer on which the three above-mentioned software components will be implemented. Each component consists of a series of modules implementing the individual sets of steps, as described above and depicted in the Figures below.
The attached schemes serve to illustrate the invention, where
The following three main processes involved in the present method are demonstrated in the enclosed schemes.
In a preferred embodiment scenario, the following concrete implementation pipelines (sequences of processing modules) are used. The referenced modules are assumed to already contain all the models necessary to perform the respective step; these models are either available with the individual components directly, or they can be trained (learned from data, for example for a different language or domain) in the way also described with the individual components through the references.
1. In the creation of embeddings set of steps (
Embeddings are created in the final step. First, the following data streams are created by extraction from R4, based on the annotation attributes: the word sequence, the lemma sequence, the sequence of typed named entities and the sequence of grounded entities. These sequences are then fed to an embedding-creating subsystem, which is implemented by a deep artificial neural network, as described in (Mikolov et al., 2013), where "word" is replaced by the respective units (words, lemmas, NEs, grounded NEs) in the four data streams. The result is E5, the embedding tables mapping the four types of units to real-valued vectors of a predefined length (as described in (Mikolov et al., 2013)).
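The shape of this step can be sketched with a toy co-occurrence-based stand-in for the neural training of (Mikolov et al., 2013); the lemma stream and the resulting vectors are illustrative only, and a real system would train dense vectors of a predefined length instead:

```python
def train_embeddings(stream, window=1):
    """Toy stand-in for word2vec-style training: embed each unit by its
    co-occurrence counts with neighbours within the given window. The
    result plays the role of one of the E5 mapping tables."""
    vocab = sorted(set(stream))
    index = {u: i for i, u in enumerate(vocab)}
    table = {u: [0.0] * len(vocab) for u in vocab}
    for i, unit in enumerate(stream):
        for j in range(max(0, i - window), min(len(stream), i + window + 1)):
            if i != j:
                table[unit][index[stream[j]]] += 1.0
    return table

# One of the four data streams extracted from R4 (here: the lemma stream).
lemma_stream = ["the", "repair", "shop", "and", "the", "service", "shop"]
E5_lemmas = train_embeddings(lemma_stream)
print(E5_lemmas["shop"])
```

The same procedure is run once per data stream (words, lemmas, NEs, grounded NEs), yielding the four tables of E5.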
2. In the document indexing series of steps (
“A2T::CS::MarkEdgesToCollapse” module, as described at http://lindat.mff.cuni.cz/services/treex-web/run. For the Named Entity Recognition and Linking steps, two successive sub-steps are required: first, a named entity module must process the result of the semantic analysis module (DiR2) and thus identify the spans of named entities and assign them a type; for this purpose, NameTag (https://lindat.mff.cuni.cz/en/services#NameTag) is used. Its output is then fed directly to a Named Entity Linking (grounding) sub-step, which is implemented by (Taufer, 2016) and results in DiR3. DiR4 is then produced by simply merging DiR2 and DiR3 based on the position of the individual words in the text, using stand-off annotation, a standard technique applied in text annotation.
All four attributes of the resulting annotation in DiR4, namely words, lemmas, named entities and grounded entities, are then mapped to embeddings using the corresponding table from E5. These embeddings are then associated with the document Di in its (inverted) index X, and to each embedding a position in the document is attached for targeted display to the user at query time if the document is selected. In addition, the embeddings (concatenated to form a single vector) for a given position in a document are taken as descriptors for the similarity search procedure according to (Nalepa et al., 2018) and processed to create the necessary indexing structures for search at query time.
An additional document or set of documents may be added to the index X at any time by following all the steps described here and in
3. At query time, the user enters a query Q in the form of text (or a spoken query is transcribed to a text by some automatic speech recognition module (not included in the
All four attributes of the resulting annotation in QR4, namely words, lemmas, named entities and grounded entities, are then mapped to embeddings using the corresponding table from E5, forming a set of embeddings to be used as descriptors in the similarity search procedure as described in (Nalepa et al., 2018).
The similarity search procedure (Nalepa et al., 2018) against the set of documents Di as indexed in X, using the embeddings extracted from the query by the above procedure as descriptors, results in a set of documents Dj and a set of positions {pjx} within each such document, ranked by similarity. These documents are displayed to the user originally posing the query Q in a compact form, with a reference to the full document (and a position in it).
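The query-time procedure can be sketched as a brute-force ranking; a real implementation would use the indexing structures of (Nalepa et al., 2018) rather than a linear scan, and the descriptors, document IDs and positions below are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Index X: a flat list of (descriptor, doc_id, position) entries, where each
# descriptor is the concatenated embedding for one position in one document.
X = [
    ([0.9, 0.1, 0.2],   "D1", 14),
    ([0.2, 0.8, 0.3],   "D2", 3),
    ([0.85, 0.15, 0.25], "D3", 40),
]

def query(q_descriptor, index, k=2):
    """Rank all indexed positions by similarity to the query descriptor and
    return the top-k (document, position) pairs."""
    ranked = sorted(index, key=lambda e: cosine(q_descriptor, e[0]),
                    reverse=True)
    return [(doc, pos) for _, doc, pos in ranked[:k]]

print(query([0.9, 0.1, 0.2], X))  # -> [('D1', 14), ('D3', 40)]
```

The returned positions are exactly what the user interface needs for displaying each hit in a compact form with a reference into the full document.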