A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention generally relates to the field of information extraction and table understanding. More specifically, the present invention relates to techniques for extracting information from structured textual data and constructing logical structures from semi-structured textual data within complex table layouts.
Tables are a convenient way to represent information in a structured format and are suitable for establishing and presenting relational data. Visually rich documents are very common in daily life. Examples include purchase receipts, insurance policy documents, customs declaration forms, and so on. In such documents, visual and layout information is critical for document understanding.
Table recognition is a technique for extracting meaningful information from tables in electronic and physical documents, such as financial documents, receipts, invoices, or quotations, which can then be converted into editable data and stored. Table segmentation can construct one-to-one corresponding relationships that convert the table into machine-understandable knowledge. For example, through table recognition, a document having a table format can be scanned, text-recognized, and converted into electronic data to be stored in a searchable database. This technology is important for expanding table utilization, enabling users to rapidly and accurately search and extract key data from tables.
However, in some practical cases, table recognition is challenged in precise extraction when faced with a complex table layout, such as nested rows/columns or overlapping rows/columns in the table. That is, existing table recognition technologies can recognize textual information in the tables but not the actual table structure. In general, table recognition for heterogeneous documents is challenging due to the wide variety of table layouts. Therefore, there is a need in the art for a high-accuracy approach to table recognition that can extract information from various table layouts.
The present invention provides a method and an apparatus for extracting information from image-based content presented in a structured layout. A structured layout is one in which texts are distributed on a page of a document in a certain arrangement, such as a table. In accordance with one aspect of the present invention, a method for extracting information from a table to process table recognition comprises the following processing steps. Characters of a table are extracted from an electronic or physical document by a character classifier. The characters, with their two-dimensional positions, are merged into n-gram characters by the character classifier. The n-gram characters are merged into words and text lines by a multi-task graph neural network (GNN) with a two-stage GNN mode. The two-stage GNN mode execution comprises processing steps including: extraction of spatial features, semantic features, and convolutional neural network (CNN) image features from a target source; a first GNN stage to generate graph embedding spatial features from the extracted spatial features; and a second GNN stage to generate graph embedding semantic features and graph embedding CNN image features from the extracted semantic features and the extracted CNN image features, respectively. As a result, the text lines are merged into cells; the cells are grouped into rows, columns, and key-value pairs, so as to obtain a row relationship among the cells, a column relationship among the cells, and a key-value relationship among the cells; and one or more adjacency matrices are generated in response to the row, column, and key-value relationships among the cells.
In one embodiment, the method further comprises: generating content of the table in the form of editable electronic data according to the adjacency matrices; and preserving the content of the table in extensible markup language (XML) format.
In accordance with another aspect of the present invention, an apparatus for extracting information from a table to process table recognition comprises a character classifier and a multi-task GNN. The character classifier, having an optical character reader (OCR) engine, is configured to extract one or more characters of a table from an electronic or physical document. The character classifier is configured to merge the characters, with their one or more two-dimensional positions, into n-gram characters. The multi-task GNN with a two-stage GNN mode is trained and configured to extract spatial features, semantic features, and convolutional neural network (CNN) image features from a target source. In a first GNN stage, the GNN generates graph embedding spatial features from the extracted spatial features. In a second GNN stage, the GNN generates graph embedding semantic features and graph embedding CNN image features from the extracted semantic features and the extracted CNN image features, respectively. The GNN is further configured to: merge the n-gram characters into words and text lines; merge the text lines into cells; group the cells into rows, columns, and key-value pairs, so as to obtain a row relationship among the cells, a column relationship among the cells, and a key-value relationship among the cells; and generate adjacency matrices in response to the row, column, and key-value relationships among the cells.
The advantages of the present invention include: (1) In the two-stage GNN mode, the second GNN stage follows the first GNN stage, such that the first weight matrix for the semantic features and the second weight matrix for the CNN image features can be separated from each other, thereby allowing the semantic and CNN image features to be processed without exerting any influence on each other. (2) The grouping of the cells is executed based on the semantic features thereof. As such, when table recognition faces the case of segmenting a table having a complex layout, the accuracy of grouping the cells of the table can be maintained by employing the semantic features of the cells. (3) Information is extracted from the table in the correct reading order, and the content of the table can be extracted as structured data and preserved in XML format, which is advantageous for constructing indexes to aid searching and for providing quantitative data.
Embodiments of the invention are described in more detail hereinafter with reference to the drawings, in which:
In the following description, methods and apparatuses for extracting information from image-based content in a structured layout, and the like, are set forth as preferred examples. It will be apparent to those skilled in the art that modifications, including additions and/or substitutions, may be made without departing from the scope and spirit of the invention. Specific details may be omitted so as not to obscure the invention; however, the disclosure is written to enable one skilled in the art to practice the teachings herein without undue experimentation.
The present invention provides a method and an apparatus for image-based structured layout content recognition, which can convert structured layout information of an electronic or physical document into editable electronic data and then store the editable electronic data. A structured layout is one in which texts are distributed on a page of a document in a certain arrangement, such as a table. In accordance with one embodiment of the present invention, an image-based table content recognition method is executed by at least two logical components: a character classifier and a multi-task GNN. An ordinarily skilled person in the art may easily envision and realize the logical components by implementing them in software, firmware, and/or machine instructions executable in one or more computer processors, specially configured processors, or combinations thereof.
In accordance with one embodiment, the character classifier is a natural language processing (NLP) based character classifier for language character recognition. At design time, the character classifier is trained with a training data set containing characters of a selected language. For example, in the case where English is the selected language, the training data set may contain the characters A-Z and a-z. During training, a usable number of images of different handwriting styles/forms, or images of print writing with different fonts, of each character (e.g. 100 images per character) are fed to the character classifier, such that the training of the character classifier constructs a character feature database, so as to make the character classifier recognize the characters of the selected language. In various embodiments, the character classifier is constructed based on a neural network, such as a convolutional neural network (CNN). In various embodiments, the character classifier also comprises an OCR engine for performing conversion of images of typed, handwritten, or printed characters into machine codes. In still other embodiments, any number of the process steps in the methods may be performed by one or more classifiers of various types and/or implementations made suitable to perform the tasks in those process steps.
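By way of a non-limiting illustration only, the following sketch shows one way a CNN-based character classifier of this kind might be realized in Python with PyTorch; the layer sizes, input resolution, and class count are assumptions made for illustration and are not prescribed by the present embodiments.

```python
import torch
import torch.nn as nn

class CharClassifier(nn.Module):
    """Illustrative CNN that maps a 32x32 grayscale character image to
    one of 52 classes (A-Z, a-z); all sizes are assumptions only."""
    def __init__(self, num_classes: int = 52):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Usage: a batch of 4 character crops, each 1x32x32.
logits = CharClassifier()(torch.randn(4, 1, 32, 32))
predicted_classes = logits.argmax(dim=1)
```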
In general, a GNN is a connectionist model that can capture the dependence of graphs via message passing between the nodes of a graph and can update the hidden states of its nodes by a weighted sum of the states of their neighborhoods, so as to learn the distribution of large experimental data. Accordingly, GNNs are able to model the relationships between nodes in a graph and produce numeric representations of them. One of the reasons for choosing GNNs is that much readily available real-world data can be represented in graphical form.
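As a minimal sketch of the neighborhood-weighted state update described above, the following Python function assumes a simple degree-normalized averaging scheme in NumPy; the specific weighting and training procedure of the multi-task GNN are not shown here.

```python
import numpy as np

def gnn_layer(node_states, adjacency, weight):
    """One illustrative message-passing step: each node's new hidden state
    is a weighted sum of its neighbors' states, followed by a linear map
    and a non-linearity (all details here are assumptions)."""
    # Row-normalize the adjacency so each node averages over its neighbors.
    degrees = adjacency.sum(axis=1, keepdims=True)
    norm_adj = adjacency / np.clip(degrees, 1.0, None)
    messages = norm_adj @ node_states          # weighted sum of neighbor states
    return np.tanh(messages @ weight)          # updated hidden states

# Usage: 5 nodes, 8-dimensional states, a random symmetric graph.
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 8))
adj = (rng.random((5, 5)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)
updated = gnn_layer(states, adj, rng.normal(size=(8, 8)))
```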
The spatial feature 12 represents geometric features of the text bounding box, such as coordinates, height, width, and height-width ratio (a.k.a. aspect ratio); the semantic feature 14 represents an n-gram character embedding, word embedding, or text line embedding from a pretrained database (e.g. millions of raw data and text documents); and the CNN image feature 16 represents CNN/image features at the mid-point of the text bounding box, which may contain information about font size, font type, and explicit separators.
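For illustration only, the sketch below assembles the spatial feature of a text bounding box as enumerated above; the semantic and CNN image features are merely indicated in a comment, since their embeddings and image crops depend on whichever pretrained models are used. The field names and feature ordering are assumptions, not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class TextBox:
    x: float        # top-left x coordinate
    y: float        # top-left y coordinate
    width: float
    height: float
    text: str

def spatial_feature(box: TextBox) -> list:
    """Geometric features of a text bounding box as described above:
    coordinates, height, width, and aspect ratio (illustrative layout)."""
    aspect_ratio = box.height / box.width if box.width else 0.0
    return [box.x, box.y, box.width, box.height, aspect_ratio]

# Usage: the semantic feature would come from a pretrained embedding of
# box.text, and the CNN image feature from a crop at the box mid-point;
# both are omitted here and would be concatenated per node in practice.
print(spatial_feature(TextBox(x=10, y=20, width=50, height=12, text="Total")))
```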
In one embodiment, the GNN is separated into three sub-networks: a first GNN 24, a second GNN 30, and a third GNN 32. In another embodiment, the GNN is configured differently at different processing steps or stages, such that the differently configured GNNs are labeled: a first GNN 24, a second GNN 30, and a third GNN 32. In the first GNN stage 20, the spatial features 12 are input into the first GNN 24, such that graph embedding spatial features, a first weight matrix for the semantic features 26, and a second weight matrix for the CNN image features 28 can be output from the first GNN 24.
In the second GNN stage 22, the processing of the semantic and CNN image features 14 and 16 is performed in a parallel manner. That is, the semantic features 14 and the CNN image features 16 may be fed to different GNNs. As shown in
In the two-stage GNN mode, the second GNN stage 22 is executed after the generation of the first weight matrix for the semantic features 26 and the second weight matrix for the CNN image features 28. As such, the first weight matrix for the semantic features 26 and the second weight matrix for the CNN image features 28 can be separated from each other, such that the semantic and CNN image features 14 and 16 can be further processed without exerting any influence on each other.
After the second GNN stage 22, in addition to the spatial, semantic, and CNN image features 12, 14, and 16 obtained prior to the first and second GNN stages 20 and 22, the graph embedding spatial features, the graph embedding semantic features, and the graph embedding CNN image features are further obtained. More specifically, compared with sequential modeling, a GNN can learn the importance among text blocks more flexibly and precisely. The degree of importance among text blocks is used to generate a text block representation that incorporates the context. Briefly, by processing the spatial, semantic, and CNN image features 12, 14, and 16 in the two-stage GNN mode, these features 12, 14, and 16 can be integrated to output the respective graph embedding features, which is advantageous for accurately recognizing table content.
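The following sketch illustrates, under simplifying assumptions, how the two-stage mode keeps the semantic and CNN image branches separate: the first stage embeds the spatial features, and the second stage applies a distinct weight matrix to each of the other two modalities. The stand-in random weight matrices merely mark where the outputs 26 and 28 of the first GNN stage would be used; they are not the trained matrices of the invention.

```python
import numpy as np

def propagate(features, adjacency):
    """Illustrative degree-normalized neighborhood averaging used by both stages."""
    degrees = np.clip(adjacency.sum(axis=1, keepdims=True), 1.0, None)
    return (adjacency / degrees) @ features

def two_stage_gnn(spatial, semantic, image, adjacency, rng=None):
    """Sketch of the two-stage mode: stage one embeds the spatial features;
    stage two processes the semantic and image branches in parallel with
    separate weight matrices so that neither influences the other."""
    rng = rng or np.random.default_rng(0)
    # First GNN stage: graph embedding of the spatial features.
    spatial_embed = np.tanh(propagate(spatial, adjacency))
    # Stand-ins for the first stage's outputs 26 and 28 (assumed random here).
    w_semantic = rng.normal(size=(semantic.shape[1], semantic.shape[1]))
    w_image = rng.normal(size=(image.shape[1], image.shape[1]))
    # Second GNN stage: the two branches are kept separate.
    semantic_embed = np.tanh(propagate(semantic @ w_semantic, adjacency))
    image_embed = np.tanh(propagate(image @ w_image, adjacency))
    return spatial_embed, semantic_embed, image_embed

# Usage: 4 nodes with 3-, 5-, and 6-dimensional features per modality.
rng = np.random.default_rng(1)
adj = np.ones((4, 4)) - np.eye(4)
out = two_stage_gnn(rng.normal(size=(4, 3)), rng.normal(size=(4, 5)),
                    rng.normal(size=(4, 6)), adj)
```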
The following further describes the workflow for the table content recognition. Referring to
In S10, an image of a table in an electronic or physical document is captured. In various embodiments, the table-recognition system 100 may further include an optical scanner 102 electrically coupled to the character classifier 110 and the GNN 120, so as to capture the image and transmit it to either the character classifier 110 or the GNN 120. To illustrate, a table image 200 shown in
After capturing the table image, the method continues with S20. In S20, the image is transmitted to the character classifier 110 for character extraction. The character classifier 110 obtains the extracted information from the characters in the table image 200. Specifically, the extracted information may include the text and coordinates for each of the characters. In various embodiments, the character classifier 110 extracts information via OCR with a predetermined language. For example, an OCR engine for English can be selected. According to the exemplary table image 200 shown in
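Purely as an illustration of character extraction with coordinates, the sketch below uses the open-source Tesseract engine through the pytesseract package (which in turn requires an installed Tesseract binary); the character classifier 110 of the present embodiments is not limited to this engine, and the file path shown is a placeholder.

```python
import pytesseract
from PIL import Image

def extract_characters(image_path: str, language: str = "eng"):
    """Illustrative character extraction with an off-the-shelf OCR engine
    (Tesseract via pytesseract). Returns (char, left, bottom, right, top)
    tuples, i.e. each character with its two-dimensional position."""
    image = Image.open(image_path)
    boxes = pytesseract.image_to_boxes(image, lang=language)
    results = []
    for line in boxes.splitlines():
        char, x1, y1, x2, y2, _page = line.split(" ")
        results.append((char, int(x1), int(y1), int(x2), int(y2)))
    return results

# Usage (the path is a placeholder for a scanned table image):
# print(extract_characters("table_image.png"))
```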
After obtaining the extracted information, the method continues with S30. In S30, the extracted characters with their two-dimensional positions (i.e. the coordinates thereof) are merged into n-gram characters. For example, in response to the exemplary table image 200 shown in
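A minimal sketch of such a position-based merge is given below; the horizontal-gap threshold, the maximum n-gram length, and the tuple layout are illustrative assumptions rather than features of the claimed method.

```python
def merge_into_ngrams(chars, gap_threshold=5.0, max_n=3):
    """Illustrative merge of OCR character boxes into n-grams: characters
    on the same line whose horizontal gap is below a threshold are joined,
    up to max_n characters per n-gram. Each char is (text, x, y, width)."""
    chars = sorted(chars, key=lambda c: (round(c[2]), c[1]))  # by row, then x
    ngrams, current = [], []
    for ch in chars:
        if current:
            prev = current[-1]
            same_line = abs(ch[2] - prev[2]) < gap_threshold
            close = (ch[1] - (prev[1] + prev[3])) < gap_threshold
            if same_line and close and len(current) < max_n:
                current.append(ch)
                continue
            ngrams.append("".join(c[0] for c in current))
            current = []
        current.append(ch)
    if current:
        ngrams.append("".join(c[0] for c in current))
    return ngrams

# Usage: the characters of "Qty" followed by a distant "2".
print(merge_into_ngrams([("Q", 0, 0, 4), ("t", 5, 0, 4),
                         ("y", 10, 0, 4), ("2", 80, 0, 4)]))  # ['Qty', '2']
```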
Referring to
In step S44, the n-gram-character spatial features, semantic features, and CNN image features 212, 214, and 216 are processed by the GNN through the two-stage GNN mode, thereby integrating them into n-gram-character graph embedding spatial features, semantic features, and CNN image features.
The graph embedding features are used to serve as merging materials to obtain words 220 of the table image. In response to the exemplary table image 200 shown in
Then, continuing with step S46, the words 220 are merged into text lines 224 by the GNN with the two-stage GNN mode. In one embodiment, a text line probability matrix is introduced into the merging to serve as a weight matrix for obtaining the merging result of the text lines 224. Similarly, the text line probability matrix acts as an adjacency matrix for the words 220, and the text lines 224 are the “argmax set” of the text line probability matrix. To obtain the merging result of the text lines 224, cliques of an adjacency matrix for the words 220 are found, and the words in each clique are merged into “a text line”. In response to the exemplary table image 200 shown in
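For illustration, the clique-based merging described above may be sketched as follows, assuming the NetworkX library for maximal-clique finding and an assumed threshold of 0.5 for converting the probability matrix into an adjacency matrix; the actual probability matrix would be produced by the trained GNN.

```python
import numpy as np
import networkx as nx

def merge_by_cliques(items, probability_matrix, threshold=0.5):
    """Illustrative clique-based merging: threshold the pairwise probability
    matrix into an adjacency matrix, find maximal cliques, and merge the
    members of each clique (the threshold value is an assumption)."""
    adjacency = (np.asarray(probability_matrix) >= threshold).astype(int)
    np.fill_diagonal(adjacency, 0)
    graph = nx.from_numpy_array(adjacency)
    merged = []
    for clique in nx.find_cliques(graph):          # maximal cliques
        merged.append(" ".join(items[i] for i in sorted(clique)))
    return merged

# Usage: the words "Item" and "No" likely belong to one text line; "2" to another.
prob = [[1.0, 0.9, 0.1],
        [0.9, 1.0, 0.2],
        [0.1, 0.2, 1.0]]
print(merge_by_cliques(["Item", "No", "2"], prob))   # ['Item No', '2']
```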
Referring to
In S54, the text line spatial features, semantic features, and CNN image features 230, 232, and 234 are processed by the GNN through the two-stage GNN mode, thereby integrating them into text line graph embedding spatial features, semantic features, and CNN image features. Herein, the two-stage GNN mode is the same as described with reference to
Next, these graph embedding features are used to serve as merging materials for the cells 240, wherein each “cell” has a meaningful set of characters and/or words and forms an element of the table. In response to the exemplary table image 200 shown in
Then, after obtaining the cells 240, the method continues with S60 for grouping the cells into rows, columns, and key-value pairs. As shown in
The grouping of the cells 240 is executed based on the semantic features thereof. The reason for relying on the semantic features is that no matter how the table layout changes, the semantics are coherent within a cell, and the semantics are similar within a column or row. As such, when table recognition faces the case of segmenting a table having a complex layout (e.g. nested rows, nested columns, overlapping columns, or an irregular format), a reduction in the accuracy of grouping the cells of the table can be avoided by employing the semantic features of the text lines. Moreover, for the case of a table having a row spanning several columns or a column spanning several rows, considering the semantic features of the text lines also avoids low accuracy.
In various embodiments, row, column, and key-value pair probability matrices are introduced into the grouping to serve as weight matrices for obtaining the grouping results of the rows 250, the columns 252, and the key-value pairs 254, respectively. Similarly, these probability matrices act as adjacency matrices for the cells 240, and the rows 250, the columns 252, and the key-value pairs 254 are the “argmax sets” of the corresponding probability matrices, respectively. To obtain the grouping result of the rows 250, the columns 252, or the key-value pairs 254, cliques of the corresponding adjacency matrix for the cells 240 are found, and the cells in each clique are grouped into “a row”, “a column”, or “a key-value pair”. In response to the exemplary table image 200 shown in
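The sketch below illustrates, with assumed probability values and an assumed threshold, how the three probability matrices may be thresholded into separate row, column, and key-value adjacency matrices for the cells; in the present embodiments, the actual matrices are produced by the trained GNN rather than supplied by hand.

```python
import numpy as np

def relationship_adjacency(prob_matrices, threshold=0.5):
    """Illustrative thresholding of the row, column, and key-value
    probability matrices into 0/1 adjacency matrices (threshold assumed)."""
    out = {}
    for name, prob in prob_matrices.items():
        adj = (np.asarray(prob) >= threshold).astype(int)
        np.fill_diagonal(adj, 0)
        out[name] = adj
    return out

# Usage with 4 cells (indices 0-3): "Item No", "Description", "1", "Apple";
# all probability values below are assumptions for illustration.
prob = {
    "row":       [[1, .9, .1, .1], [.9, 1, .1, .1], [.1, .1, 1, .9], [.1, .1, .9, 1]],
    "column":    [[1, .1, .8, .1], [.1, 1, .1, .8], [.8, .1, 1, .1], [.1, .8, .1, 1]],
    "key_value": [[1, .1, .9, .1], [.1, 1, .1, .9], [.9, .1, 1, .1], [.1, .9, .1, 1]],
}
adjacency = relationship_adjacency(prob)
print(adjacency["key_value"])   # links "Item No"->"1" and "Description"->"Apple"
```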
Thereafter, according to the obtained rows 250, columns 252, and key-value pairs 254, a row relationship among the cells, a column relationship among the cells, and a key-value relationship among the cells can be determined and obtained. To illustrate,
Referring to
S80 follows S70, in which S80 is the preservation of structured data. In S80, according to the adjacency matrices, the table layout can be identified by the GNN 120, such that the GNN 120 can generate the content of the table in the form of editable electronic data. Specifically, the statement “the table layout can be identified by the GNN 120” means that the GNN 120 can extract information from the table in the correct reading order. As such, the generated content of the table may include at least one data set having a key and at least one value, in which the value can match the key. Herein, the phrase “the value can match the key” means that the value is linked to the key based on the image features, semantic features, and/or spatial features. At the end of S80, by the afore-described features and adjacency matrices, the content of the table can be extracted as structured data and preserved in XML, which is advantageous for constructing indexes to aid searching and for providing quantitative data.
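As a minimal illustration of preserving the extracted key-value content in XML, the sketch below uses Python's standard xml.etree.ElementTree module; the tag names are assumptions and do not represent a prescribed schema of the present embodiments.

```python
import xml.etree.ElementTree as ET

def table_to_xml(key_value_pairs):
    """Illustrative serialization of extracted table content to XML;
    the tag names used here are assumptions, not a fixed schema."""
    root = ET.Element("table")
    for key, value in key_value_pairs:
        entry = ET.SubElement(root, "entry")
        ET.SubElement(entry, "key").text = key
        ET.SubElement(entry, "value").text = value
    return ET.tostring(root, encoding="unicode")

# Usage: two key-value pairs recovered from the grouping step.
print(table_to_xml([("Item No", "1"), ("Description", "Apple")]))
```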
The electronic embodiments disclosed herein may be implemented using computing devices, computer processors, or electronic circuitries including but not limited to application specific integrated circuits (ASIC), field programmable gate arrays (FPGA), and other programmable logic devices configured or programmed according to the teachings of the present disclosure. Computer instructions or software codes running in the computing devices, computer processors, or programmable logic devices can readily be prepared by practitioners skilled in the software or electronic art based on the teachings of the present disclosure.
All or portions of the electronic embodiments may be executed in one or more computing devices including server computers, personal computers, laptop computers, mobile computing devices such as smartphones and tablet computers.
The electronic embodiments include computer storage media having computer instructions or software codes stored therein which can be used to program computers or microprocessors to perform any of the processes of the present invention. The storage media can include, but are not limited to, floppy disks, optical discs, Blu-ray Discs, DVDs, CD-ROMs, magneto-optical disks, ROMs, RAMs, flash memory devices, or any type of media or devices suitable for storing instructions, codes, and/or data.
Various embodiments of the present invention also may be implemented in distributed computing environments and/or Cloud computing environments, wherein the whole or portions of machine instructions are executed in distributed fashion by one or more processing devices interconnected by a communication network, such as an intranet, Wide Area Network (WAN), Local Area Network (LAN), the Internet, and other forms of data transmission medium.
The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art.
The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated.