This application claims priority to Chinese Patent Application No. 202011501307.1, filed with the China National Intellectual Property Administration (CNIPA) on Dec. 18, 2020, the contents of which are incorporated herein by reference in their entirety.
Embodiments of the present disclosure relate to the field of computer technology, particularly to the field of artificial intelligence technology such as computer vision, natural language processing and deep learning, and more particularly to a method, apparatus and device for recognizing a bill, and to a computer readable storage medium.
A bill is an important text carrier for carrying structured information, and is widely used in various business scenarios. For different types of bills, the layouts thereof may be complicated, and the items thereon are numerous and diverse. In addition, a large number of bills are used for reimbursement and review every day, resulting in a high labor cost and a low accounting efficiency.
At present, the structured information may be extracted from the bills by the following approaches: 1) a concerned field is classified and positioned by means of detection, to obtain the structured information; or 2) the structured information is obtained by performing a named entity recognition (NER) or a relationship connection on text information obtained through an optical character recognition (OCR) analysis.
Embodiments of the present disclosure provide a method, apparatus and device for recognizing a bill. Embodiments of the present disclosure also provide a computer readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for recognizing a bill, comprising: acquiring a bill image; inputting the bill image into a feature extraction network layer of a pre-trained bill recognition model, to obtain a bill key field feature map and a bill key field value feature map of the bill image; inputting the bill key field feature map into a first head network layer of the bill recognition model, to obtain a bill key field; processing the bill key field value feature map by using a second head network layer of the bill recognition model, to obtain a bill key field value, wherein the feature extraction network layer is respectively connected with the first head network layer and the second head network layer; and generating structured information of the bill image based on the bill key field and the bill key field value.
In a second aspect, an embodiment of the present disclosure provides an apparatus for recognizing a bill, comprising: an information acquiring module, configured to acquire a bill image; a first obtaining module, configured to input the bill image into a feature extraction network layer of a pre-trained bill recognition model, to obtain a bill key field feature map and a bill key field value feature map of the bill image; a second obtaining module, configured to input the bill key field feature map into a first head network layer of the bill recognition model, to obtain a bill key field; a third obtaining module, configured to process the bill key field value feature map by using a second head network layer of the bill recognition model, to obtain a bill key field value, wherein the feature extraction network layer is respectively connected with the first head network layer and the second head network layer; and an information generating module, configured to generate structured information of the bill image based on the bill key field and the bill key field value.
In a third aspect, an embodiment of the present disclosure provides an electronic device, and the electronic device includes: at least one processor; and a memory communicatively connected with the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to execute the method for recognizing a bill as described in any one of the implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer readable storage medium storing computer instructions, where the computer instructions cause a computer to execute the method for recognizing a bill as described in any one of the implementations of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method for recognizing a bill as described in any one of the implementations of the first aspect.
According to the method, apparatus and device for recognizing a bill and the storage medium that are provided in the embodiments of the present disclosure, the bill image is first acquired. Afterwards, the bill image is inputted into the feature extraction network layer of the pre-trained bill recognition model, to obtain the bill key field feature map and the bill key field value feature map of the bill image. Next, the bill key field feature map is inputted into the first head network layer of the bill recognition model, to obtain the bill key field. Then, the bill key field value feature map is processed by using the second head network layer of the bill recognition model, to obtain the bill key field value, the feature extraction network layer being respectively connected with the first head network layer and the second head network layer. Finally, the structured information of the bill image is generated based on the bill key field and the bill key field value. According to the present disclosure, the bill key field feature map and the bill key field value feature map are obtained by using the feature extraction network layer of the bill recognition model. Then, based on the first head network layer and the second head network layer of the bill recognition model, the recognition for the bill key field and the bill key field value included in the structured information in the bill image may be accurately implemented.
It should be understood that the content described in this part is not intended to identify key or important features of the embodiments of the present disclosure, and is not used to limit the scope of the present disclosure. Other features of the present disclosure will be easily understood through the following description.
After reading detailed descriptions for non-limiting embodiments given with reference to the following accompanying drawings, other features, objectives and advantages of the present disclosure will be more apparent. The accompanying drawings are used for a better understanding of the scheme, and do not constitute a limitation to the present disclosure. Here:
Exemplary embodiments of the present disclosure are described below in combination with the accompanying drawings, and various details of the embodiments of the present disclosure are included in the description to facilitate understanding, and should be considered as exemplary only. Accordingly, it should be recognized by one of ordinary skills in the art that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Also, for clarity and conciseness, descriptions for well-known functions and structures are omitted in the following description.
It should be noted that the embodiments in the present disclosure and the features in the embodiments may be combined with each other on a non-conflict basis. The present disclosure will be described below in detail with reference to the accompanying drawings and in combination with the embodiments.
As shown in
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 via the network 104 to receive or send a message (e.g., the terminal devices 101, 102 and 103 may acquire a trained bill recognition model from the server 105, or the server 105 may acquire a bill image from the terminal devices 101, 102 and 103) or the like. Various communication client applications (e.g., an image processing application) may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may input the bill image into a feature extraction network layer of a pre-trained bill recognition model, to obtain a bill key field feature map and a bill key field value feature map of the bill image. Then, the terminal devices 101, 102 and 103 may input the above bill key field feature map into a first head network layer of the above bill recognition model to obtain a bill key field. Finally, the terminal devices 101, 102 and 103 may process the bill key field value feature map by using a second head network layer of the bill recognition model, to obtain a bill key field value.
The terminal devices 101, 102 and 103 may be hardware or software. When being the hardware, the terminal devices 101, 102 and 103 may be various electronic devices supporting information interaction, the electronic devices including, but not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, and the like. When being the software, the terminal devices 101, 102 and 103 may be installed in the above listed electronic devices. The terminal devices may be implemented as a plurality of pieces of software or a plurality of software modules, or as a single piece of software or a single software module, which will not be particularly defined here.
The server 105 may be a server providing various services. For example, the server 105 may be a backend server that recognizes a bill. The server 105 may first input the bill image into the feature extraction network layer of the pre-trained bill recognition model, to obtain the bill key field feature map and the bill key field value feature map of the bill image. Then, the server 105 may input the above bill key field feature map into the first head network layer of the above bill recognition model to obtain the bill key field. Finally, the server 105 may process the bill key field value feature map by using the second head network layer of the bill recognition model, to obtain the bill key field value.
It should be noted that the server 105 may be hardware or software. When being the hardware, the server 105 may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When being the software, the server 105 may be implemented as a plurality of pieces of software or a plurality of software modules (e.g., software or software modules for providing a distributed service), or may be implemented as a single piece of software or a single software module, which will not be particularly defined here.
It should be noted that the method for recognizing a bill provided in the embodiments of the present disclosure may be performed by the terminal devices 101, 102 and 103, or may be performed by the server 105.
It should also be noted that the terminal devices 101, 102 and 103 may locally store the trained bill recognition model. At this time, the network 104 and the server 105 may not be provided in the exemplary system architecture 100.
It should also be noted that the server 105 may also locally store the bill image, and may acquire the bill image locally. At this time, the terminal devices 101, 102 and 103 and the network 104 may not be provided in the exemplary system architecture 100.
It should be appreciated that the numbers of the terminal devices, the networks, and the servers in
Further referring to
Step 201, acquiring a bill image.
In this embodiment, an executing body (e.g., the server 105 or the terminal devices 101, 102 and 103 shown in
The above paper bill may include a medical bill, a tax invoice, a traffic bill, and the like. For example, the medical bill includes at least information of a person in treatment, for example, a personal name and an identity number, and may further include information of a treatment, for example, a treatment date and a treatment hospital.
Step 202, inputting the bill image into a feature extraction network layer of a pre-trained bill recognition model, to obtain a bill key field feature map and a bill key field value feature map of the bill image.
In this embodiment, the above executing body may input the bill image into the feature extraction network layer of the pre-trained bill recognition model, to obtain the bill key field feature map and the bill key field value feature map of the bill image. The above bill key field feature map may be a feature map including all features in the bill image that are related to a bill key field feature. The above bill key field value feature map may be a feature map including all features in the bill image that are related to a bill key field value feature. The above bill recognition model may include a deep learning network (DLN), for example, a convolutional neural network (CNN). Here, the above bill recognition model may generally include the feature extraction network layer, a first head network layer, and a second head network layer. The above feature extraction network layer may be used to extract, from the above bill image, the bill key field feature map and the bill key field value feature map of the bill image.
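For illustration only, the overall topology described above (one shared feature extraction network layer feeding two separate head network layers) may be sketched as follows. This is a minimal NumPy sketch; the fixed projections stand in for learned convolutions and are purely illustrative assumptions, not the disclosed model:

```python
import numpy as np

def feature_extraction_layer(image):
    # Shared layer: one input image yields both a bill key field
    # feature map and a bill key field value feature map.
    key_field_map = image * 0.5 + 0.1       # stand-in for learned convolutions
    key_field_value_map = image * 0.8 - 0.1
    return key_field_map, key_field_value_map

def first_head_layer(key_field_map):
    # First head: consumes only the bill key field feature map.
    return key_field_map.mean()

def second_head_layer(key_field_value_map):
    # Second head: consumes only the bill key field value feature map.
    return key_field_value_map.mean()

image = np.random.default_rng(0).random((32, 32))
key_field_map, key_field_value_map = feature_extraction_layer(image)
bill_key_field_score = first_head_layer(key_field_map)
bill_key_field_value_score = second_head_layer(key_field_value_map)
```

The point of the sketch is structural: the two heads never see the image directly; each consumes only its own feature map produced by the shared extraction layer.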
Step 203, inputting the bill key field feature map into a first head network layer of the bill recognition model to obtain a bill key field.
In this embodiment, the above executing body may input the bill key field feature map into the first head network layer of the above bill recognition model, to obtain the bill key field in the bill image. The above first head network layer may be used to determine the bill key field according to the bill key field feature map.
In a particular example, the first head network layer may include a plurality of convolutional layers. The bill key field feature map is inputted into the first head network layer to obtain a geometric map and a confidence score map. Then, the bill key field is determined based on the geometric map (geo map) and the confidence score map (score map).
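As a hedged sketch of how such a geometric map and confidence score map might be decoded into boxes, the following assumes an EAST-style convention (per-pixel distances to the four box edges); the threshold and the geo-map layout are illustrative assumptions, not specified by this disclosure:

```python
import numpy as np

def decode_head_output(score_map, geo_map, score_thresh=0.8):
    # score_map: (H, W) per-pixel confidence; geo_map: (H, W, 4)
    # per-pixel distances to the top, bottom, left and right box edges.
    boxes = []
    ys, xs = np.nonzero(score_map > score_thresh)
    for y, x in zip(ys, xs):
        top, bottom, left, right = geo_map[y, x]
        boxes.append((x - left, y - top, x + right, y + bottom))
    return boxes

score = np.zeros((8, 8))
geo = np.zeros((8, 8, 4))
score[4, 4] = 0.9            # one confident pixel
geo[4, 4] = [1, 1, 2, 2]     # distances to the four box edges
boxes = decode_head_output(score, geo)
# one box recovered around pixel (4, 4)
```

In practice, overlapping boxes recovered this way would be merged (e.g. by non-maximum suppression) before the bill key field is read out.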
Step 204, processing the bill key field value feature map by using a second head network layer of the bill recognition model, to obtain a bill key field value, the feature extraction network layer being respectively connected with the first head network layer and the second head network layer.
In this embodiment, the above executing body may input the bill key field value feature map into the second head network layer of the above bill recognition model to obtain the bill key field value. The above second head network layer may be used to determine the bill key field value according to the bill key field value feature map.
In a particular example, the second head network layer may include a plurality of convolutional layers. The bill key field value feature map is inputted into the second head network layer, to obtain a geometric map and a confidence score map. Then, the bill key field value is determined based on the geometric map (geo map) and the confidence score map (score map).
It should be noted that, as compared with detection of the bill key field performed using a single head network, the recognition of the bill key field and the bill key field value performed using the first head network layer and the second head network layer improves the classification accuracy by 2.5% and the detection accuracy by 2%.
Step 205, generating structured information of the bill image based on the bill key field and the bill key field value.
In this embodiment, the above executing body may position the bill key field and the bill key field value in the bill image through an optical character recognition (OCR), and then may generate the structured information of the bill image according to the bill key field and the bill key field value.
In a particular example, the position of the bill key field corresponding to “Name” in the bill image is A1, the position of the bill key field value corresponding to “Xiaowang” in the bill image is A2, the position of the bill key field corresponding to “Age” in the bill image is B1, and the position of the bill key field value corresponding to “18 years old” in the bill image is B2. Then, the above positions are respectively compared with a preset position, to determine that the bill key field corresponding to “Name” and the bill key field value corresponding to “Xiaowang” constitute a key value pair, and the bill key field corresponding to “Age” and the bill key field value corresponding to “18 years old” constitute a key value pair. Accordingly, the structured information of the bill image is generated.
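The position comparison described in this example may be sketched as a nearest-center pairing. This is a simplified illustrative assumption: the disclosure compares positions against preset positions, whereas the sketch below pairs each field with the closest value box:

```python
def pair_by_distance(fields, values):
    # Pair each key field with the nearest remaining key field value,
    # comparing the centers of their bounding boxes (x1, y1, x2, y2).
    def center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2, (y1 + y2) / 2)
    pairs, remaining = {}, dict(values)
    for name, fbox in fields.items():
        fx, fy = center(fbox)
        best = min(
            remaining,
            key=lambda v: (center(remaining[v])[0] - fx) ** 2
                        + (center(remaining[v])[1] - fy) ** 2,
        )
        pairs[name] = best
        del remaining[best]
    return pairs

fields = {"Name": (10, 10, 50, 20), "Age": (10, 40, 50, 50)}
values = {"Xiaowang": (60, 10, 120, 20), "18 years old": (60, 40, 120, 50)}
pairs = pair_by_distance(fields, values)
# pairs: {"Name": "Xiaowang", "Age": "18 years old"}
```

Here "Name" pairs with "Xiaowang" because their boxes share a row, and "Age" with "18 years old" likewise, yielding the key value pairs of the structured information.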
In this embodiment, the above executing body may also generate the structured information of the bill image according to a preset bill key field and a preset bill key field value.
In a particular example, an image of a medical bill is taken as an example. The preset bill key field refers to “Treatment category,” and the preset bill key field value refers to “On-the-job treatment,” “Self-payment,” or the like. The preset bill key field refers to “Hospital,” and the preset bill key field value refers to “** Hospital.” Based on the above bill recognition model, it is recognized that the bill key field is “Hospital” and the bill key field value is “Hospital A.” Then, the above executing body compares the recognized bill key field “Hospital” and the recognized bill key field value “Hospital A” with the above preset bill key field and the above preset bill key field value, to determine that “Hospital” and “Hospital A” are a key value pair, thus implementing the generation for the structured information of the bill image.
According to the method and apparatus for recognizing a bill, the device and the storage medium that are provided in the embodiments of the present disclosure, the bill image is first acquired. Afterwards, the bill image is inputted into the feature extraction network layer of the pre-trained bill recognition model, to obtain the bill key field feature map and the bill key field value feature map of the bill image. Next, the bill key field feature map is inputted into the first head network layer of the bill recognition model to obtain the bill key field. Then, the bill key field value feature map is processed by using the second head network layer of the bill recognition model to obtain the bill key field value, the feature extraction network layer being respectively connected with the first head network layer and the second head network layer. Finally, the structured information of the bill image is generated based on the bill key field and the bill key field value. According to the present disclosure, the feature extraction network layer of the bill recognition model is used to obtain the bill key field feature map and the bill key field value feature map. Then, based on the first head network layer and the second head network layer of the bill recognition model, the recognition for the bill key field and the bill key field value included in the structured information in the bill image may be accurately implemented.
In some alternative implementations of this embodiment, the feature extraction network layer includes a backbone network layer and a feature pyramid network layer.
In this implementation, the above executing body may use the backbone network layer to perform a feature extraction on the bill image, and then, use the feature pyramid network (FPN) layer to extract the feature extracted through the backbone network layer, to obtain the bill key field feature map and the bill key field value feature map.
In a particular example, in
The backbone network layer may include, but not limited to, a ResNeSt-200 network and a ResNeSt-50 network. The feature pyramid network layer may include, but not limited to, an n-layer (n is a positive integer) feature pyramid network layer. For example, the bill image is processed using the n-layer feature pyramid network layer, and thus, feature maps of n resolutions may be obtained.
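The n resolutions produced by an n-layer pyramid may be illustrated as follows; 2x2 average pooling is used here as a stand-in for the learned pyramid layers, which is an assumption for illustration only:

```python
import numpy as np

def pyramid_resolutions(feature_map, n):
    # Produce n feature maps whose resolution halves at each level,
    # mimicking the n feature maps of an n-layer feature pyramid.
    maps = [feature_map]
    for _ in range(n - 1):
        f = maps[-1]
        h, w = f.shape[0] // 2, f.shape[1] // 2
        maps.append(f[: h * 2, : w * 2].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return maps

levels = pyramid_resolutions(np.ones((64, 64)), 4)
# resolutions: 64x64, 32x32, 16x16, 8x8
```

Each successive map covers the same image at a coarser resolution, which is what lets the pyramid capture features at different scales and levels.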
It should be noted that the n feature maps outputted by the n-layer feature pyramid network layer decrease in resolution from the lower layer to the higher layer and, in the drawing, are similar in shape to a pyramid. In these feature maps, the features outputted by the backbone network layer are extracted at different scales and levels.
In this implementation, the extraction for the bill key field feature map and the bill key field value feature map in the bill image is implemented through the backbone network layer and the feature pyramid network layer.
In some alternative implementations of this embodiment, the bill recognition model further includes a first convolutional layer. The feature extraction network layer, the first convolutional layer and the first head network layer are connected in sequence.
In this implementation, the bill recognition model may further include the first convolutional layer. The number of first convolutional layers may be one or more.
In a particular example, when the number of the first convolutional layers is more than one, a plurality of neurons may be provided on the first convolutional layers. The input of each neuron is connected with the local receptive field of a previous convolutional layer. A convolution operation is performed on the data of the local receptive field of the previous convolutional layer, to extract a feature of the local receptive field. Once the feature of the local receptive field is extracted, the positional relationship between the feature and another feature is determined accordingly. Then, feature mapping is performed to obtain feature information, and the feature information is outputted to a next convolutional layer to proceed with similar processes.
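The local-receptive-field computation described above may be sketched as a plain "valid" 2-D convolution (an illustrative NumPy sketch, not the disclosed layer; real convolutional layers learn the kernel and typically add padding, stride and bias):

```python
import numpy as np

def conv2d_valid(feature_map, kernel):
    # Each output neuron reads one local receptive field of the previous
    # layer and applies the same convolution kernel to it.
    kh, kw = kernel.shape
    h = feature_map.shape[0] - kh + 1
    w = feature_map.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(feature_map[i : i + kh, j : j + kw] * kernel)
    return out

out = conv2d_valid(np.ones((4, 4)), np.ones((2, 2)))
# every 2x2 receptive field of ones sums to 4.0; out has shape (3, 3)
```

Because the same kernel slides across all receptive fields, the positional relationship between extracted features is preserved in the output feature map, as the text above describes.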
In this implementation, a convolution operation may be performed on the bill key field feature map through the first convolutional layer, to further enhance a mapping relationship between the bill key field feature map and the bill key field, thereby implementing the accurate recognition for the bill key field.
In some alternative implementations of this embodiment, the bill recognition model further includes a second convolutional layer. Here, the feature extraction network layer, the second convolutional layer and the second head network layer are connected in sequence.
In this implementation, the bill recognition model may further include the second convolutional layer. The number of second convolutional layers may be one or more.
In a particular example, when the number of the second convolutional layers is more than one, a plurality of neurons may be provided on the second convolutional layers. The input of each neuron is connected with the local receptive field of a previous convolutional layer. A convolution operation is performed on the data of the local receptive field of the previous convolutional layer, to extract a feature of the local receptive field. Once the feature of the local receptive field is extracted, the positional relationship between the feature and another feature is determined accordingly. Then, feature mapping is performed to obtain feature information, and the feature information is outputted to a next convolutional layer to proceed with similar processes.
In this implementation, a convolution operation may be performed on the bill key field value feature map through the second convolutional layer, to further enhance a mapping relationship between the bill key field value feature map and the bill key field value, thereby implementing the accurate recognition for the bill key field value.
In some alternative implementations of this embodiment, the bill recognition model further includes the first convolutional layer and the second convolutional layer. Here, the feature extraction network layer, the first convolutional layer and the first head network layer are connected in sequence, and the feature extraction network layer, the second convolutional layer and the second head network layer are connected in sequence.
In this implementation, the bill recognition model may further include the first convolutional layer and the second convolutional layer. Here, the number of first convolutional layers and/or the number of second convolutional layers may be one or more.
In this implementation, the convolution operation may be performed on the bill key field feature map through the first convolutional layer and the convolution operation may be performed on the bill key field value feature map through the second convolutional layer, to further enhance the mapping relationship between the bill key field feature map and the bill key field and the mapping relationship between the bill key field value feature map and the bill key field value, thereby implementing the accurate recognition for the bill key field and the bill key field value.
For ease of understanding, an application scenario in which the method for recognizing a bill according to the embodiment of the present disclosure may be implemented is provided below. As shown in
Further referring to
Step 501, acquiring a bill image.
Step 502, inputting the bill image into a feature extraction network layer of a pre-trained bill recognition model, to obtain a bill key field feature map and a bill key field value feature map of the bill image.
Step 503, inputting the bill key field feature map into a first head network layer of the bill recognition model to obtain a bill key field.
In this embodiment, the particular operations of steps 501-503 are described in detail in steps 201-203 in the embodiment shown in
Step 504, inputting the bill key field feature map and the bill key field value feature map into a feature synthesis network layer of the bill recognition model, to obtain a synthesized feature map.
In this embodiment, the above executing body may input the bill key field feature map and the bill key field value feature map into the feature synthesis network layer of the bill recognition model, to obtain the synthesized feature map.
In a particular example, it is assumed that the bill key field feature map and the bill key field value feature map are both N-dimensional feature maps. The value of the first dimension of the bill key field feature map is added to the value of the first dimension of the bill key field value feature map, the value of the second dimension of the bill key field feature map is added to the value of the second dimension of the bill key field value feature map, and so on, until the values of the N-th dimensions of the two feature maps are added together, thereby obtaining the N-dimensional synthesized feature map.
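The dimension-by-dimension addition performed by the feature synthesis network layer is simply element-wise addition, which may be illustrated as follows (illustrative values only):

```python
import numpy as np

# Element-wise (dimension-by-dimension) addition of the two feature
# maps produces the synthesized feature map, as described above.
key_field_map = np.array([0.2, 0.5, 0.1])
key_field_value_map = np.array([0.3, 0.1, 0.4])
synthesized = key_field_map + key_field_value_map
# synthesized is approximately [0.5, 0.6, 0.5]
```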
Step 505, inputting the synthesized feature map into a second head network layer of the bill recognition model to obtain a bill key field value, the feature extraction network layer being respectively connected with the feature synthesis network layer and the second head network layer, and the feature synthesis network layer being connected with a first head network layer.
In this embodiment, the above executing body may input the feature map synthesized in step 504 into the second head network layer of the bill recognition model to obtain the bill key field value. The above second head network layer may be used to determine the bill key field value according to the synthesized feature map.
Step 506, generating structured information of the bill image based on the bill key field and the bill key field value.
In this embodiment, the particular operations of step 506 are described in detail in step 205 in the embodiment shown in
It may be seen from
In some alternative implementations of this embodiment, the feature synthesis network layer includes an adder, the adder being connected with the feature extraction network layer, the feature synthesis network layer and the second head network layer.
In this implementation, the feature synthesis network layer may include, but not limited to, the adder and a feature synthesis model.
Here, when training the feature synthesis model, the above executing body may train an initial model by using the synthesized feature map as an input of the feature synthesis model and the tag corresponding to the input as an expected output, to obtain the feature synthesis model. Here, the initial model may include a deep learning network (DLN), for example, a convolutional neural network (CNN).
In a particular example, in
In this implementation, the bill key field feature map and the bill key field value feature map are synthesized through the adder, and thus, the accuracy of the recognition for the bill key field value is improved.
Further referring to
Step 701, acquiring a training sample set, a training sample in the training sample set including a sample bill image and a corresponding sample structured information tag.
In this embodiment, the executing body of the training step may be the same as or different from the executing body of the method for recognizing a bill. If the executing bodies are the same, after training and obtaining the bill recognition model, the executing body of the training step may locally store the model structure information of the trained bill recognition model and the parameter value of a model parameter. If the executing bodies are different, after training and obtaining the bill recognition model, the executing body of the training step may send the model structure information of the trained bill recognition model and the parameter value of the model parameter to the executing body of the method for recognizing a bill.
In this embodiment, the executing body of the training step may acquire the training sample set in various ways. As an example, a training sample set stored in a database server may be acquired from the database server by means of a wired connection or a wireless connection. As another example, a training sample set may be collected by a terminal. The training sample in the above training sample set includes the sample bill image and the corresponding sample structured information tag.
Step 702, training an initial model by using the sample bill image as an input of a bill recognition model and using the sample structured information tag as an output of the bill recognition model, to obtain the bill recognition model.
In this embodiment, after obtaining the structured information of the sample bill image and the sample structured information tag, the above executing body may train the initial model by using the structured information of the sample bill image and the sample structured information tag, to obtain the bill recognition model. During the training, the executing body may use the structured information of the sample bill image as the input of the bill recognition model and use the inputted corresponding sample structured information tag as an expected output, to obtain the bill recognition model. The above initial model may be a probability model, a classification model or another classifier in the existing technology or a technology developed in the future. For example, the initial model may include any one of: an extreme gradient boosting tree model (XGBoost), a logistic regression model (LR), a deep neural network model (DNN), and a gradient boosting decision tree model (GBDT).
According to the method provided in the embodiment of the present disclosure, training is performed based on the sample bill image and the sample structured information tag to obtain the bill recognition model, thus implementing the accurate recognition for the bill key field and the bill key field value included in the structured information in the bill image.
In some alternative implementations of this embodiment, training the initial model by using the sample bill image as an input of the bill recognition model and using the sample structured information tag as the output of the bill recognition model to obtain the bill recognition model comprises: performing, for the training sample in the training sample set, the following training: inputting the sample bill image of the training sample into a feature extraction network layer of the initial model to obtain a sample bill key field feature map and a sample bill key field value feature map of the sample bill image; inputting the sample bill key field feature map into a first head network layer of the bill recognition model to obtain a sample bill key field; processing the sample bill key field value feature map by using a second head network layer of the bill recognition model to obtain a sample bill key field value; generating the structured information of the sample bill image based on the sample bill key field and the sample bill key field value; determining a total loss function value based on the structured information of the sample bill image and the sample structured information tag; using the initial model as the bill recognition model in response to the total loss function value satisfying a target value; and continuing to perform the training in response to the total loss function value not satisfying the target value. A plurality of iterations are repeated until the bill recognition model is trained.
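The stop-or-continue training loop described above may be sketched in miniature. A least-squares toy model stands in for the full bill recognition model here; the model, learning rate and loss are all illustrative assumptions, not the disclosed training procedure:

```python
import numpy as np

def train(samples, tags, target_loss, max_iters=1000):
    # Repeat the training step until the total loss function value
    # satisfies the target value; otherwise continue training.
    w, lr = 0.0, 0.1
    loss = float("inf")
    for _ in range(max_iters):
        preds = w * samples
        loss = np.mean((preds - tags) ** 2)   # total loss function value
        if loss <= target_loss:               # loss satisfies the target: stop
            break
        grad = 2 * np.mean((preds - tags) * samples)
        w -= lr * grad                        # otherwise, continue the training
    return w, loss

samples = np.array([1.0, 2.0, 3.0])
tags = 2.0 * samples                          # stand-in "sample tags"
w, loss = train(samples, tags, target_loss=1e-6)
```

The structure mirrors the text: compute the loss from the model output and the tag, stop when the loss satisfies the target value, and otherwise update the parameters and iterate.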
In this implementation, the executing body of the training step may input the training sample in the training sample set into the feature extraction network layer of the initial model. By detecting and analyzing the sample bill image of the training sample, the sample bill key field feature map and the sample bill key field value feature map of the sample bill image may be obtained. Here, the initial model generally includes the feature extraction network layer, a first head network layer and a second head network layer. The feature extraction network layer of the initial model may be used to extract, from the sample bill image, the sample bill key field feature map and the sample bill key field value feature map.
Here, the initial model may be various existing neural network models created based on machine learning techniques. The neural network models may have various existing neural network structures (e.g., a VGGNet (visual geometry group network), and a ResNet (residual neural network)).
In this implementation, the executing body of the training step may input the sample bill key field feature map into the first head network layer of the initial model, to obtain the sample bill key field. The first head network layer of the initial model may be used to obtain the sample bill key field according to the sample bill key field feature map.
In this implementation, the executing body of the training step may input the sample bill key field value feature map into the second head network layer of the initial model to obtain the sample bill key field value. The second head network layer of the initial model may be used to obtain the sample bill key field value according to the sample bill key field value feature map.
In this implementation, the executing body of the training step may determine the total loss function value based on the structured information of the sample bill image and the sample structured information tag.
In this embodiment, a loss function is generally used to measure the degree of inconsistency between a predicted value of the model and a true value (e.g., a key value pair tag). In general, the smaller a loss function value is, the better the robustness of the model is. The loss function may be set according to actual requirements. For example, the above loss function may include a cross entropy loss function.
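As an illustration of the cross entropy loss mentioned above (not the embodiment's exact formulation), for a single sample with a one-hot true label it reduces to the negative log of the probability the model assigns to the true class:

```python
import math

def cross_entropy(predicted_probs, true_index):
    # Negative log-likelihood of the true class; the smaller the value,
    # the closer the predicted distribution is to the true (one-hot) value.
    return -math.log(predicted_probs[true_index])
```

A confident correct prediction yields a small loss (e.g. `-ln 0.9 ≈ 0.105`), while an uncertain prediction yields a larger one, matching the "smaller loss, better model" intuition above.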
In this implementation, the executing body of the training step may compare the total loss function value with a preset target value, and determine whether the training for the initial model is completed according to the comparison result. If the total loss function value satisfies the preset target value, the executing body of the training step may determine the initial model as the bill recognition model. The above target value may generally be used to indicate the degree of inconsistency between the predicted value and the true value. That is, when the total loss function value reaches the target value, it may be considered that the predicted value is close or approximate to the true value. The target value may be set according to actual requirements.
In this embodiment, the executing body of the training step may continue to perform the training when the total loss function value does not satisfy the target value.
In this implementation, the bill key field feature map and the bill key field value feature map are obtained using the feature extraction network layer of the initial model. Afterwards, the sample bill key field and the sample bill key field value of the sample bill image are obtained based on the first head network layer and the second head network layer of the initial model. Next, the structured information of the sample bill image is generated based on the sample bill key field and the sample bill key field value. Then, the total loss function value is determined based on the structured information of the sample bill image and the sample structured information tag. Finally, the training for the initial model is implemented based on the total loss function value and the target value, to obtain the bill recognition model. Thus, the accurate recognition for the bill key field and the bill key field value included in the structured information in the bill image is implemented.
Here, the executing body of the training step may store the generated bill recognition model locally, or may send the generated bill recognition model to another electronic device.
According to the method provided in the above embodiment of the present disclosure, whether the training for the initial model is completed is determined through the comparison result between the total loss function value and the target value. When the total loss function value reaches the target value, it may be considered that the predicted value is close or approximate to the true value. At this time, the initial model may be determined as the bill recognition model. The model generated in this way has a high robustness.
Further referring to
As shown in
In this embodiment, for particular processes of the information acquiring module 801, the first obtaining module 802, the second obtaining module 803, the third obtaining module 804 and the information generating module 805 in the apparatus 800 for recognizing a bill, and their technical effects, reference may be made to the related descriptions of steps 201-205 in the corresponding embodiment of
In some alternative implementations of this embodiment, the apparatus for recognizing a bill further includes: a feature synthesizing module, configured to input the bill key field feature map and the bill key field value feature map into a feature synthesis network layer of the bill recognition model, to obtain a synthesized feature map. The third obtaining module 804 is further configured to: input the synthesized feature map into the second head network layer of the bill recognition model to obtain the bill key field value, the feature extraction network layer being respectively connected with the feature synthesis network layer and the second head network layer, and the feature synthesis network layer being connected with the first head network layer.
In some alternative implementations of this embodiment, the feature extraction network layer includes a backbone network layer and a feature pyramid network layer.
In some alternative implementations of this embodiment, the bill recognition model further includes a first convolutional layer, and the feature extraction network layer, the first convolutional layer and the first head network layer are connected in sequence.
In some alternative implementations of this embodiment, the bill recognition model further includes a second convolutional layer, and the feature extraction network layer, the second convolutional layer and the second head network layer are connected in sequence.
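The sequential connections described in these implementations (feature extraction network layer, then a convolutional layer, then a head network layer, in two parallel branches) can be sketched as function composition. All names below are hypothetical placeholders for the embodiment's layers:

```python
def recognize(bill_image, feature_extractor, conv1, head1, conv2, head2):
    """Wire the layers: the feature extraction network layer feeds two
    parallel branches, each a convolutional layer followed by a head
    network layer."""
    key_features, value_features = feature_extractor(bill_image)
    bill_key_field = head1(conv1(key_features))            # first branch
    bill_key_field_value = head2(conv2(value_features))    # second branch
    return bill_key_field, bill_key_field_value
```

Any callables with matching shapes can stand in for the layers, which makes the wiring itself easy to check in isolation.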
In some alternative implementations of this embodiment, the feature synthesis network layer includes an adder, the adder being connected with the feature extraction network layer, the feature synthesis network layer and the second head network layer.
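A minimal sketch of the adder: element-wise addition of the two feature maps, shown here on plain nested lists as an assumption (the real layers operate on tensors):

```python
def synthesize_features(key_feature_map, value_feature_map):
    # Element-wise sum of the bill key field feature map and the
    # bill key field value feature map, as performed by the adder.
    return [
        [k + v for k, v in zip(key_row, value_row)]
        for key_row, value_row in zip(key_feature_map, value_feature_map)
    ]
```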
In some alternative implementations of this embodiment, the apparatus for recognizing a bill further includes: a sample acquiring module (not shown), configured to acquire a training sample set, a training sample in the training sample set including a sample bill image and a corresponding sample structured information tag; and a model training module (not shown), configured to train an initial model by using the sample bill image as an input of the bill recognition model and using the sample structured information tag as an output of the bill recognition model, to obtain the bill recognition model.
In some alternative implementations of this embodiment, the model training module is further configured to: perform, for the training sample in the training sample set, following training: inputting the sample bill image of the training sample into a feature extraction network layer of the initial model, to obtain a sample bill key field feature map and a sample bill key field value feature map of the sample bill image; inputting the sample bill key field feature map into the first head network layer of the bill recognition model, to obtain a sample bill key field; processing the sample bill key field value feature map by using the second head network layer of the bill recognition model, to obtain a sample bill key field value; generating structured information of the sample bill image based on the bill key field and the bill key field value; determining a total loss function value based on the structured information and the sample structured information tag; using the initial model as the bill recognition model in response to the total loss function value satisfying a target value; and continuing to perform the training in response to the total loss function value not satisfying the target value.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium and a computer program product.
As shown in
As shown in
As shown in
The memory 902 is a non-transitory computer readable storage medium provided by the present disclosure. The memory stores instructions executable by at least one processor, so that the at least one processor performs the method for recognizing a bill provided by the present disclosure. The non-transitory computer readable storage medium of the present disclosure stores computer instructions for causing a computer to perform the method for recognizing a bill provided by the present disclosure.
The memory 902, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs and modules, such as program instructions/modules corresponding to the method for recognizing a bill in the embodiments of the present disclosure (for example, the information acquiring module 801, the first obtaining module 802, the second obtaining module 803, the third obtaining module 804 and the information generating module 805 shown in
The memory 902 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required by at least one function; and the storage data area may store data created by the use of the electronic device according to the method for recognizing a bill, etc. In addition, the memory 902 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices. In some embodiments, the memory 902 may optionally include memories remotely provided with respect to the processor 901, and these remote memories may be connected to the electronic device of the method for recognizing a bill through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
The electronic device of the method for recognizing a bill may further include: an input apparatus 903 and an output apparatus 904. The processor 901, the memory 902, the input apparatus 903, and the output apparatus 904 may be connected through a bus or in other methods. In
The input apparatus 903 may receive input digital or character information, and generate key signal inputs related to user settings and function control of the electronic device of the method for recognizing a bill, and may include input apparatuses such as a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output apparatus 904 may include a display device, an auxiliary lighting apparatus (for example, an LED), a tactile feedback apparatus (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
Various embodiments of the systems and technologies described herein may be implemented in digital electronic circuit systems, integrated circuit systems, dedicated ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system that includes at least one programmable processor. The programmable processor may be a dedicated or general-purpose programmable processor, and may receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmit the data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions of the programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms “machine readable medium” and “computer readable medium” refer to any computer program product, device, and/or apparatus (for example, a magnetic disk, an optical disk, a memory, or a programmable logic device (PLD)) used to provide machine instructions and/or data to the programmable processor, including a machine readable medium that receives machine instructions as machine readable signals. The term “machine readable signal” refers to any signal used to provide machine instructions and/or data to the programmable processor.
To provide interaction with a user, the systems and technologies described herein may be implemented on a computer having: a display apparatus for displaying information to the user (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor); and a keyboard and a pointing apparatus (for example, a mouse or trackball) through which the user may provide input to the computer. Other types of apparatuses may also be used to provide interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form (including acoustic input, speech input, or tactile input).
The systems and technologies described herein may be implemented in a computing system that includes backend components (e.g., a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes frontend components (for example, a user computer having a graphical user interface or a web browser, through which the user may interact with the implementations of the systems and the technologies described herein), or a computing system that includes any combination of such backend components, middleware components, or frontend components. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of the communication network include: local area networks (LAN), wide area networks (WAN), and the Internet.
The computer system may include a client and a server. The client and the server are generally remote from each other and usually interact through the communication network. The relationship between the client and the server is generated by computer programs that run on the corresponding computers and have a client-server relationship with each other.
Artificial intelligence is the discipline of studying how computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and covers both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing. Artificial intelligence software technologies mainly include several major directions such as computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, and knowledge graph technology.
According to the technical solution in the present disclosure, a bill image is first acquired. Afterwards, the bill image is inputted into a feature extraction network layer of a pre-trained bill recognition model, to obtain a bill key field feature map and a bill key field value feature map of the bill image. Next, the bill key field feature map is inputted into a first head network layer of the bill recognition model to obtain a bill key field. Then, the bill key field value feature map is processed by using a second head network layer of the bill recognition model, to obtain a bill key field value, the feature extraction network layer being respectively connected with the first head network layer and the second head network layer. Finally, structured information of the bill image is generated based on the bill key field and the bill key field value. According to the present disclosure, the bill key field feature map and the bill key field value feature map are obtained by using the feature extraction network layer of the bill recognition model. Then, based on the first head network layer and the second head network layer of the bill recognition model, the recognition for the bill key field and the bill key field value included in the structured information in the bill image may be accurately implemented.
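The final step above, generating structured information from the recognized bill key fields and bill key field values, amounts to pairing them up as key-value entries. A minimal sketch, with field names invented for illustration:

```python
def build_structured_info(bill_key_fields, bill_key_field_values):
    # Pair each recognized bill key field with its recognized value
    # to form the structured information of the bill image.
    return dict(zip(bill_key_fields, bill_key_field_values))
```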
It should be understood that the various forms of processes shown above may be used to reorder, add, or delete steps. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in different orders. As long as the desired results of the technical solution disclosed in the present disclosure may be achieved, no limitation is made herein.
The above particular embodiments do not constitute limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement and improvement made within the spirit and principle of the present disclosure shall be included in the protection scope of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
202011501307.1 | Dec 2020 | CN | national