The present subject matter is directed generally to data extraction, and more particularly to unstructured data analysis to generate a structured data output based on customizable template rules.
Given the large amounts of data related to any facet of life, it is no wonder that any manual review of even relatively small sets of documents can prove to be time consuming, tedious, and expensive. This is the case for any manual review process implemented with respect to, e.g., claim processing. In these cases, claim processing can involve large amounts of documents that need to be reviewed to find, identify, and extract data that is relevant to a particular case, such as information about the client and involved parties, and factual evidence. Complicating the process is the fact that most documents that need to be reviewed are not structured documents, in the sense that the documents include natural language expressions rather than structured language fields. Claim processors thus must parse through large volumes of documents looking for relevant information, which may lead to missed information and, even in the best of cases, may be a very expensive process.
Some solutions have been proposed to address the challenges with manual document review, most involving computer-assisted review. In one particular solution, a system provides functionality to recognize and extract all items of specific types, such as entities, dates, etc. However, this solution offers no semantic context for the extracted data. As such, a user must still parse through the extracted items, without context, to identify desired data. Thus, this solution offers only marginal improvements.
Another solution that has been proposed involves more sophisticated data extraction methods, such as using business rules or machine learning algorithms. In some cases, the extraction algorithms need to be trained by the user before they can be applied. However, in these cases, the rules and algorithms may be hardcoded and non-transparent. These extraction algorithms are essentially a black box that does not provide transparency into the extraction process or allow a user to make dynamic modifications. Thus, these solutions are inflexible.
The present application relates to systems and methods for providing computer-assisted guided review of unstructured data to generate a structured data output based on customizable template rules. In one particular embodiment, a method of generating a structured report from unstructured data may be provided. The method may include receiving at least one input file containing the unstructured data, and selecting a predefined template. The predefined template may include a plurality of fields, each field corresponding to a field of the structured report. The predefined template may define at least one extraction rule for one or more fields in the plurality of fields of the predefined template. The at least one extraction rule may define parameters for identifying data in the unstructured data of the at least one input file that is relevant to the corresponding field of the predefined template. The method may also include applying the at least one extraction rule to the at least one input file to identify the data that is relevant to the field associated with the corresponding at least one extraction rule. The method may further include confirming the data identified as relevant. Confirming the data identified as relevant may include determining to refine the data identified as relevant to the field associated with the corresponding at least one extraction rule based on at least one condition of the data identified as relevant, and modifying, in response to the determining, the at least one extraction rule associated with the field to refine the data identified as relevant to the field.
In another embodiment, a system for generating a structured report from unstructured data may be provided. The system may include at least one unstructured document source, and a server. The server may be configured to receive at least one unstructured document and a user input to select a predefined template. The predefined template may include a plurality of fields, each field corresponding to a field of the structured report. The predefined template may define at least one extraction rule for one or more fields in the plurality of fields of the predefined template, and the at least one extraction rule may define parameters for identifying data in the unstructured data of the at least one unstructured document that is relevant to the corresponding field of the predefined template. The server may also be configured to apply the at least one extraction rule to the at least one unstructured document to identify the data that is relevant to the field associated with the corresponding at least one extraction rule. The server may be further configured to confirm the data identified as relevant. Confirming data identified as relevant may include determining to refine the data identified as relevant to the field associated with the corresponding at least one extraction rule based on at least one condition of the data identified as relevant, and modifying, in response to the determining, the at least one extraction rule associated with the field to refine the data identified as relevant to the field.
In yet another embodiment, a computer-based tool for generating a structured report from unstructured data may be provided. The computer-based tool may include non-transitory computer readable media having stored thereon computer code which, when executed by a processor, causes a computing device to perform operations that may include selecting a predefined template. The predefined template may include a plurality of fields, each field corresponding to a field of the structured report. The predefined template may define at least one extraction rule for one or more fields in the plurality of fields of the predefined template, and the at least one extraction rule may define parameters for identifying data in at least one unstructured document that is relevant to the corresponding field of the predefined template. The operations may also include displaying data identified as relevant to the one or more fields of the plurality of fields. The data identified as relevant to the one or more fields may be identified based on an application of the at least one extraction rule associated with a corresponding field to the at least one unstructured document. The operations may further include confirming the data identified as relevant. Confirming the data identified as relevant may include determining to refine the data identified as relevant to the field associated with the corresponding at least one extraction rule based on at least one condition of the data identified as relevant, and causing modification, in response to the determining, of the at least one extraction rule associated with the field to refine the data identified as relevant to the field.
The foregoing broadly outlines the various aspects and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Various features and advantageous details are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known starting materials, processing techniques, components, and equipment are omitted so as not to unnecessarily obscure the invention in detail. It should be understood, however, that the detailed description and the specific examples, while indicating embodiments of the invention, are given by way of illustration only, and not by way of limitation. Various substitutions, modifications, additions, and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.
It is noted that the functional blocks, and components thereof, of system 100 of embodiments of the present invention may be implemented using processors, electronics devices, hardware devices, electronics components, logical circuits, memories, software codes, firmware codes, etc., or any combination thereof. For example, one or more functional blocks, or some portion thereof, may be implemented as discrete gate or transistor logic, discrete hardware components, or combinations thereof configured to provide logic for performing the functions described herein. Additionally or alternatively, when implemented in software, one or more of the functional blocks, or some portion thereof, may comprise code segments operable upon a processor to provide logic for performing the functions described herein.
Unstructured data files 190 may comprise at least one document including unstructured data. Unstructured data may refer to information expressed in natural language, may include information structured differently than the desired output report (e.g., as indicated by a predefined template), and may include information structured differently in different files of unstructured data files 190. Unstructured data files 190 may include files having various formats (e.g., pdf, txt, doc, etc.). In one particular example, content of data files of unstructured data files 190 may include information related to claims, such as personal injury claims, insurance claims, etc. Information related to particular aspects of a claim may be spread over a particular document, or documents, in the unstructured data files 190. For example, information related to a period of employment of a particular person may be included in different sections of a document, or documents. Similarly, a date of birth of a person may be in some section of some document, or documents. From this, it will be appreciated that identifying and extracting such information from unstructured data files 190 manually may be difficult, time-consuming, and tedious. Even using existing automated systems, which may extract all dates, a user may have to go through all dates to manually filter the correct desired date. As will be further appreciated, aspects of the present disclosure provide a mechanism to alleviate and obviate the deficiencies of existing systems.
User terminal 170 may be implemented as a mobile device, a smartphone, a tablet computing device, a personal computing device, a laptop computing device, a desktop computing device, a computer system of a vehicle, a personal digital assistant (PDA), a smart watch, another type of wired and/or wireless computing device, or any part thereof. User terminal 170 may be configured to provide a GUI structured to facilitate input and output operations in accordance with aspects of the present disclosure. Input and output operations may include operations for selecting data files from unstructured data files 190 for input to server 110, selecting a predefined template to apply to the selected files to identify relevant content based on the extraction rules in the selected predefined template, validating the identified relevant content, modifying the extraction rules to refine the extraction process, and selecting relevant content to include in the output report. These functions are described in more detail below. In some embodiments, users may create the predefined templates. Creating the predefined templates may include creating and/or specifying extraction rules to be included in the predefined templates. Aspects for creation of predefined templates and extraction rules are described in more detail below.
It is noted that, in some embodiments, system 100 may be configured with different levels of users. For example, users may be assigned an admin level or a user level. Admin level may be higher than user level, and may include more and/or higher privileges than user level. For example, an admin may be allowed to make configuration changes and to specify a layout of the GUI. In addition, the admin may be allowed to create predefined templates, while a user may be allowed to select predefined templates but not create them. In embodiments, an admin may also be allowed to create extraction rules and assign them to particular sections of the predefined template, while a user may be allowed to modify the extraction rules but not reassign them from the particular sections to which the extraction rules are assigned.
Server 110, user terminal 170, and unstructured data files 190 may be communicatively coupled via network 180. Network 180 may include a wired network, a wireless communication network, a cellular network, a cable transmission system, a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), the Internet, the Public Switched Telephone Network (PSTN), etc., that may be configured to facilitate communications between server 110, user terminal 170, and unstructured data files 190.
Server 110 may be configured to receive as an input at least one unstructured data file in unstructured data files 190, to provide extraction of relevant content from the data files based on a predefined template and dynamically modifiable extraction rules, to facilitate modification of the dynamically modifiable extraction rules by a user, and to provide a structured output report based on the extracted relevant content. This functionality of server 110 may be provided by the cooperative operation of various components of server 110, as will be described in more detail below. Although
As shown in
Memory 112 may comprise one or more semiconductor memory devices, read only memory (ROM) devices, random access memory (RAM) devices, one or more hard disk drives (HDDs), flash memory devices, solid state drives (SSDs), erasable ROM (EROM), compact disk ROM (CD-ROM), optical disks, other devices configured to store data in a persistent or non-persistent state, network memory, cloud memory, local memory, or a combination of different memory devices. Memory 112 may comprise a processor readable medium configured to store one or more instruction sets (e.g., software, firmware, etc.) which, when executed by a processor (e.g., one or more processors of processor 111), perform tasks and functions as described herein.
Memory 112 may also be configured to facilitate storage operations. For example, memory 112 may comprise database 120 for storing user profile information (e.g., privilege levels, preference data, statistical data, etc.), predefined templates, extraction rules, etc., which system 100 may use to provide the features discussed herein. Database 120 is illustrated as integrated into memory 112, but may be provided as a separate storage module. Additionally or alternatively, database 120 may be a single database, or may be a distributed database implemented over a plurality of database modules.
Templates and rules module 150 may be configured to facilitate creation and configuration of predefined templates and extraction rules to be defined and included in the predefined templates. In some embodiments, a user with administrative privileges may use terminal 170 to create and configure, using the GUI, a predefined template using the functionalities of templates and rules module 150. A predefined template may include various fields and sections that correspond to fields and sections of a structured output report. In that sense, a predefined template may be viewed as defining the structured output report. Templates and rules module 150 may also include functionality to allow the user to specify, for the various fields and sections, the information required to be included in those fields and sections of the template (and consequently in the structured output report). For example, a user may specify a name of “date of claim” for a particular field, and may specify that for the “date of claim” field, a date should be entered. Additionally, the user may also specify extraction rules that may be applied to the unstructured input files to obtain the “date of claim” date. These extraction rules will be discussed in more detail below. The same may be done for each field and section of the predefined template. The result, after operations in accordance with aspects of the present disclosure, may be a template in which each field includes relevant information extracted from the unstructured input files based on a corresponding extraction rule for the various fields of the template. It is noted that different templates may be created for different use cases and for different structured output reports. The extracted information may then be used to generate a corresponding structured output report.
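By way of illustration only, a predefined template and its per-field extraction rules might be represented as plain data along the following lines. The field names, rule keys, and values shown here are hypothetical examples, not part of the disclosed implementation.

```python
# Illustrative sketch of a predefined template; all names and values are hypothetical.
predefined_template = {
    "name": "letter_of_claim_report",
    "fields": [
        {
            "name": "date_of_claim",
            "expected_type": "date",
            "extraction_rules": [
                # e.g., "the first date appearing in a document of type 'letter of claim'"
                {"filter": "document_type", "value": "letter of claim"},
                {"filter": "item_type", "value": "date"},
                {"filter": "order_of_occurrence", "value": 1},
            ],
        },
        {
            "name": "liability_comments",
            "expected_type": "statements",
            "extraction_rules": [
                {"filter": "keyword", "value": "hearing protection"},
            ],
        },
    ],
}
```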
A structured output report may be designed to provide a quick reference view of information contained in one or more documents for an end user reviewing a work file. As shown in
Structured output report 250 may also include section 253 for including liability comments. In this case, a user may specify that this section may include statements. An extraction rule may be specified for section 253 that facilitates collection of any statement within the unstructured input files related to liability. As will be further explained below, this may include identifying and tagging sentences within the unstructured input files, and applying filters that identify sentences relevant to liability, such as by using keywords, semantic expressions, entities within the context of a keyword, etc.
Templates and rules module 150 may also be configured to facilitate modification of the extraction rules by a user during operations. In aspects, a user may edit the extraction rules to further refine the extraction of relevant content from the unstructured input files. For example, a predefined template field may require a claimant's date of birth. An extraction rule associated with this predefined template field may search for dates and extract all of the dates as potential matches to the date of birth. In this case, during operation, a user may modify the extraction rule to include a filter that extracts a date that is proximate to a keyword “DOB.” As a result, the potential matches are further refined based on the modification, which results in more accurate results being provided to the user. This functionality of templates and rules module 150 will be discussed in more detail below. In some aspects, templates and rules module 150 may also include functionality to automatically refine the extraction rules based on a user selection. For example, where an extraction rule returns multiple matches, a user selecting one of the matches may cause templates and rules module 150 to refine the extraction rules to account for the user selection.
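A minimal sketch of this kind of refinement is shown below. The toy date pattern, the character-distance window, and the function names are illustrative assumptions, not the module's actual implementation.

```python
import re

def extract_dates(text):
    """Very rough date matcher used only for this illustration."""
    return [(m.group(), m.start()) for m in re.finditer(r"\d{1,2}/\d{1,2}/\d{4}", text)]

def apply_rule(text, keyword=None, window=20):
    """Return candidate dates; if a keyword filter is set, keep only dates near it."""
    candidates = extract_dates(text)
    if keyword is None:
        return [d for d, _ in candidates]
    hits = [m.start() for m in re.finditer(re.escape(keyword), text)]
    return [d for d, pos in candidates if any(abs(pos - h) <= window for h in hits)]

doc = "DOB: 03/04/1970. Date of accident: 12/11/2015."
print(apply_rule(doc))                 # all dates: ['03/04/1970', '12/11/2015']
print(apply_rule(doc, keyword="DOB"))  # refined rule: ['03/04/1970']
```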
With reference back to
In another aspect, a Hypertext Markup Language (HTML) conversion approach may be used. In an HTML approach, the unstructured input file may be processed to obtain an HTML version of the unstructured input file. The HTML conversion may be accomplished using various commonly available tools, and/or customized tools. For example, the PDFMiner python package may be used to obtain an HTML version of the unstructured input file. The HTML version of the file respects line breaks, and also includes HTML tags that specify different sections of the unstructured input file (e.g., header, body, title, paragraphs, etc.). The HTML tags of the HTML version of the unstructured input file may be used to break up the unstructured input file into chunks, where each chunk may correspond to different sections of the unstructured input file. It will be appreciated that the chunks may be more manageable than the entire unstructured input file. As such, each chunk may then be processed to split the chunk into sentences. For example, NLP algorithms may be applied to the chunks to split the chunks into sentences. The NLP algorithms may be obtained using various commonly available tools, and/or customized tools. For example, the NLTK python package may be used to split the chunks into sentences. The result of this approach is a textual representation of each section of the unstructured input file split into individual sentences.
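A rough sketch of this HTML-based splitting is shown below, assuming the pdfminer.six and NLTK packages mentioned above are installed (NLTK additionally needs its "punkt" tokenizer data). The div-based chunking and the tag-stripping regular expression are simplifications for illustration only.

```python
import io
import re
import nltk  # sentence splitting; requires nltk.download("punkt")
from pdfminer.high_level import extract_text_to_fp  # pdfminer.six
from pdfminer.layout import LAParams

def pdf_to_sentences(pdf_path):
    # Convert the unstructured input file to HTML so structural tags are preserved.
    html_buf = io.BytesIO()
    with open(pdf_path, "rb") as f:
        extract_text_to_fp(f, html_buf, output_type="html", laparams=LAParams())
    html = html_buf.getvalue().decode("utf-8")

    # Simplified chunking: treat each <div> as a chunk; a fuller implementation
    # would walk the HTML tree (headers, body, paragraphs, etc.).
    chunks = [c for c in re.split(r"</?div[^>]*>", html) if c.strip()]

    # Strip remaining tags and split each chunk into sentences.
    sentences = []
    for chunk in chunks:
        text = re.sub(r"<[^>]+>", " ", chunk)
        sentences.extend(nltk.sent_tokenize(text))
    return sentences
```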
It is noted that the text conversion approach works well for floating text, such as the body of an email. In some cases, the text conversion approach may extract subsentence-level content, such as content from columns, headers, lists, and other structural elements, in a random order or as one big sentence. However, in this situation, the HTML conversion approach may work well, as it involves using structural components captured in HTML to break up content into chunks and allows subsentence-level items, such as contact information and bullet point items, to be extracted more easily. As such, embodiments of the present disclosure may use a combined approach to split the unstructured input files into sentences, in which a combination of the text conversion and HTML conversion approaches may be used. It is also noted that the resulting individual sentences may or may not be semantically coherent. For example, a particular sentence may be a sentence such as “Please note that as much information as possible is provided, whether herein or in the enclosures.” As will be appreciated, this sentence is semantically coherent. Other particular sentences, however, may not be semantically coherent. For example, another sentence resulting from the splitting operations may be “someone@email.com,” which may not be semantically coherent by itself. This is because the splitting operations of split and tag module 130 split the unstructured input file into the sentences, while further operations of system 100, as discussed below, identify sentences which are statements, and those which are actually subsentence elements (e.g., dates, entities, times, values, special designations, identifications, email addresses, telephone numbers, etc.).
In some embodiments, prior to the splitting of the unstructured input files, the files may be processed to digitize the content within the files. For example, the unstructured input files in unstructured data files 190 may be scanned files, image files, and/or other types of non-searchable files. In this case, the unstructured input files may be processed using optical character recognition (OCR). In embodiments, the unstructured input files are further processed to refine white space and character recognition, to handle tables, tick boxes, line breaks, columns, and other structural elements, and to identify and integrate special symbols and images. This further functionality may be implemented using machine learning algorithms.
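As a purely illustrative sketch of this digitization step, the snippet below assumes the pdf2image and pytesseract packages (and a Tesseract installation); neither is named in the disclosure, and a production pipeline would add the layout and symbol handling described above.

```python
# Hypothetical OCR pre-processing step; pdf2image and pytesseract are assumed
# here only for illustration.
from pdf2image import convert_from_path
import pytesseract

def ocr_pdf(path):
    """Digitize a scanned, non-searchable PDF into raw text, page by page."""
    pages = convert_from_path(path)  # render each page as an image
    return "\n".join(pytesseract.image_to_string(page) for page in pages)
```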
Split and tag module 130 may be configured to, subsequent to splitting the unstructured input files into sentences, identify and tag subsentence items. In aspects, identifying and tagging subsentence items may be accomplished using NLP algorithms. Subsentence items may include items that are not necessarily a sentence, and may include items such as dates, entities, times, values, special designations, identifications, email addresses, telephone numbers, etc. In some embodiments, identifying and tagging subsentence items may include performing named entity recognition and date tagging. Named entity recognition may include applying NLP algorithms to identify entities (e.g., organizations, facilities, groups, companies, countries, governments, persons, places, products, etc.). Named entity recognition may be accomplished using various commonly available tools, and/or customized tools. For example, OpenCalais may be used to identify and tag subsentence items as entities. In some embodiments, named entity recognition may also provide a positive identification of the entity. Date tagging may include identifying, normalizing, and extracting dates, times, and/or periods from the unstructured input file. Date tagging may be accomplished using various commonly available tools, and/or customized tools. For example, Stanford NLP's SUTime library may be used to identify, normalize, and extract dates, times, and/or periods from the unstructured input file. Additionally or alternatively, for example, the datefinder python package may be used to perform date tagging operations.
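The sketch below illustrates one way such a tagging pass could look. The datefinder package is mentioned above; spaCy is used here only as a convenient stand-in for an entity recognizer such as OpenCalais, and the small English model is an assumption.

```python
import datefinder
import spacy  # illustrative stand-in for an entity recognizer such as OpenCalais

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def tag_subsentence_items(sentence):
    tags = []
    for ent in nlp(sentence).ents:              # named entity recognition
        tags.append({"type": ent.label_, "raw": ent.text})
    for dt in datefinder.find_dates(sentence):  # date identification and normalization
        tags.append({"type": "DATE", "normalized": dt.isoformat()})
    return tags

print(tag_subsentence_items("John Smith was exposed to noise at Acme Ltd on 3 May 2001."))
```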
It will be appreciated that although the functionality of split and tag module 130 allows system 100 to split the unstructured input files into sentences and to identify and tag subsentence items, there is as yet no relation between the different subsentence items, the sentences, and the unstructured input file. To provide such relations, split and tag module 130 provides an indexing functionality. Indexing provides a relationship between the subsentence items, the sentences, and the unstructured input file. In some implementations, indexing includes three search indices: a document level index, a sentence level index, and a subsentence index. In some embodiments, the subsentence index may be a document level subsentence index.
The document level index may include an ID field, a document type field, and a document content field. The ID field may include the case number of the case associated with the document and a document number, and may be generated when the document is selected for input, or when the document is uploaded for OCR. The document type field may specify the type of document (e.g., letter of claim, tax schedule, letter of defense to claimant, etc.). The document content field may include the unstructured input file content as raw text.
The sentence level index may include a case ID field, a document ID field, a document type field, a sentence ID field, a sentence text field, a raw sentence level subsentence tag field, and a normalized sentence level subsentence tag field. For example, with reference to
As noted above, the subsentence index may be a document level subsentence index. In this case, the subsentence index may include a case ID field, a document ID field, a normalized subsentence item text field, a raw subsentence item text field, a context strings field, and an offset field. The case ID may include the ID of the case associated with the document, and the document ID field may include the ID of the document where a particular subsentence item is found. The case ID and document ID may be generated when the document is selected for input, or when the document is uploaded for OCR. The normalized subsentence item text field may include the normalized subsentence item text. For example, a particular subsentence item may be identified and tagged as an entity. In this case, the normalized subsentence item text field may include the normalized name of the entity. The raw subsentence item text field may include the raw subsentence item text as it appears in the unstructured input file. For example, where a particular subsentence item is identified and tagged as an entity, the raw subsentence item text field may include the name of the entity as it appears in the unstructured input file. The context strings field may include strings within which the entity appears, or that are proximate to the subsentence item within the document. In aspects, the proximity may be configurable and may be specified as a number of words, letters, spaces, or characters from the subsentence item. The offset field may include a value indicating the location of the subsentence item within the unstructured input file relative to the start of the unstructured input file, or the location of the subsentence item within a sentence relative to the start of the sentence.
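One possible in-memory representation of the three indices described above is sketched below. The field names follow the description; the dataclass shapes themselves are illustrative only (a practical system might instead use a search engine index).

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative record shapes for the three indices; names follow the description above.

@dataclass
class DocumentRecord:                 # document level index
    doc_id: str                       # case number plus document number
    document_type: str                # e.g., "letter of claim"
    content: str                      # raw text of the unstructured input file

@dataclass
class SentenceRecord:                 # sentence level index
    case_id: str
    document_id: str
    document_type: str
    sentence_id: str
    text: str
    raw_tags: List[str] = field(default_factory=list)         # raw subsentence tags
    normalized_tags: List[str] = field(default_factory=list)  # normalized subsentence tags

@dataclass
class SubsentenceRecord:              # document level subsentence index
    case_id: str
    document_id: str
    normalized_text: str              # e.g., normalized entity name or normalized date
    raw_text: str                     # text exactly as it appears in the source
    context_strings: List[str] = field(default_factory=list)
    offset: int = 0                   # position relative to the document or sentence start
```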
With reference back to
In aspects, the extraction rules of embodiments may include various rules for identifying information within an unstructured document relevant to a particular section of a predefined template. This may be accomplished using various search and filtering functions provided by search and filter module 140. In some cases, the extraction rules may include a combination of any of the following search and filtering functions. It will be appreciated that the following search and filtering functions are intended to be exemplary, and not limiting. Those of skill in the art will appreciate that other search and filtering functions may also be used to implement extraction rules. Additionally, it is noted that extraction rules may be included in predefined templates and may be associated with particular fields. In embodiments, extraction rules may be defined in a predefined template in a default form. For example, the default extraction rules may include a combination of any of the search and filtering functions discussed below. During operation, a user may select a template having default extraction rules, which may be applied to the unstructured input files. As will be discussed in more detail below, a user may determine to modify the default extraction rules. For example, the user may determine to modify the default extraction rules in the predefined template to include any combination of the search and filtering functions discussed below, to include searches, filters, keywords, etc., in order to refine the results obtained for the associated field.
In an embodiment, the search and filtering functions may include text keyword filters that may be implemented to search for a particular keyword in a sentence or an entity. In this case, the sentence and subsentence indices may be leveraged to identify the indexed data of any sentence and/or entity in which the keyword appears. For example, the content may be filtered to identify all sentences containing the phrase “hearing protection.” The result according to aspects of the present disclosure would be not only identification of sentences containing the keyword, but also the sentence IDs, the case IDs, the document IDs, and any subsentence tags associated with that sentence.
Another search and filtering function may include an order of occurrence filter. In this case, the order of occurrence of a particular item (e.g., a sentence, or an entity) may be obtained by application of this filter. The order of occurrence may indicate the order of appearance of the item, either within an unstructured input file or within a sentence. For example, for a particular date, the particular date may be the first date mentioned in an unstructured input file. In this case, the order of occurrence may be found to be 1. In a particular application, a user may determine that the first date that appears within a document of type “letter of claim” may be the date of the claim. In this case, application of the order of occurrence filter may yield a result that can be leveraged to identify the particular date as the date of the claim. For example, an extraction rule associated with a date of claim section of a predefined template may specify that for documents of type “letter of claim,” a date with an occurrence of 1 may be determined to be the date of the claim.
Still another search and filtering function may include a search for relative quantities. For example, this search function may compare values and return the smaller or larger value in the unstructured input file or sentence. In some aspects, the object returned may be relative to a static value. For example, a filter may be defined to extract any date that falls before a given date. In some implementations, a filter may be defined to return items that appear before another given item. For example, a filter may return all dates that appear within a document before the last date that appears within the document.
Yet another search and filtering function may include a search for subsentence items within a context of a keyword. For example, a filter may be defined to extract periods of time that are lexically proximate to a keyword, such as “exposure.” In this case, the filter may return durations found within a sentence proximate to the term “exposed,” or semantically similar terms.
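Minimal sketches of these search and filtering functions are shown below. They operate on a simple list of tagged items (type, value, sentence, offset); the item shape and function names are illustrative assumptions rather than the module's API.

```python
from datetime import date

# Illustrative sketches of the search and filtering functions discussed above;
# the item shape {"type", "value", "sentence", "offset"} is an assumption.

def keyword_filter(items, keyword):
    """Text keyword filter: keep items whose sentence contains the keyword."""
    return [i for i in items if keyword.lower() in i["sentence"].lower()]

def order_of_occurrence(items, item_type, n=1):
    """Order-of-occurrence filter: the n-th item of a type by document position."""
    typed = sorted((i for i in items if i["type"] == item_type), key=lambda i: i["offset"])
    return typed[n - 1] if len(typed) >= n else None

def before_value(items, item_type, cutoff):
    """Relative-quantity filter: items of a type whose value falls before a static value."""
    return [i for i in items if i["type"] == item_type and i["value"] < cutoff]

def in_context_of(items, item_type, keyword):
    """Keyword-context filter: items of a type whose sentence also mentions the keyword."""
    return [i for i in items if i["type"] == item_type and keyword.lower() in i["sentence"].lower()]

items = [
    {"type": "date", "value": date(2015, 11, 12),
     "sentence": "The accident occurred on 12/11/2015.", "offset": 120},
    {"type": "date", "value": date(1970, 4, 3),
     "sentence": "DOB: 03/04/1970.", "offset": 40},
]
print(order_of_occurrence(items, "date"))   # first date appearing in the document
print(in_context_of(items, "date", "DOB"))  # dates in the context of the keyword "DOB"
```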
Search and filter module 140 may also be configured to provide functionality to facilitate extraction of relevant statements within the unstructured input files. In some cases, a template may specify that for a particular section, statements relevant to a particular fact, or facts, are to be included. For example, in one particular case, an important factor for determining liability of an employer may be to determine whether the employer provided protective equipment to the claimant. In this case, a predefined template for generating a structured report for the case may include a section for including information regarding the provision of hearing protection. As will be appreciated, there are various ways of expressing information related to this fact. For example, this fact could be expressed in the text as “Our client was never provided with hearing protection,” or “Your company failed to provide adequate protective gear in the form of ‘hearing protection.’” Search and filter module 140 provides functionality to account for such textual differences when identifying and extracting factual information.
In aspects, one approach to facilitate extraction of relevant statements within the unstructured input files may include a Boolean keyword search. In this approach, statements including a keyword related to the factual information desired may be extracted. As such, this approach may filter the unstructured text to include only the results that match the query (e.g., any sentence containing “hearing OR protection”). In embodiments, the results may be ranked based on word overlap using, e.g., term frequency-inverse document frequency (TF-IDF) algorithms or similar statistical analysis.
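The following sketch illustrates the Boolean-keyword-plus-TF-IDF ranking described above; scikit-learn is used here as one convenient implementation choice and is an assumption, as is the naive handling of the OR operator.

```python
# Illustrative sketch; scikit-learn is an assumed implementation choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def boolean_tfidf_search(sentences, query):
    # Boolean step: keep only sentences containing at least one query term.
    terms = [t.strip() for t in query.lower().split(" or ")]
    hits = [s for s in sentences if any(t in s.lower() for t in terms)]
    if not hits:
        return []
    # Ranking step: order the hits by TF-IDF word overlap with the query.
    vec = TfidfVectorizer().fit(hits + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(hits))[0]
    return [s for _, s in sorted(zip(scores, hits), reverse=True)]

sentences = [
    "Our client was never provided with hearing protection.",
    "The claim was received on 12 March 2016.",
]
print(boolean_tfidf_search(sentences, "hearing OR protection"))
```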
Another approach to facilitate extraction of relevant statements within the unstructured input files may include a semantic similarity search. For example, a semantic textual similarity algorithm may be applied to the unstructured input files to identify sentences that are semantically similar. In this case, an extraction rule may be defined to include a semantic textual search using an input sentence. For example, following the above case illustrated in
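A hypothetical semantic similarity search is sketched below using sentence embeddings; the sentence-transformers package, the model name, and the similarity threshold are all assumptions for illustration and are not the algorithm used in the disclosure.

```python
# Hypothetical sketch; sentence-transformers and the model name are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def semantically_similar(sentences, query, threshold=0.5):
    """Return sentences whose embedding is close to the query sentence."""
    query_vec = model.encode(query, convert_to_tensor=True)
    sent_vecs = model.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, sent_vecs)[0]
    return [s for s, score in zip(sentences, scores) if float(score) >= threshold]

matches = semantically_similar(
    ["Your company failed to provide adequate protective gear.",
     "The hearing was adjourned until June."],
    "Our client was never provided with hearing protection.",
)
```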
Content annotator and output generator 160 may be configured to provide functionality for annotating the content extracted from the unstructured input files based on the extraction rules to facilitate collection of relevant content to be included in the structured output report based on the predefined template. In aspects, the annotations to the extracted content may include highlighting, or otherwise marking, the relevant content within a graphical representation of the unstructured input file in a GUI. For example, as shown in
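A simplified sketch of this annotation step is shown below: extracted spans, identified by their offsets, are wrapped in mark tags so a GUI can highlight them within a rendering of the source document. The helper name and the HTML-based highlighting are illustrative assumptions.

```python
# Illustrative highlighting of extracted spans; the approach is an assumption.
import html

def annotate(document_text, spans):
    """spans: list of (start_offset, end_offset) pairs for extracted relevant content."""
    out, cursor = [], 0
    for start, end in sorted(spans):
        out.append(html.escape(document_text[cursor:start]))
        out.append("<mark>" + html.escape(document_text[start:end]) + "</mark>")
        cursor = end
    out.append(html.escape(document_text[cursor:]))
    return "".join(out)

print(annotate("DOB: 03/04/1970. Exposure began in 1998.", [(5, 15)]))
# DOB: <mark>03/04/1970</mark>. Exposure began in 1998.
```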
Content annotator and output generator 160 may also be configured to generate the structured output report based on the extracted relevant content associated with each of the predefined template fields and sections. In aspects, the structured output report may be generated by populating the structured output report with the relevant content extracted for each corresponding field and section of the associated predefined template. In some embodiments, content annotator and output generator 160 may be configured to generate, structure, and populate the GUI provided by user terminal 170.
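As a minimal illustration, populating the structured output report can be thought of as filling each template field with its confirmed content; the data shapes below are hypothetical.

```python
# Illustrative only: fill each template field with its confirmed extracted value.
def generate_report(template, confirmed_matches):
    """template: dict with a "fields" list; confirmed_matches: field name -> confirmed value(s)."""
    return {f["name"]: confirmed_matches.get(f["name"], "") for f in template["fields"]}

report = generate_report(
    {"fields": [{"name": "date_of_claim"}, {"name": "liability_comments"}]},
    {"date_of_claim": "2016-03-12"},
)
```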
In general terms, embodiments of the present disclosure provide functionality for search capabilities that go beyond a basic keyword search. Aspects of the present disclosure allow for the combination and storage of not only keyword searches but also more advanced semantic searches, and for associating the searches to specific portions of a predefined template. As such, the information extraction and review process by an end-user is significantly improved. In addition, the various aspects providing for content annotation allow a user to more easily collect and link individual statements to a predefined template section (e.g., evidence for liability, evidence for limitation, etc.). This enables a user to rapidly build up a large set of annotated structured data, based on unstructured source documents. Furthermore, various aspects of the present disclosure provide the ability for a user to dynamically customize and review extraction rules, which creates a level of transparency that is lacking in existing systems. This also allows the user to describe and create extraction mechanisms for more complex concepts, such as “date of birth,” “defendant's name,” etc. Therefore, Applicant notes that the solution described herein is superior, and thus, provides an advantage over prior art systems.
One application of the techniques and systems disclosed herein may be in a claims processing environment. As noted above, claim processing involves analysis of large amounts of documents and data, which are usually unstructured. Typically, the documents are analyzed and reviewed manually by a user. The user reviews the document and parses the content to identify information relevant to a particular use. For example, a report may require certain data, which the user must then find and extract from the unstructured documents. In another example, there may be questions that may be answered by sections of the unstructured document, but the user must find, identify, and extract those sections from the unstructured document. Even in systems that use extraction algorithms, the extraction algorithms are usually a black box that does not provide transparency into the extraction process or allow a user to make dynamic modifications. Aspects of the present disclosure provide an advantageous system that allows a user to not only easily identify potential relevant content, but to also dynamically modify the extraction rules for a more flexible, responsive, and robust approach. It is again noted that the discussion that follows, which is directed to claim processing, is merely an example embodiment and should not be construed as limiting in any way.
At block 302, a user creates a case for claim processing. For example, a user may determine to review an insurance claim, or a personal injury claim, and may create a new case for the claim. In some aspects, the case review may include generating a structured output report (e.g., structured output report 250 of
With the case created, the user selects a template to use at block 304. For example, with reference to
Referring back to
At block 308, the extraction rules defined in the selected template are applied to the content of the unstructured source documents in order to identify and extract the content relevant to the template fields associated with the corresponding extraction rules. As described above, prior to the application of the extraction rules, the content of the unstructured source documents may be split and tagged in accordance with the functionality of split and tag module 130 of
At optional block 310, the user may confirm the potential matches for each of the template fields. For example, for field 410 in
In embodiments, the selected template may define a section or sections for collection of evidence. In this case, the section may require statements, which may include sentences related to a particular type of evidence (e.g., liability, employment, etc.). In addition, themes may be specified for each type of evidence. For example, as shown in
With reference back to
It is appreciated that some or all such fields of a template may be refinable, and the type of refinement may be dependent on the source documents and user preferences. Hence, the above rule modification is provided by way of example, and one of ordinary skill in the art would understand that various modifications may be possible when provided with the present system.
In embodiments, a progress bar 405 may be presented to the user to provide a visual indication of the fraction of information required that has been extracted. In aspects, a different visual indicator may be used to represent information that has been confirmed than to represent information that has not been confirmed. Therefore, as more potential matches are confirmed, the indicator in progress bar 405 increases.
Referring back to
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Skilled artisans will also readily recognize that the order or combination of components, methods, or interactions that are described herein are merely examples and that the components, methods, or interactions of the various aspects of the present disclosure may be combined or performed in ways other than those illustrated and described herein.
Functional blocks and modules in
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, a cloud storage facility, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal, base station, a sensor, or any other communication device. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. Computer-readable storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, a connection may be properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, or digital subscriber line (DSL), then the coaxial cable, fiber optic cable, twisted pair, or DSL, are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
The present application claims priority to U.S. Provisional Application No. 62/626,829, filed Feb. 6, 2018 and entitled, “CLAIMS ASSESSMENT,” the disclosure of which is incorporated by reference herein in its entirety.