The present disclosure relates in general to the field of computer software and systems, and in particular, to a system and method for advanced document redaction.
The advent of cloud-based hosting services has enabled many opportunities for service developers to offer additional services that are of much utility to users. To offer these services, a service provider may process a large set of documents for a large number of users in an effort to determine particular patterns in the documents that are indicative of a need for a particular service. To illustrate, a service provider may process messages from an on-line retailer and determine that an order confirmation includes data describing a product and a delivery date. Using this information, the service provider may generate an automatic reminder for a user that serves to remind the user that the product is to be delivered on a certain day.
Such information derived from the documents and that is used by a service provider to provide services is generally referred to as a “document data collection.” A document data collection can take different forms, depending on how the data are used. For example, a document data collection can be a cluster of documents or a cluster of terms from the documents, where the data are clustered according to a content characteristic. Example content characteristics include the document being a confirmation e-mail from an on-line retailer, or messages sent from a particular host associated with a particular domain, etc. Another type of document data collection is a template that describes content of the set of documents in the form of structural data. Other types of document data collections can also be used.
A service provider may need to analyze and modify the document data collection to improve the performance of the services that utilize the collection. Examination of private data, however, is often prohibited, i.e., a human reviewer cannot view or otherwise have access to the document data collection. Usually, during the generation of the document data collection, any private user information is removed and not stored in the document data collection; regardless, examination by a human reviewer is still prohibited to preclude any possibility of an inadvertent private information leak. While such privacy safeguards are of great benefit to users, analyzing and improving the quality of the document data collection and the services that use the document data collection can be very difficult due to the access restrictions.
A system and method for advanced document redaction are disclosed. According to one embodiment, a system comprises a parser that analyzes documents to identify structured, semi-structured, and unstructured data from a document. A candidates generator generates a list of words for redaction from the structured, semi-structured, and unstructured data. A replacement engine replaces one or more words from the list of words with one or more of a replacement word, random characters, and random numbers.
The above and other preferred features, including various novel details of implementation and combination of elements, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular methods and apparatuses are shown by way of illustration only and not as limitations. As will be understood by those skilled in the art, the principles and features explained herein may be employed in various and numerous embodiments.
The accompanying figures, which are included as part of the present specification, illustrate the various embodiments of the presently disclosed system and method and together with the general description given above and the detailed description of the embodiments given below serve to explain and teach the principles of the present system and method.
While the present disclosure is subject to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. The present disclosure should be understood to not be limited to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
A system and method for advanced document redaction are disclosed. According to one embodiment, a system comprises a parser that analyzes documents to identify structured, semi-structured, and unstructured data from a document. A candidates generator generates a list of words for redaction from the structured, semi-structured, and unstructured data. A replacement engine replaces one or more words from the list of words with one or more of a replacement word, random characters, and random numbers.
The following disclosure provides many different embodiments, or examples, for implementing different features of the subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
The present system brings documents out of a customer environment so that issues can be debugged in the service provider's environment. Without the documents in the service provider's environment, debugging can take more time and expense. Hence, even though a document is a redacted or obfuscated version of a customer's original document, it is important that the service provider's information extraction system work on that document in the same way as it would on the original document. In other words, the extracted content for both documents matches even though the words might be different. For example, if the first name of a person is to be extracted from the text "First Name: Ram", it is important that the label 'First Name' is not redacted, while the value 'Ram' can be redacted.
The present system allows confidential documents to be shipped from client environments to a document services provider with the assurance that the confidential information in those documents has been redacted. This allows the document services system to be better configured with metadata preparations when training machine learning models that benefit from training on real-world data. The present redaction system retains the original look and feel of the document and maintains the grammatical integrity of the text after redaction.
The present system handles redaction of PDF documents and also replaces confidential information in the PDF by taking into account the width of the original characters of the confidential information. The present system also redacts non-OCR data (e.g., images, etc.) that may be confidential, while maintaining the lines that form cells or tables. The present system works on structured and semi-structured data to infer patterns and collocation between different values and their labels. This is helpful for ontology discovery and processing. The present system works on unstructured data to retain grammatical integrity and make the content parser-friendly.
Typically, extraction models are trained on data points such as template keywords, table headers, titles, hierarchy of the document, etc. The confidential data in a document is most often the result of the information extraction process. Accordingly, even if the confidential data is redacted from a set of documents, it will not affect the information extraction model training or application. To further avoid redaction of data points responsible for model training, all such data points can be provided in advance to be safely ignored by the present redaction engine. The present advanced redaction system first finds the data to be redacted and then replaces confidential data with non-confidential data.
According to one embodiment, the information extraction system 260 operates on a large volume of unredacted documents to identify relevant information used to provide services to end users. To illustrate, a service provider may process messages from an on-line retailer and determine that an order confirmation includes data describing a product and a delivery date. Using this information, the service provider may generate an automatic reminder for a user that serves to remind the user that the product is to be delivered on a certain day. The extracted information may also be used for data analytics, report generation, etc. For example, the extracted information may be used to check if an invoice adheres to terms in a contract. The extracted information may also be used to reconcile two documents (e.g., check aggregate total amounts from a table against a total amount mentioned in a paragraph in natural language). The extracted information may also be used to extract information from different reports into a normalized template.
Unredacted documents 210 and redacted documents 240 may be PDF documents. Although the present specification is described with an emphasis on PDF files, the present advanced redaction system also supports other formats such as WORD™ documents, spreadsheets, presentations, HTML files, and text files. Furthermore, although databases 231-234 are individually identified, they may be contained in a single storage system locally or in the cloud.
Using the semi-structured content 322 and unstructured content 323, the candidates generator 332 and candidates generator 334 generate redaction candidate terms 333, 335.
Candidates generator 332, which processes semi-structured data 322, uses semi-structured metadata 331 to generate redaction candidates 333. Candidates generator 334, which processes unstructured data 323, uses NLP metadata 336 to generate redaction candidates 335.
The semi-structured metadata 331 includes the choice between two techniques—Unique Words or Less Frequent Words. The semi-structured metadata 331 also includes a threshold for less frequent words. According to one embodiment, the less frequent word threshold is between 0 and 1 and is the ratio between the number of documents in which the term occurs and the total number of documents. According to one embodiment, the less frequent threshold can be set at 0.1 or 10%. Thus, if a term occurs in fewer than 10% of the documents, system 200 deems the term less frequent.
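The document-frequency test described above can be sketched as follows. This is a minimal illustration, not the implementation of system 200: it assumes each document has already been tokenized into a set of terms, and the function name and default threshold simply mirror the description.

```python
from typing import List, Set


def less_frequent_terms(docs: List[Set[str]], threshold: float = 0.1) -> Set[str]:
    """Return terms whose document-frequency ratio falls below the threshold.

    `docs` is a list of per-document term sets; 0.1 corresponds to the 10%
    example threshold described above.
    """
    total = len(docs)
    if total == 0:
        return set()
    vocabulary = set().union(*docs)
    return {
        term for term in vocabulary
        # A term is "less frequent" when it occurs in fewer than
        # threshold * total documents across the group.
        if sum(1 for d in docs if term in d) / total < threshold
    }
```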
The NLP metadata 336 includes:
The replacement engine 337 uses the replacement metadata 339 to generate replacement words/text for the redaction candidates 333, 335.
The PDF evaluator 338, which is used if the source document 310 is a PDF, evaluates whether the replacement word/text generated by the replacement engine 337 fits accurately in place of the source word/text of document 310. If a replacement word/text/phrase does not fit accurately, the PDF evaluator 338 may choose to find another replacement word/text/phrase from the replacement metadata database 339, according to one embodiment. In another embodiment, the PDF evaluator may reduce the font size of the replacement word/text/phrase to fit within the redacted document 340.
In PDF format, each character in a document has positional information associated with it (e.g., the x and y coordinates, width and height). Hence the present advanced redaction system 200 chooses text/word/phrase replacements that fit the size reserved for the source text/word/phrase in the original document. If this is not addressed, the replacement word may spill over into the next word, or leave a larger-than-normal space between itself and the adjacent word. If such an accurate replacement is not located, the font size is adjusted to fit the space within the PDF document 340 correctly.
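The width-fitting behavior can be illustrated with the sketch below. The per-character width table, tolerance, and base font size are invented example values; a real implementation would read glyph widths from the PDF's positional data rather than a hard-coded table.

```python
# Hypothetical per-character widths in em units (illustrative only).
CHAR_WIDTHS = {"i": 0.28, "l": 0.28, "1": 0.30, "a": 0.56, "m": 0.89}
DEFAULT_WIDTH = 0.56


def text_width(text: str) -> float:
    """Approximate rendered width of a string from per-character widths."""
    return sum(CHAR_WIDTHS.get(c, DEFAULT_WIDTH) for c in text)


def fits(original: str, replacement: str, tolerance: float = 0.05) -> bool:
    """True when the replacement occupies roughly the same width as the original."""
    return abs(text_width(original) - text_width(replacement)) <= tolerance


def adjusted_font_size(original: str, replacement: str, base_size: float = 10.0) -> float:
    """Scale the font size down when no same-width replacement is available."""
    original_w, replacement_w = text_width(original), text_width(replacement)
    return base_size if replacement_w <= original_w else base_size * original_w / replacement_w
```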
Replacement Engine (337)
For every item of confidential data (redaction candidates 333, 335) found in the document 310, the replacement engine 337 carries out the replacements within the document 310 such that the eventual information extraction process performed by information extraction system 260 is unhindered. The replacement engine 337 also ascertains that the text/word/phrase replacements are dimensionally equivalent to the source for PDF files. For redaction candidates 333, 335 that are numerical, the digits are individually randomized.
Replacement engine 337 replaces confidential data using two techniques—dictionaries and randomizing characters in a word.
Using dictionaries of replacement text/words/phrases: For structured/semi-structured data 322, dictionaries are maintained based on the length of the text/word/phrase to be redacted. For example, all three-letter words are grouped together in one dictionary stored in replacement metadata 339. The replacement engine 337 may maintain dictionaries in replacement metadata 339 for words/text/phrases of up to 20 letters. The dictionary entries may be configured or revised using the ARUI (Advanced Redaction User Interface 225). For unstructured data 323, the dictionaries in replacement metadata 339 are further organized by the different part-of-speech (POS) tags that are to be redacted, for example, a three-letter-word dictionary each for nouns, verbs, and adjectives.
Randomizing every character of the word: With this technique, replacement engine 337 replaces every character of the word with a random character or letter of the alphabet.
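Both techniques can be sketched together as below. The dictionary entries are hypothetical samples (real dictionaries would live in replacement metadata 339 and be editable through the ARUI), and the POS-keyed dictionaries used for unstructured data 323 would add another lookup level not shown here.

```python
import random
import string

# Illustrative length-keyed replacement dictionaries (stand-ins for metadata 339).
REPLACEMENTS_BY_LENGTH = {
    3: ["fox", "hat", "oak"],
    5: ["apple", "mango", "perch"],
    6: ["walnut", "pewter", "copper"],
}


def replace_word(word: str) -> str:
    """Pick a same-length dictionary replacement; otherwise randomize every character."""
    candidates = REPLACEMENTS_BY_LENGTH.get(len(word))
    if candidates:
        replacement = random.choice(candidates)
        # Preserve leading capitalization so the document keeps its look and feel.
        return replacement.capitalize() if word[:1].isupper() else replacement
    # Fallback: the second technique, randomizing each character of the word,
    # keeping digits as digits and letters as letters.
    return "".join(
        random.choice(string.digits) if c.isdigit() else random.choice(string.ascii_lowercase)
        for c in word
    )
```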
According to another embodiment, the present system 300 randomizes selected words and phrases. Given a list of words/text/phrases that represent confidential information, every occurrence of those words/text/phrases is randomized at a character level. Such obfuscation is achieved by randomizing each character of those words/text/phrases appearing in the document. System 300 performs the obfuscation such that the original and the replaced words/text/phrases take the same amount of space on the page. For example, if the letter 'i' is replaced with the letter 'x', the resulting width of the word may be larger than the original word. Accordingly, system 300 considers the width of the letter 'i' and replaces it with a letter of similar width (e.g., 'l', '1', etc.). Also, if the appropriate flag is enabled, system 300 randomizes digits across the document. System 300 allows for a list of exceptions (e.g., words/text/phrases) that should not be redacted (e.g., dates in a financial statement that are signaled by a particular word, etc.). For example, "Name: Ramesh" becomes "Name: Kjhgfd" and "Age: 30 Yrs" is not changed/redacted/obfuscated.
According to another embodiment, the present system 300 randomizes everything in a document except metadata keywords identified in advance (e.g., name, age). This process applies redaction at its fullest and still allows the information extraction pipeline to function. The present system avoids redaction of metadata and specific patterns (e.g., dates in financial statements). For example, "Name: Ramesh" becomes "Name: Kjhgfd" and "Age: 30 Yrs" becomes "Age: 41 Xyz".
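A combined sketch of the character-level randomization and the keyword-exception behavior described in the two preceding paragraphs follows. The width groups, the `ignore_keywords` parameter, and the sample output are assumptions chosen for illustration; a real implementation would use glyph metrics from the PDF rather than hand-picked character pools.

```python
import random

# Hypothetical width groups for replacement characters.
NARROW = "iljt"
WIDE = "mw"
DIGITS = "0123456789"
OTHER = "abcdefghknopqrsuvxyz"


def random_char_like(c: str) -> str:
    """Replace a character with a random one of the same class and similar width."""
    if c.isdigit():
        return random.choice(DIGITS)
    if not c.isalpha():
        return c  # keep punctuation such as ':' so layout and labels survive
    pool = NARROW if c.lower() in NARROW else WIDE if c.lower() in WIDE else OTHER
    replacement = random.choice(pool)
    return replacement.upper() if c.isupper() else replacement


def obfuscate(text: str, ignore_keywords: set) -> str:
    """Randomize every word except the metadata keywords supplied in advance."""
    words = []
    for word in text.split(" "):
        if word.rstrip(":") in ignore_keywords:
            words.append(word)  # e.g., labels such as 'Name:' or 'Age:' are kept
        else:
            words.append("".join(random_char_like(c) for c in word))
    return " ".join(words)


# Example mirroring the description above; output is random, so a run might
# yield something like "Name: Kqjgwd Age: 73 Xvz".
print(obfuscate("Name: Ramesh Age: 30 Yrs", ignore_keywords={"Name", "Age"}))
```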
Advanced Redaction User Interface (ARUI) (225)
An ARUI is provided where a user can manually apply or undo redaction on specific terms of the document. Additionally, the user may configure the replacement engine metadata 339, specifically the dictionaries of replacements 411 and the ignore keywords 412, 460, as well as the Semi-Structured Metadata 331 and the NLP Metadata 336.
Candidates Generator (221)
As explained above, candidates generator 221 is divided into two categories depending on the data it handles (e.g., structured/semi-structured data 322 and unstructured data 323). It is important to note that for the redaction processes described in this specification, each digit of a number that is a redaction candidate 333, 335 is individually randomized.
The process for generating redaction candidates 333 from structured/semi-structured data considers both unique words and less frequent words.
Unique words: For unique words found throughout the document 310 by the candidates generator 332, the replacement engine 337 finds their replacements from the replacement metadata 339. Such replacements are applied over each occurrence of the unique word across the document 310. Additionally, the replacement engine 337 excludes from redaction unique words that were used in the model training exercise. These unique words are metadata for the information extraction engine 320 and are utilized by the information extraction pipeline to extract the structured, semi-structured, and unstructured information. If the POS metadata is redacted, the respective information will not be extracted accurately by the system 300. The information extraction engine 320 uses metadata and user configurations to configure the system 300 for a particular project/customer. These two together are referred to as model training. The content of the model training data does not have confidential information, so the content can be safely skipped from redaction. In other words, if the candidates generator 332, 334 generates a set of unique words for redaction, the set of unique words is filtered based on the content considered during model training.
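One way to sketch this filtering step is shown below. It is an assumption-laden illustration: "unique words" is read here as terms occurring exactly once in the document, and the training-keyword set is a hypothetical example of the vocabulary the extraction model depends on.

```python
from collections import Counter
from typing import List, Set

# Hypothetical label vocabulary from model training; in the system described
# above these terms come from metadata and user configuration.
TRAINING_KEYWORDS = {"first", "name", "account", "age", "invoice", "total"}


def unique_word_candidates(tokens: List[str]) -> Set[str]:
    """Terms occurring exactly once in the document, minus training keywords."""
    counts = Counter(t.lower() for t in tokens)
    return {t for t, n in counts.items() if n == 1 and t not in TRAINING_KEYWORDS}
```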
Less frequent words: Candidates generator 332 works on a group of similar documents or documents from the same template. For example, documents similar to document 610 are overlaid on top of each other to find the similar and dissimilar content. Candidates generator 615 identifies the terms "Account," "Name," "Age" and "Yrs" as frequently occurring terms in the template. Hence, candidates generator 615 ignores these terms and redaction engine 625 only replaces the words "Ramesh" and "Suresh." The candidates generator 615 thus generates the terms that are unique or less frequent in the group of documents. Frequently occurring terms and phrases are not confidential, while less frequent terms/phrases are confidential. Here the candidates generator 615 considers the frequency of terms across documents and not the frequency of terms within the document 610. A definable document overlap threshold, the percentage of documents in which a term occurs, determines whether it is a less frequent term to be replaced. This document overlap threshold is specified in semi-structured metadata 331. Candidates generator 675 operates in a similar manner as candidates generator 615 for identifying less frequent terms in a document 670. Replacement engine 685 then replaces the less frequent terms with randomized characters.
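The overlay-and-compare step can be illustrated with the "Ramesh"/"Suresh" example above. In this sketch the tokenization, the 0.6 overlap threshold, and the length-preserving 'X' placeholder are all assumptions chosen so the two-document example works; the actual system would draw the threshold from semi-structured metadata 331 and use the replacement engine rather than padding.

```python
docs = [
    "Account Name: Ramesh Age: 30 Yrs",
    "Account Name: Suresh Age: 45 Yrs",
]
# Per-document term sets, ignoring the trailing ':' on labels.
tokenized = [set(d.replace(":", "").split()) for d in docs]
threshold = 0.6  # illustrative document overlap threshold


def is_less_frequent(term: str) -> bool:
    """A term is less frequent when it appears in fewer than threshold * total documents."""
    return sum(term in doc for doc in tokenized) / len(tokenized) < threshold


for doc in docs:
    redacted = " ".join(
        "X" * len(w) if is_less_frequent(w.rstrip(":")) else w for w in doc.split()
    )
    print(redacted)
# "Account", "Name:", "Age:" and "Yrs" survive, while "Ramesh"/"Suresh"
# and the ages are replaced with same-length placeholders.
```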
Candidate generator 334 generates candidates 335 from unstructured data 323 with the following:
POS Tags: In this process, candidates generator 334 breaks down unstructured content/data 323 into sentences, and each sentence is then subjected to a POS (part-of-speech) tagger. The candidates for redaction 335 are generated based on NLP metadata 336. The replacement engine 337 analyzes each sentence. Replacement engine 337 generates replacements for specific POS tags as identified in replacement metadata 339, and specifically in its dictionaries 411.
Parse Trees: This process focuses more on the grammatical integrity of the sentence after replacements. Here the words are replaced based on the POS tag dictionary from POS process 700, and the replaced sentence is additionally tested to determine whether its parse overlaps with the parse of the original sentence.
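A minimal sketch of the POS-tag and grammatical-integrity steps is shown below, assuming NLTK (with its tokenizer and tagger data) is available. A full parse-tree overlap test would require a constituency or dependency parser; comparing POS tag sequences is used here only as a lightweight proxy, and the replacement dictionary entries are hypothetical.

```python
import random

import nltk  # assumes the 'punkt' and 'averaged_perceptron_tagger' data are installed

# Illustrative dictionary of same-length proper-noun replacements
# (a stand-in for replacement metadata 339 / dictionaries 411).
PROPER_NOUN_REPLACEMENTS = {5: ["Arjun", "Kiran"], 6: ["Walnut", "Copper"]}


def redact_proper_nouns(sentence: str) -> str:
    """Replace proper nouns with same-length dictionary entries, leaving other words intact."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    out = []
    for word, tag in tagged:
        if tag.startswith("NNP") and len(word) in PROPER_NOUN_REPLACEMENTS:
            out.append(random.choice(PROPER_NOUN_REPLACEMENTS[len(word)]))
        else:
            out.append(word)
    return " ".join(out)


def grammatical_integrity_preserved(original: str, redacted: str) -> bool:
    """Lightweight stand-in for the parse-overlap test: compare POS tag sequences."""
    tags = lambda s: [t for _, t in nltk.pos_tag(nltk.word_tokenize(s))]
    return tags(original) == tags(redacted)
```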
While the present disclosure has been described in terms of particular embodiments and applications, in summarized form, it is not intended that these descriptions in any way limit its scope to any such embodiments and applications, and it will be understood that many substitutions, changes and variations in the described embodiments, applications and details of the method and system illustrated herein and of their operation can be made by those skilled in the art without departing from the scope of the present disclosure.
This application is a continuation of application Ser. No. 16/373,216, filed on Apr. 2, 2019, which is incorporated herein by reference in its entirety.