Text is frequently received electronically in a form that is not textually editable. For instance, data representing an image of text may be received, where the data was generated by scanning a hardcopy of the text using a scanning device. The text is not textually editable because the data represents an image of the text, as opposed to representing the text itself in a textually editable and non-image form, and thus the text cannot be edited using a word processing computer program, a text editing computer program, and so on. To convert the data to a textually editable and non-image form, optical character recognition (OCR) may be performed on the image, which generates data representing the text in a textually editable and non-image form, so that the data can be edited using a word processing computer program, a text editing computer program, and so on.
As noted in the background section, data can represent an image of text, as opposed to representing the text itself in a textually editable and non-image form that can be edited using a word processing computer program, a text editing computer program, and so on. To convert the data to a textually editable and non-image form, optical character recognition (OCR) may be performed on the image. Performing OCR on the image generates data representing the text in a textually editable and non-image form, so that the data can be edited using a computer program like a word processing computer program or a text editing computer program.
OCR can result in errors within the text in non-image form when stray marks on the image of the text are improperly considered during OCR. As one example, the image of the text may be a scanned image of a page of a book. A person may have written notes in the margin of the page, and due to the age of the book, there may be dirt or other debris outside the text on the page. When OCR is performed, these stray marks are treated as part of the actual text of the page of the book, and the OCR process attempts to convert the images of the stray marks into text in non-image form. OCR is typically unsuccessful at converting such stray marks, and even if handwritten notes in particular are accurately converted into text in non-image form, they are typically not desired within the non-image form of the actual text of the page of the book.
Disclosed herein are techniques to remove characters that correspond to such handwritten notes and other stray marks within an image of text, from the data representing the text in non-image form. Data representing an image of text is received, as is data representing the text in non-image form, which may have been generated using OCR. A valid content boundary is determined within the image of the text. For each character within the text in the non-image form, the location of the character within the image of the text is determined. Where the location of the character within the image falls outside the valid content boundary, the character is removed from the data representing the text in the non-image form.
Data representing an image of text, and data representing the text in non-image form, are received (102). The data representing the image of the text may be bitmap data in BMP, JPG, or TIF file format, among other image file formats. The data representing the image is not textually editable by computer programs like word processing and text editing computer programs. By comparison, the data representing the text in non-image form may be formatted in accordance with the ASCII or Unicode standard, for instance, and may be stored in a TXT, DOC, or RTF file format, among other text-oriented file formats. The data can include one or more bytes for each character of the text, in accordance with a standard like the ASCII or Unicode standard, among other standards commonly used to represent such characters.
For example, consider a letter “q” in the text. A collection of pixels corresponds to the location of this letter within the image of the text. If the image is a black-and-white image, each pixel is on or off, such that the collection of on-and-off pixels forms an image of the letter “q.” Note that this collection of pixels may differ depending on how the image was generated. For instance, one scanning device may scan a hardcopy of the text such that there are few or no artifacts (i.e., extraneous pixels) within the part of the image corresponding to the letter “q.” By comparison, another scanning device may scan the hardcopy such that there are more artifacts within the part of the image corresponding to this letter.
A user is easily able to identify the part of each image as corresponding to the letter “q.” However, the portions of the images corresponding to the letter “q” are not identical to one another, and do not conform to any standard. As such, without performing a process like OCR, a computing device is unable to discern that the portion of each image corresponds to the letter “q.”
By comparison, consider the letter “q” within the text in a non-image form that is textually editable. The letter is represented in accordance with a standard, like the ASCII or Unicode standard, by which different computing devices know that this letter is in fact the letter “q.” A computing device is thus able to discern that the portion of the data representing this letter within the text indeed represents the letter “q.”
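To make the distinction concrete, the following sketch contrasts the two representations of the letter “q.” The pixel pattern is invented for illustration, and no standard dictates it; the byte value 0x71, by contrast, is fixed by the ASCII and Unicode standards.

```python
import numpy as np

# Image form: an arbitrary on/off pixel pattern (invented for illustration).
# Another scan of the same letter would differ pixel by pixel.
glyph = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
])

# Non-image form: every ASCII- or UTF-8-encoded document represents the
# letter "q" with the same byte, 0x71, so any computing device can discern it.
assert "q".encode("ascii") == b"\x71"
```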
The data representing the text in non-image form may be generated from the image of the text (104). For example, the data representing the image of the text may be input to an OCR engine (106). The OCR engine is a computer program that performs OCR on the image of the text. In response, output is received from the OCR engine (108). This output is the data representing the text in non-image form.
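As a concrete illustration of parts 104 through 108, the following sketch uses the open-source Tesseract engine via the pytesseract wrapper; the patent does not prescribe a particular OCR engine, and the file name is hypothetical. Note that Tesseract can also report where each recognized character sits within the image, which is the kind of location information that part 112, discussed below, relies on.

```python
from PIL import Image
import pytesseract  # wrapper around the open-source Tesseract OCR engine

# Part 106: input the data representing the image of the text to the OCR engine.
image = Image.open("scanned_page.png")  # hypothetical file name

# Part 108: receive the output from the OCR engine, i.e., the data
# representing the text in non-image form (here, a Unicode string).
text = pytesseract.image_to_string(image)

# Per-character locations, one "char left bottom right top page" line per
# character, with the origin at the bottom-left corner of the image.
boxes = pytesseract.image_to_boxes(image)
```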
A valid content boundary is determined within the image of the text (110). The valid content boundary is the boundary around a portion of the image of the text that corresponds to the actual desired text (i.e., the valid content) within the image. By comparison, other portions of the image, which are outside of the valid content boundary, do not correspond to the actual desired text. These other portions may include images of stray marks, such as images of dirt and debris, as well as images of handwritten notes.
A valid content boundary 204 is determined within the image 200. The valid content boundary 204 surrounds the image 202 of the actual text of the page, such that the image 202 is located within the content boundary 204. By comparison, the valid content boundary 204 excludes the images 206 and 208, which are not images of the actual text of the page. That is, the images 206 and 208 are located outside the valid content boundary 204.
It is noted that the data representing the text in non-image form includes data representing text of the image 202, as well as data representing purported text of the images 206 and 208. For instance, OCR may have been performed on the entire image 200, including the images 206 and 208 as well as the image 202. As such, the data representing text in non-image form generated via this OCR includes data representing text of the images 206 and 208 as well as text of the image 202.
In general, the valid content boundary can be determined independently of the OCR engine that may have generated the data representing the text in non-image form from the data representing the image of the text. That is, how the text in non-image form is generated from the image of the text does not affect how the valid content boundary is determined. Furthermore, the valid content boundary can be determined after the text in non-image form has been generated from the image of the text.
In addition, a method 300 can be used to determine the valid content boundary where the image of the text is one of a number of images corresponding to the pages of a book.
If a given image corresponds to an odd-numbered page, then the valid content boundary for this given image is determined based on the images that correspond to odd-numbered pages (304). Likewise, if a given image corresponds to an even-numbered page, then the valid content boundary for this given image is determined based on the images that correspond to even-numbered pages (306). The distinction between odd-numbered and even-numbered pages is made because the left and right margins of odd-numbered pages typically are different from the left and right margins of even-numbered pages, due to the attachment of the pages to the spine of the book.
For instance, consider images 352 that correspond to the even-numbered pages of a book, and an image 354 that corresponds to one of the odd-numbered pages of the book.
Therefore, determining the valid content boundary for the image 352 of the even-numbered page can be performed by locating the boundary of the text—including stray marks like handwritten notes and dirt—on each even-numbered page, and averaging the boundary across these pages. Even if some of the images 352 include stray marks, on average the proper valid content boundary will be located. The same process can be performed for the image 354 of the odd-numbered page, in relation to the other odd-numbered pages. The odd-numbered pages are processed separately from the even-numbered pages, though, because the inner margins of the former are different from the latter, as explained above.
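A minimal sketch of this per-parity averaging follows, assuming each page is available as a grayscale pixel array; the darkness threshold of 128 and the helper names are assumptions for illustration, not part of the method 300.

```python
import numpy as np

def content_bbox(page, dark_threshold=128):
    """Bounding box (left, top, right, bottom) of all dark pixels on one
    page, including any stray marks; page is a 2-D grayscale array
    (0 = black, 255 = white)."""
    rows, cols = np.where(page < dark_threshold)
    return (cols.min(), rows.min(), cols.max(), rows.max())

def average_boundary(pages):
    """Average the per-page boundaries: even when some pages carry stray
    marks, the average tends toward the proper valid content boundary."""
    boxes = np.array([content_bbox(p) for p in pages], dtype=float)
    return tuple(boxes.mean(axis=0))

def valid_content_boundaries(pages):
    """Parts 304 and 306: process odd- and even-numbered pages separately,
    since their margins differ (pages are numbered from one)."""
    odd = average_boundary([p for i, p in enumerate(pages) if i % 2 == 0])
    even = average_boundary([p for i, p in enumerate(pages) if i % 2 == 1])
    return odd, even
```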
For instance, a method 400 can determine the valid content boundary on a line-by-line basis, by locating white space, greater than a threshold, that separates undesired content from an image 456 of the actual desired text.
The threshold is generally set to be greater than the amount of white space that may separate content within the image 456 of actual desired text. For example, words of the actual text are separated by single spaces, and sentences may be separated by double spaces. Paragraphs may further begin with a tab's worth of white space. As such, the threshold of white space that separates undesired content from actual content is set so that it is greater than the largest amount of these types of white space.
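The following sketch applies this test to one line of the image; the ink test and the helper name are assumptions. A run of all-white pixel columns wider than the threshold splits the line, so that marginal content separated from the actual text by more than the widest legitimate space or tab ends up in its own span, outside the valid content.

```python
import numpy as np

def split_on_wide_gaps(line, gap_threshold):
    """Column spans (start, end) of content on one line of the image, split
    wherever a run of white columns wider than gap_threshold occurs.

    line: 2-D grayscale array for one line of text (0 = black)."""
    has_ink = (line < 128).any(axis=0)  # True for columns containing any ink
    spans, start, last_ink = [], None, None
    for x, ink in enumerate(has_ink):
        if ink:
            if start is None:
                start = x                               # open a content span
            elif x - last_ink - 1 > gap_threshold:
                spans.append((start, last_ink + 1))     # gap too wide: split
                start = x
            last_ink = x
    if start is not None:
        spans.append((start, last_ink + 1))
    return spans
```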
The methods 300, 400, and 500 that have been described can be utilized together or individually. As an example, the method 300 may be used to initially set the valid content boundary for an image of text corresponding to a page of a book. The method 400 may then be performed to fine-tune the valid content boundary set by the method 300. If there is too much extraneous content within the page, such as if the page has been extensively marked up with handwritten notes, then the user may be asked to specify the valid content boundary via the method 500. For instance, if the line-by-line determination of the valid content boundary results in the valid content boundary for more than a given number of lines deviating from the average valid content boundary for each line by more than a threshold, the user may be asked to specify the valid content boundary by the method 500.
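For the last of these conditions, a sketch of the deviation check follows, with assumed data structures: the per-line boundaries produced by the line-by-line determination are compared against their average, and the method 500 is invoked when too many lines deviate.

```python
def needs_user_specified_boundary(line_boundaries, deviation, max_lines):
    """line_boundaries: per-line (left, right) extents from the line-by-line
    determination; returns True when the user should be asked to specify
    the valid content boundary."""
    n = len(line_boundaries)
    avg_left = sum(left for left, _ in line_boundaries) / n
    avg_right = sum(right for _, right in line_boundaries) / n
    deviating = sum(
        1
        for left, right in line_boundaries
        if abs(left - avg_left) > deviation or abs(right - avg_right) > deviation
    )
    return deviating > max_lines
```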
Referring back to the method 100, the location of each character within the text in non-image form is determined within the image of the text (112), and is compared against the valid content boundary.
If the location of the character falls outside the valid content boundary within the image of the text, then this character is removed from the data representing the text in non-image form (116). By comparison, if the location of the character falls inside the valid content boundary within the image of the text, then this character is kept within the data representing the text in non-image form (118). In this way, portions of the text in non-image form that correspond to portions of the image of the text that themselves correspond to stray marks, and not to the actual desired text of the image, are removed on a character-by-character basis.
In some situations, however, even if the location of the character falls outside the valid content boundary within the image of the text, the character may be retained within the data representing the text in non-image form in part 116. This is because in some cases, characters outside the valid content boundary may nevertheless be valid. To check for this situation, in part 116 the font of the character outside the valid content boundary can be compared to one or more fonts of the characters located within the boundary. If the font of the character outside the valid content boundary matches any of these fonts, and if the character is part of a word that is found in a dictionary, then the character is retained in part 116 within the data representing the text in non-image form.
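A sketch of parts 112 through 118, including this retention check, follows. The per-character data structure is assumed (it could, for instance, be built from an OCR engine's per-character boxes), and the font comparison is approximated by glyph height for simplicity; the patent does not prescribe how fonts are compared.

```python
def box_inside(box, boundary):
    """True if a character's bounding box (left, top, right, bottom) falls
    inside the valid content boundary."""
    left, top, right, bottom = box
    b_left, b_top, b_right, b_bottom = boundary
    return (left >= b_left and top >= b_top
            and right <= b_right and bottom <= b_bottom)

def filter_characters(chars, boundary, dictionary):
    """chars: one (character, box, word, glyph_height) tuple per recognized
    character, where word is the word the character belongs to."""
    # Glyph heights observed inside the boundary stand in for the fonts of
    # the characters located within the boundary.
    inside_heights = {h for _, box, _, h in chars if box_inside(box, boundary)}
    kept = []
    for ch, box, word, height in chars:
        if box_inside(box, boundary):
            kept.append(ch)             # part 118: keep the character
        elif height in inside_heights and word in dictionary:
            kept.append(ch)             # retention check: matching font, and
                                        # part of a dictionary word
        # otherwise part 116: remove the character
    return "".join(kept)
```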
Pursuant to part 112 of the method 100, the location of each character 614 and 612 within the images 608 and 606 represented by the data 600 is determined. Each character 614 is located outside the valid content boundary 610, and therefore pursuant to part 116 is removed from the data 602 representing the text in non-image form. By comparison, each character 612 is located inside the valid content boundary 610, and therefore pursuant to part 118 is kept within the data 602 representing the text in non-image form. After performance of part 112, then, the data 602 representing the text in non-image form includes just the characters 612, and not the characters 614, which have been removed.
In conclusion, the techniques that have been described can be implemented by a system that includes a processor 701, a non-transitory computer-readable data storage medium 702, an OCR engine 704, and logic 706.
The computer-readable medium 702 stores data 708 representing an image of text, as well as data 710 representing this text in non-image form. The OCR engine 704 may be used to generate the data 710 from the data 708. As such, the OCR engine 704 may be a computer program that is stored on the non-transitory computer-readable data storage medium 702, or on another computer-readable medium, and that is executed by the processor 701.
The logic 706 is executed by the processor 701. As such, the logic 706 may be implemented as one or more computer programs stored on the computer-readable medium 702, or another computer-readable medium. The logic 706 performs the method 100, and may perform the methods 300, 400, and/or 500 as part of performing the method 100. As such, the logic 706 determines a valid content boundary within the image of the text represented by the data 708, and removes those characters from the text in non-image form represented by the data 710 that fall outside this boundary.