1. Field
The present invention relates to functions traditionally associated with converting printed papers into electronic images for processing or storage within a computer system. Flatbed scanners and multifunction devices (MFPs) are frequently used for this task. Embodiments of the invention relate particularly to improving scanning speed and reducing the number of scanning passes required to process documents or other items.
2. Related Art
A document to be processed by optical character recognition (OCR) is usually first presented as an electronic image. Likewise, a printed picture to be stored in a computer system is usually first presented as an electronic image. Such images are typically obtained from a flatbed scanner or a multi-function printer and scanner (MFP).
The flatbed scanner or MFP usually includes a carriage assembly that transforms a luminous flux into an electronic image, a stepping motor assembly that moves the carriage, and a main electronic board that forms and processes the electronic image and possibly sends the electronic image to the computer system through an interface cable, a local network cable, or wirelessly across a network via one or more network protocols.
When a user works with a flatbed scanner, the user first triggers a preview to see where the paper documents lie on the bed 106 of the apparatus 100. The user must see how these papers or documents 102 are positioned on the bed 106 before manually selecting or indicating one or more scanning areas. During the preview, the scanner moves its carriage (not shown) quickly from the front to the back of the bed 106 or base 104, from a starting position to an end position, with a lamp on the carriage turned on. The apparatus performs the scanning and produces an electronic image of substantially low resolution. Next, the carriage returns to its starting position; on the way back, to preserve lamp life and reduce power consumption, the lamp on the carriage is turned off and no scanning is performed. The preview image is sent to an associated computer.
Next, the user views the result of the preview on a screen of the computer associated with the scanner. The user then selects one or more scanning areas and initiates a scan, typically at high resolution, to obtain an electronic image of sufficient quality for OCR, storage as an electronic file, printing, sending by email, and so forth.
When the scanner 100 scans at high resolution, it again moves its carriage from the front to the back of the bed 106, this time slowly. The lamp is turned on, and the scanner produces one or more electronic images at a resolution higher than that of the preview scan. Next, the carriage returns to its initial position. When a user places two or more papers on the scanner bed 106, the user must start the process over for each fresh loading of the bed, because the scanner 100 has no way to detect where the next paper document is placed on the bed 106, or the dimensions or orientation of the paper document(s) freshly loaded onto the scanner bed 106. After another initial scan, the user again must select one or more areas for a high resolution scan before electronic images can be captured.
Consequently, users are required to operate the scanner twice per document: first a preliminary scan and then a scan at high resolution. Also, users generally must have a computer connected to or associated with the scanner in order to control the scanner and the scanning process and to acquire the scanned images.
On the associated computer, for a collection of images, the user is required to open each image in turn with the assistance of an image editing software program. The user must manually crop, deskew, remove digital noise, correct the gamma value, invent a name for the file, and initiate a save operation to persist the image as a computer file. Such a scenario is repeated millions of times every day; it is tedious and ripe for automation.
In response, the scanner performs a high resolution scan of all selected areas 520. Images or files are sent to the associated computer 522. The user usually opens the images in an image editor 524, at least to confirm that images of sufficient quality were captured. The user works through the captured images serially, such as by choosing a next image 526. Often, the user optionally and manually crops the images captured at the higher resolution 528, manually deskews the images 530, manually removes noise from the images 532, and manually corrects the gamma value associated with the images 534. Next, the user must invent a filename for each file 536 and save each file in an electronic store 538. The user typically works through the images sequentially until each image is saved. After the last image 540, the user closes the image editor 542 and further manually processes the saved images 544, such as by performing OCR on one or more of the images, sending the images via an email client, printing the images, or sharing the images through a social networking website.
If a user works with an MFP, the user cannot perform a preview. In this case, the user selects a functional button on the MFP, for example a button or menu selection corresponding to “Scan to folder” or “Scan and E-mail.” The user also usually must adjust or select one or more settings of the MFP. For example, the user must select a destination folder on the local network or enter an e-mail address, settings required by the previously mentioned functions.
Typically, when a functional button is pressed by a user, the MFP moves its carriage from the front to the back of the bed with a lamp on the carriage turned on. The MFP apparatus scans its entire bed area and makes one electronic image at high resolution. Next, the carriage returns to its initial position. At this point, the lamp on the carriage is turned off, and scanning ceases.
In this scenario, the user does not have an opportunity to review an initial scan, preview the results of the MFP scanning, or select an area different from the entire bed of the MFP for scanning. Instead, the user must go to his computer to view the scanned image that has been saved or emailed by the MFP. Upon viewing this image, the user can then optionally crop, deskew, OCR or perform other operations on the image. However, if the scanning were of a book, for example, and failed to capture the usable portion of the paper document, the user must return to the MFP and perform the scanning again.
Once the images are located, the user usually opens them in an image editor 570, at least to confirm that images of sufficient quality were captured. Often, the user optionally and manually crops the images 572, manually deskews the images 574, manually removes noise from the images 576, and manually corrects a gamma value associated with the images 578. The user may close the image editor 580. Next, the user must invent a filename for each file 582, such as one based on its content, and save each file under its invented filename in an electronic store 584. The user typically works through the images sequentially until each image is saved. When finished naming the files, the user may further manually process the saved images, such as by performing OCR on one or more of the images, sending the images via an email client, printing the images, or sharing the images through a social networking website.
Accordingly, there is substantial opportunity to improve the highly manual process of scanning paper documents and working with images of such documents.
Methods and devices are described for detecting boundaries of documents on flatbed and multi-function scanners on a first pass of a carriage assembly, and then performing a high resolution scan on a second pass. High resolution images of documents can then be obtained with little or no interaction normally necessary to identify areas of interest on the scanner bed. Patterns on the scanner cover or lid facilitate not only edge determination, but orientation of text and other objects, and straightening of images in preparation for OCR and related functions. Certain functions such as functions related to OCR may be automated. For example, images and electronic files derived from paper documents may be automatically cropped, deskewed, OCR'd and named consistent with content or other information derived from paper documents.
Any of a variety of patterns may be useful when applied to a scanner cover or lid. A pattern may be an image that includes one or more fragments or cyclically repeated designs. For example, a pattern may be a regular grid formed with uniform lines, a set of stripes running parallel with the scanner bed, two sets of parallel lines, a set of unevenly spaced lines, or a series of dashes, dots or other shapes arranged in a variety of configurations.
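The description does not tie recognition of such a pattern to any particular algorithm. As one hedged illustration only (the autocorrelation approach, the NumPy dependency and the 0.5 threshold are assumptions introduced here, not part of the disclosure), a repeating stripe or grid background can be distinguished from document content by how strongly a preview scanline correlates with itself at a nonzero lag:

```python
# Illustrative only: distinguish "patterned background" scanlines from
# "document" scanlines in a preview image by measuring how periodic a row is.
# The autocorrelation approach and the 0.5 threshold are assumptions.

import numpy as np

def row_periodicity(row: np.ndarray) -> float:
    """Score (roughly 0..1) of how strongly a 1-D intensity row repeats itself."""
    x = row.astype(float) - row.mean()
    if not x.any():                     # perfectly uniform row: no periodicity
        return 0.0
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    ac = ac / ac[0]                                     # normalize so lag 0 == 1
    return float(ac[1:].max())          # strongest self-similarity at lag > 0

def looks_like_background(row: np.ndarray, threshold: float = 0.5) -> bool:
    # A regular grid or stripe pattern correlates strongly with itself at the
    # lag equal to its period; rows crossing a document generally do not.
    return row_periodicity(row) > threshold
```

Rows dominated by the repeating lid pattern score high, while rows crossing a document score low, giving one coarse way to mark candidate document regions before the high resolution pass.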
While the appended claims set forth the features of the present invention with particularity, the invention, together with its objects and advantages, will be more readily appreciated from the following detailed description, taken in conjunction with the accompanying drawings. Throughout, like numerals refer to like parts with the first digit of each numeral generally referring to the figure which first illustrates the particular part.
Broadly, embodiments and techniques of the present invention disclose or relate to methods for reducing time, resources and power associated with operation of a scanner or multi-function printer (MFP) or other device with scanning/image-capture capability when capturing images from paper documents. Further, certain functions may become automated.
In operation, the scanner 200 performs an initial scan across the bed 106, such as at a low resolution. Such an initial scan can be accomplished quickly. Immediately afterward, the scanner 200 can acquire or identify a cropped region 308. Then, instead of waiting for further interaction with a user, the scanner 200 or associated computer (i.e., software or a portion of the operating system active on the associated computer) can immediately perform a second, high resolution scan using information, patterns or data derived from the patterned background 306. The carriage of the scanner (not shown) is not even required to return to an initial position. A single round-trip pass, forward and backward, of the scanner 200 can yield high resolution images or images prepared for further processing without manual input from a user. After the initial pass, and after identifying regions of interest, the scanner 200 can scan one or more areas, such as the cropped region 308, that likely include one or more documents destined for capture or scanning. In post processing, a user or a set of program instructions can then crop and perform other actions on the high resolution image or images.
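As a rough sketch of this single round-trip flow (and nothing more), the snippet below strings the steps together. The Scanner object and its scan_low_res/scan_region_high_res methods are hypothetical placeholders rather than a real device API, and find_document_regions stands in for whatever background-based detection is used.

```python
# Control-flow sketch of the single round-trip scan described above. The
# Scanner object and its methods are hypothetical placeholders, not a real
# device API; find_document_regions stands in for background-based detection.

def single_pass_capture(scanner, find_document_regions):
    # Forward stroke: fast, low-resolution pass over the whole bed 106.
    preview = scanner.scan_low_res()                     # hypothetical call

    # Locate likely documents by contrasting them with the patterned lid 306.
    regions = find_document_regions(preview)             # [(x, y, width, height), ...]

    # Return stroke: high-resolution capture limited to the detected regions,
    # so the carriage never needs a second round trip.
    return [scanner.scan_region_high_res(r) for r in regions]   # hypothetical call
```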
In general, the pattern may be formed consistent with any checked paper, cross-hatched paper or lined paper. All the elements of a pattern can be in one color or different colors. The elements can be printed, applied with paint, applied to a scanner cover or lid with glue or in any other way, or formed into the plastic, metal or other material of the scanner cover or lid. The pattern may be part of a separate plate or sheet that is placed on top of or behind the items to be scanned.
In contrast to
As the carriage moves in a second, return or backward direction, the scanner captures or makes a high resolution scan at step 610. Next, skew angle, gamma and noise are automatically or programmatically determined for each document. All images corresponding to documents on the scanner are optionally and automatically cropped at step 614. Any existing skew is optionally and automatically adjusted at step 616. Noise from each document image is optionally and automatically adjusted or removed at step 618. Gamma corrections are optionally and automatically made for each document at step 620. Moreover, all of these operations 614, 616, 618, 620 can be performed for each document independently. For example, a first document can be processed with gamma 1.2 and noise removal. At the same time, a second document can be processed with gamma 1.05 and with no noise removal.
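The independence of steps 614, 616, 618 and 620 across documents can be pictured as a small per-document settings object dispatched to pluggable operations. The names below (DocSettings, crop_fn, deskew_fn, denoise_fn, gamma_fn) are placeholders introduced for illustration only, not part of the disclosure.

```python
# Sketch of steps 614-620 applied independently to each document. DocSettings
# and the pluggable functions (crop_fn, deskew_fn, denoise_fn, gamma_fn) are
# placeholders for whatever implementation the scanner or host computer uses.

from dataclasses import dataclass

@dataclass
class DocSettings:
    gamma: float = 1.0
    remove_noise: bool = False
    deskew: bool = True
    crop: bool = True

def process(image, s, crop_fn, deskew_fn, denoise_fn, gamma_fn):
    out = image
    if s.crop:
        out = crop_fn(out)               # step 614: crop to the document edges
    if s.deskew:
        out = deskew_fn(out)             # step 616: rotate by this document's skew
    if s.remove_noise:
        out = denoise_fn(out)            # step 618: remove noise
    if s.gamma != 1.0:
        out = gamma_fn(out, s.gamma)     # step 620: per-document gamma
    return out

# Matching the example above: the first document at gamma 1.2 with noise
# removal, the second at gamma 1.05 without it.
# a = process(img_a, DocSettings(gamma=1.2, remove_noise=True), crop, deskew, denoise, gamma)
# b = process(img_b, DocSettings(gamma=1.05), crop, deskew, denoise, gamma)
```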
In a preferred implementation, optical character recognition (OCR) is performed on images that include text in the image at step 622. This step is optional, but is a precursor to other functions. Based on one or more characteristics of the images, each image or portion thereof is automatically associated with a type at step 624. For example, a document on the scanner may be determined to be a check based on, for example, certain characteristics such as a check number in one corner region and a signature block near one edge region.
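The check example amounts to a simple rule over layout features. A minimal, purely illustrative classifier for step 624 might look like the following; the feature keys are hypothetical outputs of earlier OCR and layout analysis, not a defined interface.

```python
# Illustrative rule-based classifier for step 624. The feature keys are
# hypothetical outputs of earlier OCR and layout analysis, not a defined API.

def classify_document(features: dict) -> str:
    if features.get("has_corner_number") and features.get("has_signature_block"):
        return "check"                   # e.g. a check number in one corner and
                                         # a signature block near one edge
    if features.get("text_ratio", 0.0) > 0.5:
        return "text_document"           # mostly text: candidate for OCR/PDF
    return "photo"                       # otherwise treat as a picture

# classify_document({"has_corner_number": True, "has_signature_block": True})
# -> "check"
```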
At step 626, a context-sensitive name for each image is automatically generated. Such a name may be derived from text recognized in the image. Each image is automatically saved as one or more computer files at step 628. Subsequently, one or more other functions may be automatically performed at step 630. For example, images from documents on the scanner may be sent over an email protocol to one or more email addresses. As another example, non-text-based images may be additionally saved in a raw or uncompressed format, and text-based documents may be saved as PDF files. As a third example, a first document can be classified as text, recognized and saved as a PDF file with a text layer, while at the same time a second document can be classified as a color photo and saved as a JPG-encoded file that has not been subjected to character recognition. One benefit of this method is that all documents on the scanner are scanned in a single pass of the carriage; a single pass is all that is needed for each document.
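As a sketch of the per-type saving described for step 628, a text-classified page might be exported as a searchable PDF (image plus text layer) and a photo as a JPEG. The choice of pytesseract and Pillow here is an assumption about tooling, not something the description mandates.

```python
# Sketch of per-type saving at step 628, assuming pytesseract (which can emit a
# searchable PDF with a text layer) and Pillow for ordinary JPEG output.

import pytesseract
from PIL import Image

def save_document(img: Image.Image, doc_type: str, basename: str) -> str:
    if doc_type in ("text_document", "check"):
        # PDF with an embedded text layer produced by the OCR engine.
        pdf_bytes = pytesseract.image_to_pdf_or_hocr(img, extension="pdf")
        path = basename + ".pdf"
        with open(path, "wb") as f:
            f.write(pdf_bytes)
    else:
        # Color photo: plain JPG, no character recognition applied.
        path = basename + ".jpg"
        img.convert("RGB").save(path, "JPEG")
    return path
```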
Some differences with respect to scanning by flatbed scanners and MFPs according to known techniques are now described. One difference is the number of passes of the carriage needed to scan documents. Only one pass 710 is needed for scanning multiple documents when a background is provided behind the documents. In comparison, flatbed scanners and MFPs operating according to known techniques, without the use of such a background, must activate the carriage more than once.
Another difference is that automatic cropping of all scanned documents 712 may be performed. This differs from traditional methods of scanning, in which cropping either is not performed at all or must be executed manually on an associated computer with separate software or another programmatic element. According to the invention, cropping executes automatically and is facilitated by one or more features of the background behind the document or documents on the scanner. Automatic cropping may be performed by the scanner. One implementation of automatic cropping includes: identifying one or more features of the background, detecting the edges of each document in the image taken from the bed of the scanner, determining coordinates associated with the edges of each document, and performing a cropping operation, such as by saving each reduced area associated with a respective document as an electronic image or file. Automatic cropping may be performed with software, firmware or hardware instructions implemented in the MFP or scanner, or through software, firmware or hardware instructions implemented on a computer associated with the MFP or scanner.

Another difference is that automatic deskewing of all scanned documents 714 may be performed. This also differs from traditional methods of scanning, in which automatic deskewing either cannot be performed at all or must be executed manually. In the invention, deskewing executes automatically and is facilitated by one or more features of the background behind the document or documents on the scanner. One implementation of automatic deskewing includes: identifying one or more features of the background, detecting the edges of each document in the image taken from the bed of the scanner or determining a direction of elements in each document, determining a skew angle for each document, and performing a deskewing operation, such as by rotating each image of a respective document according to the skew angle. A skew angle for a first document may differ from a skew angle for a second document scanned in the same pass, because the first and second documents may be independently oriented on the scanner bed. Automatic deskewing may be performed with software, firmware or hardware instructions implemented in the MFP or scanner, or through software, firmware or hardware instructions implemented on a computer associated with the MFP or scanner.
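One possible realization of the cropping and deskewing steps enumerated above is sketched below with OpenCV and NumPy, which are assumed tooling rather than anything required by the description: regions that differ from the patterned background are treated as documents, each gets its own skew angle from a minimum-area bounding rectangle, and each is rotated upright and cropped independently.

```python
# Illustrative crop-and-deskew using OpenCV/NumPy (assumed tooling). Every
# sufficiently large region that is not background is treated as one document,
# rotated upright by its own skew angle, and cropped.

import cv2
import numpy as np

def crop_and_deskew(image, background_mask, min_area=10_000):
    """image: full-bed scan; background_mask: uint8 mask, nonzero where the lid
    pattern was detected. Returns a list of deskewed document crops."""
    doc_mask = cv2.bitwise_not(background_mask)          # documents = not background
    contours, _ = cv2.findContours(doc_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature
    crops = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                # ignore specks of noise
            continue
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)     # per-document skew angle
        # minAreaRect angle conventions differ across OpenCV versions; keep the
        # correction in (-45, 45] and swap the sides when it is re-interpreted.
        if angle < -45:
            angle, (w, h) = angle + 90, (h, w)
        elif angle > 45:
            angle, (w, h) = angle - 90, (h, w)
        rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
        upright = cv2.warpAffine(image, rot, (image.shape[1], image.shape[0]))
        x0, y0 = int(cx - w / 2), int(cy - h / 2)
        crops.append(upright[max(y0, 0):int(cy + h / 2),
                             max(x0, 0):int(cx + w / 2)])
    return crops
```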
Another difference is that automatic optical character recognition (OCR) of scanned documents 716 may be performed. This differs from traditional methods of scanning, in which automatic OCR either cannot be performed at all or must be executed manually or in a separate stage subsequent to cropping or another operation. In the invention, OCR may be executed automatically and is facilitated by one or more features of the background behind the document or documents on the scanner. One implementation of automatic OCR includes: identifying one or more features of the background, detecting the edges of each document in the image taken from the bed of the scanner or determining a direction of elements in each document, and performing an OCR operation for each document based on the detected edges or the direction of elements in each document. Automatic OCR may be performed with software, firmware or hardware instructions implemented in the MFP or scanner, or through software, firmware or hardware instructions implemented on a computer associated with the MFP or scanner.
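A minimal sketch of the OCR step follows, assuming pytesseract as the OCR engine (the description does not name one): orientation is first estimated so that text rotated on the bed is still read, then the text is recognized. The rotation sign follows the commonly used recipe for Tesseract's "Rotate:" output and may need adjustment in practice.

```python
# Minimal OCR sketch for step 716, assuming pytesseract as the OCR engine.
# Orientation/script detection (OSD) is used first so text rotated by 90/180/270
# degrees on the bed is still read correctly.

import re
import pytesseract
from PIL import Image

def ocr_document(img: Image.Image) -> str:
    osd = pytesseract.image_to_osd(img)                  # includes "Rotate: <deg>"
    m = re.search(r"Rotate:\s*(\d+)", osd)
    rotation = int(m.group(1)) if m else 0
    if rotation:
        img = img.rotate(-rotation, expand=True)         # PIL rotates counter-clockwise
    return pytesseract.image_to_string(img)
```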
Another difference is that automatic file naming may be performed 718. The name provided for a particular document preferably includes one or more elements derived from the body or content of that document. This differs from traditional methods of scanning, in which names are not generated automatically and must be entered manually. OCR is one source of elements that may be used to generate a name for the respective document or image. In the invention, naming may be executed automatically and is facilitated by one or more features of the background behind the document or documents on the scanner. One implementation of automatic naming includes using text or features derived from performing OCR on the respective documents. Such an implementation includes identifying one or more features of the background, determining a direction of elements in each document, performing an OCR operation for each document, and generating a name for the respective document based on the output of the OCR operation. Automatic naming may be performed with software, firmware or hardware instructions implemented in the MFP or scanner, or through software, firmware or hardware instructions implemented on a computer associated with the MFP or scanner.
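A context-sensitive name can be as simple as the first recognized words of the document plus a date; the word limit, the character filter and the date suffix below are arbitrary illustrative choices, not requirements of the description.

```python
# Sketch of context-sensitive naming (718): build a filename from the first
# recognized words. Word limit, character filter and date suffix are arbitrary
# illustrative choices.

import re
from datetime import date

def name_from_text(ocr_text: str, max_words: int = 6) -> str:
    words = re.findall(r"[A-Za-z0-9]+", ocr_text)[:max_words]
    stem = "_".join(words) if words else "scan"
    return f"{stem}_{date.today().isoformat()}"

# name_from_text("INVOICE No. 4711 ACME Corp\n...")
# -> e.g. "INVOICE_No_4711_ACME_Corp_<today's date>"
```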
Gamma correction, deskewing and image cleaning are optional operations. These operations can be performed for each document individually. In traditional cases, these operations usually must be executed manually and are not automated. In one case, when an MFP is used, these operations can be executed automatically, but only for all documents taken together in a single image. In the described invention, gamma correction, deskewing and image cleaning can execute automatically and can be performed for each document individually. These operations may be performed with software, firmware or hardware instructions implemented in the MFP or scanner, or through software, firmware or hardware instructions implemented on a computer associated with the MFP or scanner.
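For completeness, a hedged sketch of per-document gamma correction and image cleaning, assuming 8-bit images, NumPy and OpenCV; none of these choices is mandated by the description.

```python
# Hedged sketch of per-document gamma correction and cleaning, assuming 8-bit
# images, NumPy and OpenCV; neither library is mandated by the description.

import cv2
import numpy as np

def gamma_correct(image, gamma):
    # Standard lookup-table gamma correction for uint8 images.
    table = np.array([255.0 * (i / 255.0) ** (1.0 / gamma) for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(image, table)

def clean(image):
    # Non-local-means denoising with typical (untuned) parameters:
    # h=10, templateWindowSize=7, searchWindowSize=21.
    return cv2.fastNlMeansDenoising(image, None, 10, 7, 21)
```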
OCR, document classification and automated file name generation are optional operations. These operations can be performed for each document individually. In traditional, known methodologies, these operations must be executed manually. In the described invention, these operations are capable of executing automatically. A scanner or MFP may perform any of these operations with software, firmware or hardware instructions implemented in the MFP or scanner and may be triggered with a functional button selected at the start of an implementation of the invention.
Referring now to
The hardware 800 operates under the control of an operating system 814, and executes various computer software applications, components, programs, objects, modules, etc. indicated collectively by reference numeral 816 to perform the techniques described above.
In general, the routines executed to implement the embodiments of the invention may be implemented as part of an operating system or as a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions, set at various times in various memory and storage devices in a computer, that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects of the invention. Moreover, while the invention has been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of machine- or computer-readable media used to actually effect the distribution. Examples of computer-readable media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, and optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others.
Although the present invention has been described with reference to specific exemplary embodiments, it should be evident that various modifications and changes can be made to these embodiments without departing from the broader spirit of the invention. Further, one skilled in the art can practice the invention without the specific details provided.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than in a restrictive sense.
This application is a divisional of U.S. patent application Ser. No. 13/712,950, filed Dec. 12, 2012, which is a continuation-in-part of U.S. patent application Ser. No. 13/662,044, filed Oct. 26, 2012. This application also claims the benefit of priority under 35 USC 119 to Russian Patent Application No. 2014137823, filed Sep. 18, 2014; the disclosures of the priority applications are incorporated herein by reference. The United States Patent Office (USPTO) has published a notice effectively stating that the USPTO's computer programs require that patent applicants both reference a serial number and indicate whether an application is a continuation or continuation-in-part. See Stephen G. Kunin, Benefit of Prior-Filed Application, USPTO Official Gazette 18 Mar. 2003. The Applicant has provided above a specific reference to the application(s) from which priority is being claimed as recited by statute. Applicant understands that the statute is unambiguous in its specific reference language and does not require either a serial number or any characterization, such as “continuation” or “continuation-in-part,” for claiming priority to U.S. patent applications. Notwithstanding the foregoing, Applicant understands that the USPTO's computer programs have certain data entry requirements, and hence Applicant is designating the present application as a continuation-in-part of its parent applications as set forth above, but points out that the designations are not to be construed as commentary or admission as to whether or not the present application contains any new matter in addition to the matter of its parent application(s). All subject matter of the Related Applications and of any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications is incorporated herein by reference to the extent such subject matter is not inconsistent herewith.