The instant application claims priority to Italian Patent Application No. TO2011A000261, filed Mar. 25, 2011, which application is incorporated herein by reference in its entirety.
An embodiment of the disclosure relates to a scanner apparatus. Certain embodiments may relate to a scanner apparatus integrated with a printer.
A key component of printers and other conventional image-capture devices is the Contact Image Sensor (CIS) scan bar, which transforms an image on paper into an electronic image. CIS scan bars are widely used in facsimile (fax) machines, optical scanners, and portable applications, e.g., portable scanners.
Over the years, the cost of CMOS imaging-sensor arrays has decreased and their performance has increased: these sensors may thus be used in place of conventional CIS scan bars, giving rise to cheaper solutions without any adverse impact on scanner size.
Different solutions have been proposed in order to use CMOS/CCD imaging-sensor arrays to scan a document.
For instance, DE-A-102006010776, which is incorporated by reference, discloses an arrangement including four fixed CCD-sensors, which are located under a glass for supporting the documents to be scanned and which operate on the basis of a pre-calibrated evaluation algorithm to form an entire image.
Various documents disclose different kinds of image-sensor carriages for mounting image reading means in combination with a drive unit (driving motor) to move the carriage.
For instance, GB-A-2336734, which is incorporated by reference, discloses an image sensor arranged parallel to the short sides of a rectangular lower frame to capture the image of a scanned object placed on a transparent plate mounted on a rectangular upper frame. A rod-like guiding member is provided orthogonal to the longitudinal holder to guide the movement of the image sensor.
In the solution disclosed in JP-A-2005331533, which is incorporated by reference, an image scanner is equipped with a carriage on which an image sensor is mounted. A driving motor moves the carriage in a sub-scanning direction via a toothed timing belt.
US-A-2006/098252, which is incorporated by reference, discloses a drive device for a scanner which includes an elongate guiding unit mounted in a base and disposed under an image sensor carriage. A roller unit is mounted on a bottom side of the image sensor carriage and a driving unit drives the image sensor carriage in a second direction with respect to the base.
Documents such as US-A-2008/174836 and JP-A-20060245172, which are incorporated by reference, disclose a scanner device adapted to scan an object and generate image data; the scanner device includes an image sensor and a movement unit which moves in a sub-scan direction a carriage carrying the image sensor.
EP-A-0 886 429, which is incorporated by reference, discloses an image input/output apparatus capable of printing and reading images and a cartridge carriage for reading an original with a simple control: the system uses a camera module which replaces the ink cartridge, sharing the same circuitry, which may prove critical both for maintaining the printing speed and as regards the manual replacement of the cartridges.
Document CN-A-201286132, which is incorporated by reference, discloses a planar-image-sensor high-speed scanner with a reading function, and a copying machine containing: an image part at the bottom of a workbench, which includes n sets of image-detection parts and a set of image-reading parts; a light-source part above the image part; and a reflection part above the light-source part. A main drawback of this solution may lie in the fact that too many cameras may be needed to cover the entire document area.
Document US-A-2009/0021798, which is incorporated by reference, discloses a scanner operating system with a single camera module. Such an arrangement is implemented in an “All-in-One” (AiO) product marketed by Lexmark® under the commercial designation Genesis, which uses a single fisheye lens. A main drawback of this arrangement lies in its negative impact on system height.
In brief, the idea of using one or more sensors (fixed or in motion) to scan an image (or part of an image) has been largely adopted. If the image sensor is intended to be moved in operation, these arrangements almost inevitably involve the use of an additional carriage for the sensor.
An embodiment obviates the intrinsic drawbacks of the arrangements considered in the foregoing.
An embodiment is achieved by an apparatus, a corresponding method, and a computer program product, loadable in the memory of at least one computer and including software code portions capable of implementing the steps of the method when the product is run on at least one computer.
Certain embodiments may exploit the ink cartridge carriage of a printer of the “All in One” (AiO) type to move the scanner module, which may include a set of aligned cameras, without the need for a separate sensor carriage.
Certain embodiments make it possible to compose the final document by fusing (“stitching”) together various acquired portions of the document.
Various embodiments will now be described, by way of example only, with reference to the annexed figures.
Illustrated in the following description are various specific details aimed at an in-depth understanding of the embodiments. The embodiments may be implemented without one or more of these specific details, or with other methods, components, materials, etc. In other cases, known structures, materials, or operations are not shown or described in detail to avoid obscuring various aspects of the embodiments. Reference to “an embodiment” in this description indicates that a particular configuration, structure, or characteristic described in relation to the embodiment is included in at least one embodiment. Hence, expressions such as “in an embodiment”, possibly present in various parts of this description, do not necessarily refer to the same embodiment. Furthermore, particular configurations, structures, or characteristics may be combined in any suitable manner in one or more embodiments. The references used herein are provided merely for the convenience of the reader and thus do not define the scope of protection or the range of the embodiments.
As used herein, the designation “scanner apparatus” applies to any type of apparatus adapted to provide a scanning function of, e.g., printed matter such as text and figures, possibly in conjunction with other functions such as, e.g., printing, copying, transmitting/receiving, or processing. Save for what is disclosed in detail herein, such scanning apparatus is conventional in the art, thus making a more detailed description unnecessary.
In the schematic representation of
Scanning is performed by a sensor unit 16 (of any known type) to which is imparted a scanning movement (see the double arrow S in
Reference 20 denotes a flexible cable or “flex” which carries signals between the moving sensor/carriage unit 16 and the stationary portion of apparatus 10.
As already indicated, this general structure is conventional in the art, thus making it unnecessary to provide a more detailed description herein.
The scanning movement S enables the scanning window WA of the sensor 16 to successively cover (i.e. “frame”) various portions of the object D being scanned (see e.g. 1, 3, 5, 7 or 2, 4, 6, 8 in
In certain embodiments, the sensor unit 16, such as e.g. one or more VGA (Video Graphics Array) modules, may be mounted directly on the ink cartridge carriage as provided in an apparatus 10 configured to act also as a printer (e.g. in photocopiers, facsimile apparatus, and the like).
In certain embodiments, the carriage 18 carrying the sensor unit 16 is the same carriage carrying a printer unit (22) including one or more ink reservoirs.
In certain embodiments, the exemplary integrated scanner apparatus considered herein may thus include a support surface 14 for objects to be scanned (e.g. a document D) as well as a scanner unit 16 to perform a scanning movement S relative to the support surface 14 to capture images of portions of objects D to be scanned. A printer unit 22 is carried by a carriage 18 mobile with respect to the support surface 14; the scanner unit 16 is thus carried by the same carriage 18 carrying the printer unit 22 and is thus imparted the scanning movement S by the carriage 18.
In certain embodiments, the printer unit 22 carried by the carriage 18 includes at least one ink reservoir.
In certain embodiments, a number of “shots” (i.e., partial images) of the material being scanned, such as the document D, may be taken as this common carriage 18 is moved (see arrow S). These shots may then be fused or “stitched” together (for example, via software) to produce a final complete image CI. The resolution may be determined by the number of shots taken and the distance from the sensor unit 16 to the document D.
For instance, in certain embodiments, the sensor unit 16 may include two modules 16A, 16B, so that (two) sets of different shots (namely 1, 3, 5, 7 for the first module and 2, 4, 6, 8 for the second module) will be taken during a single stroke of the carriage 18 and fused (i.e. combined or “stitched”) to obtain a final image CI.
Certain embodiments may use a single module producing all of the partial images as follows: images 1, 3, 5, 7 are captured while the carriage is moving in one direction, followed by a translation of the module in the orthogonal direction (which can be achieved purely by mechanical means), followed by a carriage movement in the opposite direction during which partial images 8, 6, 4, 2 are captured, in that order. This approach trades cost (a single module) for time (partial images are captured serially instead of two at a time, roughly doubling the total capture time).
In certain embodiments, the exemplary integrated scanner apparatus considered herein may thus include at least one scanner module, each module having a capture window WA (
Similarly, in certain embodiments, the exemplary integrated scanner apparatus considered herein may include a plurality of scanner modules (e.g. two scanner modules 16A, 16B); during the scanning movement S imparted by the carriage 18, each sensor module 16A, 16B will produce a respective set of partial images (that is images 1, 3, 5, 7 for the module 16A and images 2, 4, 6, 8 for the module 16B) of the objects D being scanned. A processing module 26 may be provided to fuse the respective sets of partial images (1, 3, 5, 7 with 2, 4, 6, 8, respectively) into a complete image (CI).
In certain embodiments, as schematically represented in
In certain embodiments, absolute orientation and straightening may be applied, as better detailed in the following.
In certain embodiments, two modules or cameras 16A, 16B with an HFoV (Horizontal Field of View) of 60 degrees, located, e.g., 96 mm from the platen/document plane, may be able to cover the smallest dimension of an A4 or a US Letter (8.5×11 inch) document.
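By way of a worked check of these figures: a module with a 60-degree HFoV placed at d = 96 mm from the document plane frames a width of

w = 2·d·tan(HFoV/2) = 2·96 mm·tan(30°) ≈ 111 mm,

so that two aligned modules span roughly 222 mm (somewhat less once the overlap needed for stitching is allowed for), which indeed exceeds the 210 mm short side of an A4 document and the 215.9 mm (8.5 inch) short side of a US Letter document.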
In certain embodiments, a quick live preview may be performed as schematically exemplified in
In certain embodiments, the exemplary integrated scanner apparatus considered herein may thus provide for the scanner unit 16 being selectively tiltable to a preview scanning position wherein the scanner unit 16 images a document to be scanned from a stationary position.
As regards signal generation/processing, certain embodiments may adopt the architecture exemplified in the block diagram of
Certain embodiments may admit at least two main operational modes, namely an open loop mode and a closed loop mode.
In certain embodiments, in the open loop mode, as schematically represented in
In the open loop case, the scanner module 30 and the printing module 32 may be considered completely independent of each other, i.e. the scanner unit 16 will be operated independently of any feedback on the current position of the printing module 32 as provided by the motion sensor/encoder 34 associated with the carriage 18.
In certain embodiments, in the closed loop mode, as schematically represented in
In certain embodiments, in the closed loop mode, the scanner module 30 may exploit the information provided by the encoder 34 through the printer ASIC 32. In this mode, the real position may be used by the stitching module in the processing pipeline (26 in
In certain embodiments, the carriage 18 may thus have associated therewith a motion sensor 34 providing a feedback signal representative of the position of the carriage 18; the scanner unit 16 is then operable as a function of the feedback signal.
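By way of non-limiting illustration, the following sketch (in Python) shows how such a feedback signal might be used in the closed loop mode: the encoder travel between two shots is converted into an expected image shift which may seed the stitching search. The constants counts_per_mm and scan_dpi are purely exemplary assumptions, not values from an actual implementation.

```python
# Exemplary sketch only: the encoder resolution (counts_per_mm) and the
# effective scanning resolution (scan_dpi) are assumed values.
MM_PER_INCH = 25.4

def expected_offset_px(enc_start: int, enc_stop: int,
                       counts_per_mm: float = 6.0,
                       scan_dpi: float = 300.0) -> float:
    """Expected horizontal image shift (in pixels) for a given encoder travel."""
    travel_mm = (enc_stop - enc_start) / counts_per_mm
    return travel_mm * scan_dpi / MM_PER_INCH

# e.g. 300 counts between two shots -> 50 mm of travel -> about 590 px at 300 dpi
print(expected_offset_px(0, 300))
```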
In certain embodiments, the processing pipeline 26 may have the structure represented in
In
In certain embodiments as exemplified in
In certain embodiments, the pipeline 26 may operate a first time on re-sized versions of the images produced in a sub-step 102 to obtain a preview, and a second time on the full-resolution versions of the images.
In certain embodiments, to these (partial) images the following blocks/processing steps may be applied:
Stitching (i.e. fusing together) the images (as derived from the memory 24) is performed in a block/step 110 using the parameters estimated in the previous steps, while also possibly applying seamless blending to avoid visible seams between images.
The complete image thus obtained may then be subjected to the following blocks/processing steps:
Those of skill in the art will otherwise appreciate that, while representative of the best mode, the embodiment of the pipeline depicted in
In certain embodiments, the pipeline may be supplemented with further, additional steps. Also, in certain embodiments, one or more of the steps considered herein may be absent or performed differently: e.g., by way of non-limiting example, the step 114 may be performed off-line whenever this appears preferable (for instance, in the case of errors in reconstruction).
As schematically represented in
In various embodiments, two kinds of tools may be used, namely multiplane-camera calibration and lens-distortion-model estimation, respectively.
In multiplane-camera calibration, all the intrinsic parameters (focal length, principal point, and distortion parameters) may be calculated using several images (usually 15-20) of a checkerboard, pasted on a rigid planar surface, taken in different positions. Intrinsic parameters may be estimated by using the Bouguet calibration Matlab toolbox (see http://www.vision.caltech.edu/bouguetj/calib_doc/index.html), mainly based on the work of Z. Zhang: “Flexible Camera Calibration by Viewing a Plane from Unknown Orientations,” Seventh International Conference on Computer Vision (ICCV), Volume 1, pp. 666-673, 1999, which is incorporated by reference. Other tools can be used in the same way, such as those disclosed, e.g., in http://www.ics.forth.gr/~xmpalt/research/camcalib_wiz/index.html and http://matt.loper.org/CamChecker/CamChecker_docs/html/index.html, both based on the above-mentioned work of Z. Zhang, and both incorporated by reference.
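The same Zhang-style multiplane calibration may be reproduced, by way of non-limiting example, with the OpenCV library (a Python sketch follows; the board size and the file pattern are exemplary assumptions, and this is not the Bouguet toolbox itself):

```python
# Zhang-style intrinsic calibration from several checkerboard shots.
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners of the checkerboard (columns, rows); assumed

# 3D coordinates of the board corners in the board plane (Z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):  # the usual 15-20 shots of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, the camera matrix (focal length and
# principal point) and the radial/tangential distortion coefficients.
err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", err)
```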
CAMCAL (see http://people.scs.carleton.ca/~c_shu/Research/Projects/CAMcal/, which is incorporated by reference) may be another tool; it uses a different approach, as disclosed, e.g., in A. Brunton, et al.: “Automatic Grid Finding in Calibration Patterns Using Delaunay Triangulation”, Technical Report NRC-46497/ERB-1104, 2003, which is incorporated by reference, together with an ad-hoc test pattern.
If lens-distortion-model estimation is used, only the distortion parameters may be calculated, using a single pattern image and usually tracing lines on the pattern.
To estimate the parameters, standard methodologies can be exploited, such as the CMLA tool (see e.g. http://mw.cmla.ens-cachan.fr/megawave/demo/lens_distortion/, which is incorporated by reference). In certain embodiments, a checkerboard pattern may be used, captured exactly in front of the camera and without rotation, to simplify the task of tracing horizontal and vertical lines. The tool knows where these points are actually located (by deriving this information from the grid of the pattern image) and where these points should be (thanks to the lines manually specified by the user), and simply solves a system to determine the distortion parameters.
In certain embodiments, a color-correction procedure may be optionally applied (possibly after the camera—i.e. sensor module—calibration) to correct shading discontinuities. In certain embodiments, a Linear Histogram Transform (LHT) may be adopted, forcing selected areas to have the same mean value and variance.
By way of example, the following equations (in their standard form) may be used to gather statistics on a selected area of N pixels p_i:

μ = (1/N)·Σ p_i    (1)

σ² = (1/N)·Σ (p_i - μ)²    (2)
and correction may be performed as follows, forcing the selected area to a reference mean μ_ref and variance σ_ref²:

p' = α·p + β    (3)

α = σ_ref/σ, β = μ_ref - α·μ    (4)
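A minimal sketch of this correction, in Python (assuming 8-bit images and per-channel application, both assumptions made here for the sake of example), may read:

```python
# Linear Histogram Transform: force the statistics of `area` (a patch of
# `image`) to match those of `ref_area` from a reference shot.
import numpy as np

def lht_correct(image: np.ndarray, area: np.ndarray,
                ref_area: np.ndarray) -> np.ndarray:
    mu, sigma = area.mean(), area.std()
    mu_ref, sigma_ref = ref_area.mean(), ref_area.std()
    alpha = sigma_ref / max(sigma, 1e-6)   # gain (eq. 4)
    beta = mu_ref - alpha * mu             # offset (eq. 4)
    out = alpha * image.astype(np.float32) + beta   # eq. 3
    return np.clip(out, 0, 255).astype(np.uint8)
```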
In certain embodiments, keystone correction may be another optional step, possibly applied after color correction.
In certain embodiments, before applying keystone correction, in an offline tuning phase, a rotation step may be performed to align the image on axis. To do this, the Hough transform may be applied on a chessboard-patch gradient image (as obtained, e.g., by a simple horizontal Sobel filtering).
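A possible (merely exemplary) implementation of this rotation estimate with OpenCV, assuming small rotation angles, is the following:

```python
# Horizontal Sobel gradient of a chessboard patch, then a Hough transform;
# the deviation of the strongest (near-vertical) lines from the vertical
# axis gives the rotation to be compensated. Thresholds are assumed values.
import cv2
import numpy as np

def estimate_rotation_deg(patch_gray: np.ndarray) -> float:
    grad = cv2.Sobel(patch_gray, cv2.CV_8U, 1, 0, ksize=3)
    edges = cv2.threshold(grad, 64, 255, cv2.THRESH_BINARY)[1]
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
    if lines is None:
        return 0.0
    thetas = lines[:10, 0, 1]                # strongest lines first
    thetas = np.where(thetas > np.pi / 2, thetas - np.pi, thetas)
    return float(np.degrees(np.median(thetas)))
```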
As regards keypoint detection and matching (block 104 in
In various embodiments, the first step/phase may extract the characteristic features of each image and match these features for each pair of images to obtain the correspondence points, while the second step may filter the obtained points so that they are consistent with the chosen model (rigid, affine, homographic, and so on).
As already indicated, in certain embodiments the features may be extracted using the SIFT or SURF transforms as disclosed, e.g., in D. Lowe: “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision 60 (2): 91-110, 2004, and H. Bay, et al.: “SURF: Speeded Up Robust Features”, Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008, which are incorporated by reference, and the matches may be made accordingly.
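By way of non-limiting example, the extraction/matching phase may be sketched with the OpenCV implementation of SIFT, applying Lowe's ratio test to discard ambiguous matches:

```python
# Extract SIFT keypoints in two partial images and match their descriptors;
# a match is kept only if clearly better than its runner-up (ratio test).
import cv2

def match_keypoints(img1, img2, ratio: float = 0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```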
In certain embodiments, a high number of outliers may be noticed among the final matches obtained via SIFT.
In certain embodiments, in order to remove outliers, the final matches obtained in the previous step may be filtered through RANSAC (RANdom SAmple Consensus).
In certain embodiments, this technique may involve the following steps (the canonical RANSAC iteration): randomly selecting the minimal set of matches needed to instantiate the chosen motion model; estimating the model parameters from that set; counting the matches consistent with the model within a fixed distance threshold (the consensus set); repeating for a fixed number of iterations; and retaining the model with the largest consensus set, whose members are kept as inliers. A minimal sketch of this iteration is given below.
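The sketch uses an affine model by way of example; OpenCV's cv2.findHomography with the cv2.RANSAC flag provides equivalent filtering off the shelf.

```python
# Canonical RANSAC over the matched points: repeatedly fit an affine model
# to a random minimal sample and keep the model with the largest consensus.
import random
import numpy as np
import cv2

def ransac_affine(pts1, pts2, iters: int = 500, thresh: float = 3.0):
    pts1, pts2 = np.float32(pts1), np.float32(pts2)
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        sample = random.sample(range(len(pts1)), 3)   # minimal set: 3 pairs
        M = cv2.getAffineTransform(pts1[sample], pts2[sample])
        proj = pts1 @ M[:, :2].T + M[:, 2]
        inliers = np.linalg.norm(proj - pts2, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers   # boolean mask of the consensus set
```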
Certain embodiments may include global registration (block 108 of
In certain embodiments, in global registration of a set of images, all overlapping pairs should be considered. In the example of
The registration may take into account simultaneous warping effects. One image, for example A, may be used as a “world” reference (i.e. all images may be registered with respect to A).
In an example, the system constraints will be:
H_AA = I
H_AA·x_1 = H_AB·x_2
H_AA·x_3 = H_AC·x_4
H_AA·x_5 = H_AD·x_6
H_AB·x_7 = H_AC·x_8
H_AB·x_9 = H_AD·x_10
H_AC·x_11 = H_AD·x_12    (5)
where H_ij denotes the motion matrix that registers image j on image i, and the x_k are corresponding points in the overlap areas.
The corresponding motion models may be rigid (6), affine (7), and homographic (8), respectively, i.e., in their standard parametric forms:

H_rigid = [cos θ, -sin θ, t_x; sin θ, cos θ, t_y; 0, 0, 1]    (6)

H_affine = [a, b, t_x; c, d, t_y; 0, 0, 1]    (7)

H_homographic = [h_11, h_12, h_13; h_21, h_22, h_23; h_31, h_32, 1]    (8)
In the case of rigid and affine motion, the constraints may lead to an over-determined system of linear equations of the kind Ax=B, which can easily be solved with least-squares methods.
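For a single pair, the affine case may be sketched as follows (exemplary Python; in the global case, the rows contributed by all the overlapping pairs of (5) are simply stacked into one larger system before solving):

```python
# Each correspondence (x, y) -> (u, v) contributes two rows of the
# over-determined linear system A·p = b in the six affine parameters
# p = (a, b, tx, c, d, ty), solved here by least squares.
import numpy as np

def fit_affine(pts_src, pts_dst):
    A, b = [], []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    a, bb, tx, c, d, ty = p   # bb: the second linear coefficient
    return np.array([[a, bb, tx], [c, d, ty], [0, 0, 1]])
```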
In certain embodiments, image stitching (step 110 of
An example of this kind of function is shown in
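One plausible instance of such a weighting function is a linear ramp across the overlap band, as in the following exemplary sketch (assuming two horizontally adjacent color shots sharing a known number of overlap columns):

```python
# Feather blending: mix the shared columns of two adjacent shots with
# complementary linear ramp weights so that no hard seam is visible.
import numpy as np

def feather_blend(left: np.ndarray, right: np.ndarray,
                  overlap: int) -> np.ndarray:
    w = np.linspace(1.0, 0.0, overlap)[None, :, None]   # ramp weight
    band = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]
    return np.concatenate(
        [left[:, :-overlap], band.astype(left.dtype), right[:, overlap:]],
        axis=1)
```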
In certain embodiments, global straightening (block 112 of
In certain embodiments, in order to execute this step a test pattern image is used, which may be produced by composing blank documents and/or documents whose featureless (lacking-interest) points are placed under the hidden parts of the system.
In certain embodiments, this may also help in the point-matching step. By using “WordArt-like” letters and numbers on the borders, the test pattern image thus created may contain black squares. These squares may be matched with a known pattern using a SAD (Sum of Absolute Differences) computation. Once the corners (at least four) are found, the correct rectangle can be estimated and the correction (using a homographic model) performed. The homographic parameters may be estimated by means of a linear system between the matched and ideal corners.
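A merely exemplary sketch of this step (with cv2.TM_SQDIFF standing in for the SAD score, since OpenCV offers no plain-SAD matching mode; the search regions and the square template are assumed inputs) may read:

```python
# Locate each black reference square by template matching within a given
# region, then warp the image so the found corners land on the ideal ones.
import cv2
import numpy as np

def find_square(gray, template, region):
    x, y, w, h = region
    res = cv2.matchTemplate(gray[y:y + h, x:x + w], template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(res)   # lowest difference = best match
    return (x + min_loc[0], y + min_loc[1])

def straighten(gray, template, regions, ideal_corners):
    found = np.float32([find_square(gray, template, r) for r in regions])
    H, _ = cv2.findHomography(found, np.float32(ideal_corners))
    h, w = gray.shape[:2]
    return cv2.warpPerspective(gray, H, (w, h))
```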
In certain embodiments, both the input image and the test image may be subjected to sub-sampling (for example, by a factor of two) in order to speed up processing.
Certain embodiments may give rise to a low-cost scanning system using one or more sensors in movement to scan the image, without the need for a separate sensor carriage.
In certain embodiments a processing pipeline may be used which can be effectively implemented in software form.
Certain embodiments exhibit at least one of the following advantages:
Without prejudice to the underlying principles of the disclosure, the details and embodiments may vary, even significantly, with respect to what has been described herein by way of non-limiting example only, without departing from the scope of the disclosure.
From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the disclosure. Furthermore, where an alternative is disclosed for a particular embodiment, this alternative may also apply to other embodiments even if not specifically stated.
Number | Date | Country | Kind |
---|---|---|---|
TO2011A0261 | Mar 2011 | IT | national |
Number | Name | Date | Kind |
---|---|---|---|
4749296 | Bohmer | Jun 1988 | A |
5515181 | Iyoda et al. | May 1996 | A |
5812172 | Yamada | Sep 1998 | A |
5933248 | Hirata | Aug 1999 | A |
5987194 | Kaneko | Nov 1999 | A |
6057936 | Obara | May 2000 | A |
6264384 | Lee | Jul 2001 | B1 |
6459819 | Nakao | Oct 2002 | B1 |
7433090 | Murray | Oct 2008 | B2 |
8199370 | Irwin et al. | Jun 2012 | B2 |
20020186425 | Dufaux et al. | Dec 2002 | A1 |
20030063229 | Takahashi et al. | Apr 2003 | A1 |
20030063329 | Kaneko et al. | Apr 2003 | A1 |
20060098252 | Duan | May 2006 | A1 |
20080079957 | Roh | Apr 2008 | A1 |
20080174836 | Yoshihisa | Jul 2008 | A1 |
20090021798 | Abahri | Jan 2009 | A1 |
20100039682 | Peot et al. | Feb 2010 | A1 |
20100284046 | Fah et al. | Nov 2010 | A1 |
Number | Date | Country |
---|---|---|
201286132 | Aug 2009 | CN |
102006010776 | Sep 2007 | DE |
0497440 | Aug 1992 | EP |
0837594 | Apr 1998 | EP |
0886429 | Dec 1998 | EP |
2385112 | Oct 1978 | FR |
2336734 | Oct 1999 | GB |
2005331533 | Dec 2005 | JP |
20060245172 | Sep 2006 | JP |
2008065217 | Mar 2008 | JP |
2009-60668 | Mar 2009 | JP |
Entry |
---|
Search Report for Italian Application No. TO20110261, Ministero dello Sviluppo Economico, Nov. 3, 2011, pp. 2. |
Camera Calibration Toolbox for Matlab, http://www.vision.caltech.edu/bouguetj/calib_doc/index.html, pp. 4. |
Haris Baltzakis, Camera Calibration Tool (camcalib_wiz), http://www.ics.forth.gr/~xmpalt/research/camcalib_wiz/index.html, pp. 1. |
CamChecker, A Camera Calibration Tool, http://matt.loper.org/CamChecker/CamChecker_docs/html/index.html, p. 1. |
Zhengyou Zhang, Flexible Camera Calibration by Viewing a Plane From Unknown Orientations, Seventh International Conference on Computer Vision (ICCV), vol. 1, pp. 666-673, 1999. |
CAMcal—A Camera Calibration Program, http://people.scs.carleton.ca/~c_shu/Research/Projects/CAMcal/, p. 1. |
Chang Shu, Alan Brunton, Mark Fiala, Automatic Grid Finding in Calibration Patterns Using Delaunay Triangulation, Technical Report NRC-46497/ERB-1104, printed Aug. 2003, pp. 17. |
IPOL demo—Algebraic Lens Distortion Model Estimation, Algebraic Lens Distortion Model Estimation, http://mw.cmla.ens-cachan.fr/megawave/demo/lens_distortion/, p. 1. |
David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, Computer Science Department, University of British Columbia, Vancouver, B.C., Canada, Jan. 5, 2004, The International Journal of Computer Vision, 2004, pp. 28. |
Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, SURF: Speeded Up Robust Features, ETH Zurich Katholieke Universiteit Leuven, Computer Vision and Image Understanding (CVIU), vol. 110, No. 3, pp. 346-359, 2008. |
Zhang, “A Flexible New Technique for Camera Calibration,” Technical Report, MSR-TR-98-71, IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11):1330-1334, 2000, 22 pages. |
Number | Date | Country | |
---|---|---|---|
20120281244 A1 | Nov 2012 | US |