This patent application is based on and claims priority pursuant to 35 U.S.C. § 119(a) to Japanese Patent Application No. 2020-038264, filed on Mar. 5, 2020, in the Japan Patent Office, the entire disclosure of which is hereby incorporated by reference herein.
Embodiments of the present disclosure relate to a reading device, an image processing apparatus, a method of detecting a feature amount, and a non-transitory recording medium storing instructions for executing a method of detecting a feature amount.
Conventionally, an image processing technique for detecting an edge between a document and a background from an image and correcting an inclination and a position of the document based on the detected edge between the document and the background is known.
There is a known technique in which an infrared light low reflection portion is provided as a background, and an edge between a document and the background is detected based on an acquired infrared image so that the edge between the document and the background is extracted.
An exemplary embodiment of the present disclosure includes a reading device including a light source, an image sensor, a background board, and image processing circuitry. The light source irradiates a subject with visible light and invisible light. The image sensor receives the visible light and the invisible light, each of which is reflected from the subject, to capture a visible image and an invisible image. The background board is provided in an image capturing range of the image sensor and serves as a background portion. The image processing circuitry detects a feature amount of the subject and the background portion from at least one of the visible image and the invisible image.
A more complete appreciation of the disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings, wherein:
The accompanying drawings are intended to depict example embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted.
The terminology used herein is for describing particular embodiments only and is not intended to be limiting of the present disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes” and/or “including”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. In describing preferred embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that have the same function, operate in a similar manner, and achieve a similar result.
Hereinafter, embodiments of a reading device, an image processing apparatus, and a method of detecting a feature amount are described in detail with reference to the attached drawings.
The image forming apparatus 100 includes an image reading device 101 serving as a reading device, an automatic document feeder (ADF) 102, and an image forming device 103 provided below the image reading device 101. The image forming device 103 is configured to form an image. An internal configuration of the image forming device 103 is described below.
The ADF 102 is a document supporter that positions, at a reading position, a document including an image to be read. The ADF 102 automatically feeds the document placed on a placement table to the reading position. The image reading device 101 reads the document fed by the ADF 102 at a predetermined reading position. The image reading device 101 has, on a top surface, a contact glass that is the document supporter, on which a document is placed, and reads the document on the contact glass that is at the reading position. Specifically, the image reading device 101 is a scanner including a light source, an optical system, and a solid-state imaging element such as a complementary metal oxide semiconductor (CMOS) image sensor inside, and reads, by the solid-state imaging element through the optical system, reflected light of the document, which is illuminated, or irradiated, by the light source.
The image forming device 103 includes a manual feed roller pair 104 for manually feeding a recording sheet, and a recording sheet supply unit 107 for supplying the recording sheet. The recording sheet supply unit 107 includes a mechanism for feeding out the recording sheet from multi-stage recording paper feed cassettes 107a. The recording sheet thus supplied is sent to a secondary transfer belt 112 via a registration roller pair 108.
A secondary transfer device 114 transfers a toner image from an intermediate transfer belt 113 onto the recording sheet conveyed on the secondary transfer belt 112.
The image forming device 103 also includes an optical writing device 109, an image forming unit (for yellow (Y), magenta (M), cyan (C), and black (K)) 105 employing a tandem system, the intermediate transfer belt 113, and the secondary transfer belt 112. Specifically, in an image forming process, the image forming unit 105 renders an image written by the optical writing device 109 as a toner image and forms the toner image on the intermediate transfer belt 113.
Specifically, the image forming unit (for Y, M, C, and K) 105 includes four photoconductor drums (Y, M, C, and K) in a rotatable manner, and image forming elements 106 each including a charging roller, a developing device, a primary transfer roller, a cleaner unit, and a static eliminator around the respective photoconductor drums. The image forming element 106 functions on each photoconductor drum, and the image on the photoconductor drum is transferred onto the intermediate transfer belt 113 by each primary transfer roller.
The intermediate transfer belt 113 is arranged to be stretched by a drive roller and a driven roller at a nip between each photoconductor drum and each primary transfer roller. The toner image primarily transferred onto the intermediate transfer belt 113 is secondarily transferred onto the recording sheet on the secondary transfer belt 112 by the secondary transfer device 114 as the intermediate transfer belt 113 runs. The recording sheet is conveyed to a fixing device 110 as the secondary transfer belt 112 runs, and the toner image is fixed as a color image on the recording sheet. Finally, the recording sheet is discharged onto an output tray disposed outside a housing of the image forming apparatus 100. Note that, in a case of duplex printing, a reverse assembly 111 reverses the front and back sides of the recording sheet and sends out the reversed recording sheet onto the secondary transfer belt 112.
The image forming device 103 is not limited to the one that forms an image by an electrophotographic method as described above. The image forming device 103 may be one that forms an image by an inkjet method.
Next, a description is given of the image reading device 101.
The light source 2 is configured as a light source for visible/invisible light. In the description, the invisible light is light having a wavelength equal to or less than 380 nm or equal to or more than 750 nm. That is, the light source 2 is an illumination unit that irradiates a subject and a background portion 13 with the visible light and the invisible light (for example, near infrared (NIR) light).
Further, the image reading device 101 is provided with the background portion 13 that is a reference white board, on the upper surface. Hereinafter, the background portion 13 may be referred to as a background board 13. More specifically, the background portion 13 is provided on an opposite side to the light source 2, which is the illumination unit, with respect to the subject, in an image capturing range of the imaging unit 22.
In a reading operation, the image reading device 101 irradiates light upward from the light source 2 while moving the first carriage 6 and the second carriage 7 from a standby position (home position) in a sub-scanning direction (direction A). The first carriage 6 and the second carriage 7 cause reflected light from a document 12, which is the subject, to be imaged on the imaging unit 22 via the lens unit 8.
Further, when the power is turned on, the image reading device 101 reads reflected light from the reference white board (background board) 13 to set a reference. That is, the image reading device 101 moves the first carriage 6 directly below the reference white board (background board) 13, turns on the light source 2, and causes the reflected light from the reference white board 13 to be imaged on the imaging unit 22, thereby performing a gain adjustment.
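As a rough illustration of such a gain adjustment, the per-pixel gain can be derived from the reading of the reference white board and then applied to subsequent scan lines. The following is a minimal sketch in Python with NumPy; the function names, the target level, and the averaging over several reference lines are illustrative assumptions, not the actual firmware of the device.

```python
import numpy as np

def compute_gain(white_reading, target_level=255.0, eps=1e-6):
    """Compute a per-pixel gain from a reading of the reference white board.

    white_reading: 2-D array of raw sensor values captured while imaging the
    reference white board. The gain maps the measured white level to target_level.
    """
    # Average several scanned lines to suppress noise in the white reference.
    white_profile = white_reading.mean(axis=0)
    return target_level / np.maximum(white_profile, eps)

def apply_gain(raw_line, gain):
    """Apply the gain to one scanned line and clip to the output range."""
    return np.clip(raw_line * gain, 0, 255)
```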
The imaging unit 22 images visible and invisible wavelength ranges. In the imaging unit 22, pixels that convert incident light levels into electric signals are arranged. The pixels are arranged in a matrix, and the electric signals, each of which is obtained from a corresponding one of the pixels, are transferred to a signal processing unit 222, which is arranged in a subsequent stage.
Although an image reading device of a reduction optical system is applied as the image reading device 101 of the present embodiment, no limitation is intended thereby. Alternatively, an equal magnification optical system (contact optical system: contact image sensor (CIS) type) may be used, for example.
The imaging unit 22 is a sensor for a reduction optical system, such as a CMOS image sensor, for example. The imaging unit 22 includes a pixel unit 221 and the signal processing unit 222.
In the present embodiment, the imaging unit 22 is described as having a four-line configuration as an example, but the configuration is not limited to the four-line configuration. In addition, a circuit configuration in a subsequent stage of the pixel unit 221 is not limited to the illustrated configuration.
The pixel unit 221 has pixel groups corresponding to four lines, in each of which a plurality of pixel circuits each being configured as a pixel is arranged in a matrix. The signal processing unit 222 processes a signal output from the pixel unit 221 as appropriate and transfers the signal to the image processor 20 arranged in a subsequent stage.
The image processor 20 performs various types of image processing on image data according to a purpose of use.
The feature amount detection unit 201 detects a feature amount of the document 12, which is the subject, with respect to the visible image or the invisible image obtained by the imaging unit 22.
A description is given below of selecting a visible component as an extraction target for the feature amount.
The feature amount detection unit 201 compares, for each of the invisible light and the visible light, the difference in the spectral reflection characteristics between the background portion 13 and the document 12, which is the subject. The visible component selected as the extraction target for the feature amount includes the component having the largest difference in the spectral reflection characteristics from the invisible light among the plurality of components of the visible light. In general, a feature amount of the green (G) component, which has a wide wavelength range, is often used from the visible image. In an example such as the one described here, however, the blue (B) component, which differs most from the invisible light in the spectral reflection characteristics, is selected instead.
The feature amount detection unit 201 is not limited to extracting the visible feature amount from the B component alone; the visible feature amount may also partly include, for example, the component having the largest component value among the RGB components.
Further, when the document 12, which is the subject, has variations in the spectral reflection characteristics, the feature amount detection unit 201 may determine the visible component to be selected as the extraction target for the feature amount by measuring a representative sample of the document 12 or by averaging measurement results over the variations.
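As an illustration of this channel selection, one plausible reading is to choose the visible (RGB) component whose spectral reflectance differs most from the near-infrared (NIR) reflectance, averaged over the document and the background. The sketch below is a hypothetical example; the dictionary layout, the reflectance values, and the averaging rule are assumptions for illustration and are not prescribed by the embodiment.

```python
def select_visible_component(doc_reflectance, bg_reflectance):
    """Choose the RGB component to use as the extraction target.

    doc_reflectance / bg_reflectance: mean spectral reflectance per channel,
    e.g. {"R": 0.75, "G": 0.70, "B": 0.40, "NIR": 0.85} (illustrative values).
    The chosen component is the visible one whose reflectance differs most
    from the NIR reflectance, averaged over the document and the background.
    """
    visible = ("R", "G", "B")

    def difference_from_nir(channel):
        doc_diff = abs(doc_reflectance[channel] - doc_reflectance["NIR"])
        bg_diff = abs(bg_reflectance[channel] - bg_reflectance["NIR"])
        return (doc_diff + bg_diff) / 2.0

    return max(visible, key=difference_from_nir)
```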
A description is given below of a case where the background portion 13 is an invisible light low reflection portion.
As described above, the background portion 13 has the invisible light low reflection portion that diffusely reflects the visible light and reflects the invisible light at a lower reflectance than that of the visible light. Accordingly, there is a remarkable difference in the read value of the background between the visible image and the invisible image, which enables robust edge detection.
A description is given below of a case where the feature amount detection unit 201 extracts an edge of the document 12, which is the subject, as a feature amount.
Through the processing described above, the feature amount detection unit 201 extracts one or more edges of the document 12, which is the subject, as a feature amount, thereby detecting the region of the document 12.
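A minimal sketch of one such edge extraction is a per-line gradient threshold: scan each line and record the first position where the brightness changes sharply from the background to the document. This is only an illustrative example in Python with NumPy; the threshold value and function names are assumptions, and the feature amount detection unit 201 is not limited to this method.

```python
import numpy as np

def detect_edge_positions(image, threshold=30):
    """For each scanned line, return the first pixel position where the
    horizontal gradient exceeds the threshold (a candidate background-to-
    document edge), or -1 when no edge is found on that line."""
    grad = np.abs(np.diff(image.astype(np.int32), axis=1))
    positions = np.full(image.shape[0], -1, dtype=np.int32)
    for y, row in enumerate(grad):
        hits = np.flatnonzero(row > threshold)
        if hits.size:
            positions[y] = hits[0]
    return positions
```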
Next, a description is given of size detection of the document 12, which is the subject.
The size information detected in this way is usable for error detection or image correction processing, which is described later. Regarding the error detection, for example, in a case of scanning with a multifunction peripheral, when a size that is different from a document size set in advance by a user is detected, the user is notified to set a document of the correct size.
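For instance, the detected edge positions can be converted into a document size and compared with the size set in advance by the user. The sketch below assumes a known scan resolution and a hypothetical tolerance; the names and values are illustrative only.

```python
def estimate_document_size_mm(left_edges, right_edges, top_y, bottom_y, dpi=600):
    """Estimate document width and height in millimetres from detected edges.

    left_edges / right_edges: per-line x positions of the left and right
    document edges (-1 where no edge was found); top_y / bottom_y: first and
    last scan lines on which the document was detected.
    """
    valid_left = [x for x in left_edges if x >= 0]
    valid_right = [x for x in right_edges if x >= 0]
    width_px = max(valid_right) - min(valid_left)
    height_px = bottom_y - top_y
    px_to_mm = 25.4 / dpi
    return width_px * px_to_mm, height_px * px_to_mm

def check_document_size(detected_mm, expected_mm, tolerance_mm=5.0):
    """Notify the user when the detected size differs from the preset size."""
    if any(abs(d - e) > tolerance_mm for d, e in zip(detected_mm, expected_mm)):
        print("Detected size differs from the preset size: "
              "please set a document of the correct size.")
```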
As described above, according to the present embodiment, by detecting the feature amount of the document 12, which is the subject, and the background portion 13 from at least one of the visible image and the invisible image, information that is not obtainable from the visible image is obtained from the invisible image, which enables stable edge detection between the document and the background regardless of the type of the document.
In addition, the imaging unit 22 receives the visible light and the invisible light that are reflected from the document 12, which is the subject, and captures the visible image and the invisible image. This allows the imaging unit 22 to read an image with a simple configuration.
In addition, since the invisible light and the invisible image are the infrared light and the infrared image, respectively, the image is readable with a simple configuration.
A description is now given of a second embodiment.
The second embodiment is different from the first embodiment in that the feature amount is extracted from each of the visible image and the invisible image, and selection or combination among the feature amounts extracted from both of the visible image and the invisible image is automatically performed. In the following description of the second embodiment, the description of the same parts as in the first embodiment is omitted, and only the parts different from the first embodiment are described. In addition, in the description of the second embodiment, the elements, functions, processes, and steps that are the same or substantially the same as those described in the first embodiment are denoted by the same reference numerals or step numbers, and redundant descriptions thereof are omitted below.
As described above, the feature amount detection unit 201 detects the feature amount of the document 12, which is the subject, and the background portion 13 from at least one of the visible image and the invisible image obtained by the imaging unit 22.
The feature amount selecting/combining unit 202 selects one of, or combines, the feature amounts detected from the respective images, based on the feature amounts of the document 12, which is the subject, and the background portion 13 detected from the visible image and the invisible image by the feature amount detection unit 201.
More specifically, the feature amount selecting/combining unit 202 automatically performs the selection processing and the combination processing described below.
By performing the OR processing on the edges extracted from the invisible image and the visible image, a portion where the edge is not obtained in one of the images is complemented by the other image. For example, when the document has a black region and a white region along its outline, the edge of the black region may be extracted only from the visible image, while the edge of the white region may be extracted only from the invisible image.
Accordingly, the feature amount selecting/combining unit 202 combines the edge (edge portion) of the black region in the visible image and the edge (edge portion) of the white region in the invisible image to extract the edge of the entire document that is not obtained by one of the images alone.
In this way, the feature amount selecting/combining unit 202 performs the OR processing on the edges detected from the invisible image and the visible image to combine the edges. Accordingly, any portion of the edge that is detectable in either the visible image or the invisible image contributes to the result, so that the edge between the document 12, which is the subject, and the background portion 13 is detectable over most of its length; in other words, the number of portions where the edge is detectable increases.
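Conceptually, this OR processing is a per-pixel logical OR of the two binary edge maps, as in the short sketch below (an assumption that the edges are held as boolean arrays of the same shape; the embodiment does not prescribe this representation).

```python
import numpy as np

def combine_edges_or(edge_visible, edge_invisible):
    """Combine binary edge maps from the visible and invisible images.

    A pixel counts as an edge if it is an edge in either image, so a portion
    missed in one image is complemented by the other.
    """
    return np.logical_or(edge_visible, edge_invisible)
```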
Next, a description is given of giving priority to the edge detected from the invisible image.
In this way, in the edge selection processing (selection processing related to the edge), the feature amount selecting/combining unit 202 uses the edge detected from the invisible image in a case where that edge is detected normally, and uses the edge detected from the visible image in a case where the edge is not detected normally from the invisible image. Because the edge is more likely to be detected in the invisible image, giving it priority improves the detection accuracy.
Next, a description is given of a case where the edge is not detected normally in both the visible image and the invisible image.
As described above, the feature amount selecting/combining unit 202 performs the OR processing on the edge detected from the invisible image and the edge detected from the visible image in a case where neither the edge of the invisible image nor the edge of the visible image is detected normally. How the edge appears may differ between the visible image and the invisible image due to a shadow of the document. Accordingly, the OR processing is performed when the edge fails to be detected normally from each image alone.
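Putting these rules together, the overall policy can be sketched as follows. This is an illustrative assumption about how the feature amount selecting/combining unit 202 might be organized; `detected_normally` stands for whatever validity check is actually used (for example, whether enough edge points were found to regress a straight line).

```python
import numpy as np

def select_or_combine_edges(edge_invisible, edge_visible, detected_normally):
    """Prefer the edge from the invisible (NIR) image, fall back to the
    visible image, and OR the two together when neither is detected normally."""
    if detected_normally(edge_invisible):
        return edge_invisible
    if detected_normally(edge_visible):
        return edge_visible
    # Neither edge was detected normally: complement one image with the other.
    return np.logical_or(edge_visible, edge_invisible)
```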
As described above, according to the present embodiment, the feature amount of the document 12, which is the subject, and the background portion 13 is detected from at least one of the visible image and the invisible image, and the selection processing or the combination processing related to the feature amounts detected from the respective images is performed. As a result, the feature amount is automatically selected from one of the visible image and the invisible image, or the feature amount of the visible image and the feature amount of the invisible image are combined.
A description is now given of a third embodiment.
The third embodiment is different from the first embodiment and the second embodiment in including an image correction unit that corrects an image of a subject. In the following description of the third embodiment, the description of the same parts as in the first and second embodiments will be omitted, and those different from the first and second embodiments will be described. In addition, in the description of the third embodiment, the elements, functions, processes, and steps that are the same or substantially the same as those described in the first and second embodiments are denoted by the same reference numerals or step numbers, and redundant descriptions thereof are omitted below.
Regarding the correction of the inclination of the document 12, which is the subject, the image correction unit 203 obtains the inclination by regressing, with a straight line, the edge point group extracted from each side of the document 12 as described above, and then rotates the entire image based on the obtained inclination.
Regarding the correction of the position of the document 12, which is the subject, the image correction unit 203 obtains the position (point) of the intersection of the regression lines of the edge point groups of the upper side and the left side of the document 12, and then moves the image so that the intersection point coincides with the origin.
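These two corrections can be illustrated as fitting least-squares lines to the top and left edge point groups, deriving the skew angle from the slope of the top line, and taking the intersection of the two lines as the corner that is moved to the origin. The sketch below is a hedged example; fitting the left edge as x = c*y + d avoids the ill-conditioning of a nearly vertical line, and the actual rotation and translation of the image are left to whichever image library is used.

```python
import numpy as np

def fit_top_edge(points):
    """Least-squares fit y = a*x + b to the top-edge points given as (x, y)."""
    xs, ys = zip(*points)
    return np.polyfit(xs, ys, 1)            # returns (a, b)

def fit_left_edge(points):
    """Fit x = c*y + d for the (nearly vertical) left-edge points."""
    xs, ys = zip(*points)
    return np.polyfit(ys, xs, 1)            # returns (c, d)

def skew_angle_deg(top_coeffs):
    """Inclination of the document, from the slope of the top regression line."""
    a, _ = top_coeffs
    return float(np.degrees(np.arctan(a)))

def document_corner(top_coeffs, left_coeffs):
    """Intersection of the two regression lines; this point is moved to the origin."""
    a, b = top_coeffs
    c, d = left_coeffs
    y = (a * d + b) / (1.0 - a * c)          # assumes the lines are not parallel
    return c * y + d, y
```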
As described above, by correcting the inclination and the position of the document 12, which is the subject, based on the result of the edge detection, the document 12 is corrected to be easily viewable. In addition, accuracy of optical character recognition (OCR) and the like may improve.
In this case, for example, when the rightmost edge point is identified, the image correction unit 203 may regard the region to the right of that edge point as being outside the document 12, which is the subject, and delete the region from the image.
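A hedged sketch of this cut-out step: once the outermost edge points are known, everything outside their bounding box is treated as background and removed. The NumPy-style slicing and parameter names below are illustrative assumptions.

```python
def crop_to_document(image, left, right, top, bottom):
    """Cut out the document region; the area outside the detected edge points
    (for example, to the right of the rightmost edge point) is discarded
    as background."""
    return image[top:bottom + 1, left:right + 1]
```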
The image correction unit 203 deletes the unnecessary region of the background portion 13 by cutting out the region of the document 12, which is the subject. As a result, in the image forming apparatus 100 such as a multifunction peripheral, effects are obtained such as reducing the work performed by a user to input the size of the document 12, improving the appearance of the image, saving the storage area of the image storage destination, and, in copying the image, reducing the size of the recording paper and the consumption of ink and toner.
As described above, by automatically detecting and cutting out the size of the document 12, which is the subject, from the image based on the edge, the work of the user to input the document size is reduced (in particular, in a case of an irregular size). In addition, effects such as improving the appearance of the image, saving the storage area of the image storage destination, and, in copying the image, reducing the size of the recording paper and the consumption of ink and toner are obtained.
As described above, according to the present embodiment, the image correction unit 203 corrects at least one of the visible image and the invisible image, resulting in improving the visibility of the image.
Note that in the embodiments described above, the image processing apparatus is applied to a multifunction peripheral (MFP) having at least two of copying, printing, scanning, and facsimile functions. Alternatively, the image processing apparatus may be applied to any image forming apparatus such as, for example, a copier, a printer, a scanner, or a facsimile machine.
In each of the above embodiments, the image reading device 101 of the image forming apparatus 100 is applied as the image processing apparatus, but the present disclosure is not limited to this. The image processing apparatus may be any apparatus or device that is able to acquire a reading level, even without reading it as an image, such as a line sensor of an equal magnification optical system (contact optical system: contact image sensor (CIS) type).
Further, the image processing apparatus is applicable to, for example, a bill conveying device or a white line detection device of an automatic guided vehicle.
A subject read by the bill conveying device is a bill being conveyed.
A subject read by the white line detection device of the automatic guided vehicle is a white line on a road surface.
According to a conventional art, although an infrared light low reflection portion is provided as a background, and edge detection of detecting an edge between a document and a background is performed based on an acquired infrared image, there is an issue that the edge detection is not successfully performed depending on a color of the document.
In view of the above-described issue, an object of one or more embodiments of the present disclosure is to achieve stable edge detection of detecting an edge between a document and a background regardless of a type of the document.
According to one or more embodiments of the present disclosure, there is an effect that stable edge detection of detecting an edge between a document and a background is able to be performed regardless of a type of the document.
Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above.
Each of the functions of the described embodiments may be implemented by one or more processing circuits or circuitry. Here, the “processing circuit or circuitry” in the present disclosure includes a programmed processor to execute each function by software, such as a processor implemented by an electronic circuit, and devices, such as an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), and conventional circuit modules arranged to perform the recited functions.
Although the embodiments of the disclosure have been described and illustrated above, such description is not intended to limit the disclosure to the illustrated embodiments. Numerous additional modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the embodiments may be practiced otherwise than as specifically described herein. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.
Number | Date | Country | Kind |
---|---|---|---|
2020-038264 | Mar 2020 | JP | national |