This application is based upon and claims the benefit of priority from the corresponding Japanese Patent Application No. 2013-091371 filed on Apr. 24, 2013, the entire contents of which are incorporated herein by reference.
The present disclosure relates to an image processing apparatus and an image forming apparatus that can perform image processing on an edit area arbitrarily specified in a document image.
Image processing apparatuses such as scanners, copying machines, and multifunction peripherals generally have a function to perform image processing, such as magnification, on an image read from a document. In particular, a technique is known by which an edit area is defined based on a handwritten image included in an image read from a document, and an image in the edit area is magnified.
An image processing apparatus according to an aspect of the present disclosure includes a retrieval control portion, an area defining portion, a determination portion, an area enlarging portion, and an image processing portion. The retrieval control portion retrieves a document image stored in a storage portion. The area defining portion defines an edit area based on an area specifying image included in a read image read from a document by an image reading portion or on an area specifying image inputted through a drawing operation on the document image displayed on a display input portion. The area defining portion defines one or more edit areas. The determination portion determines whether or not an element image is present on a profile line of the edit area in the document image retrieved by the retrieval control portion. The area enlarging portion enlarges the edit area determined by the determination portion as having the element image on the profile line to a range in which the element image is no longer present on the profile line. The image processing portion performs predetermined image processing on an image in the edit area enlarged by the area enlarging portion out of the document image.
An image forming apparatus according to another aspect of the present disclosure includes a retrieval control portion, an area defining portion, a determination portion, an area enlarging portion, an image processing portion, and an image forming portion. The image forming portion forms an image on a sheet based on an image after image processing by the image processing portion.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description with reference where appropriate to the accompanying drawings. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
[Schematic Configuration of Multifunction Peripheral 10]
First, a schematic configuration of the multifunction peripheral 10 according to an embodiment of the present disclosure will be described with reference to
As shown in
As shown in
The image reading portion 2 includes a document table 21, a light source unit 22, mirrors 23 and 24, an optical lens 25, a CCD (Charge Coupled Device) 26, and so on. The document table 21 is a portion which is provided on an upper surface of the image reading portion 2 and on which the document P is placed. The light source unit 22 includes an LED light source 221 and a mirror 222, and can be moved in a secondary scanning direction 71 by a motor, not shown. The LED light source 221 includes a plurality of white LEDs arranged along a primary scanning direction 72. The mirror 222 reflects, toward the mirror 23, light emitted by the LED light source 221 and reflected from a surface of the document P at the reading position 20 on the document table 21. The light reflected from the mirror 222 is then guided to the optical lens 25 by the mirrors 23 and 24. The optical lens 25 converges the incident light and causes the converged light to enter the CCD 26. The CCD 26 has a photoelectric conversion element or the like that inputs, to the control portion 5, electrical signals corresponding to the amount of light received from the optical lens 25 as image data of the document P.
The image forming portion 3 is an electrophotographic image forming portion that performs image forming processing (printing processing) based on the image data read by the image reading portion 2 or image data inputted from an external information processing apparatus such as a personal computer. Specifically, as shown in
First, the photosensitive drum 31 is uniformly charged at a predetermined potential by the charging device 32. Next, light based on the image data is applied to a surface of the photosensitive drum 31 by the exposure device 33. Thereby, an electrostatic latent image corresponding to the image data is formed on the surface of the photosensitive drum 31. The electrostatic latent image on the photosensitive drum 31 is then developed (made visible) as a toner image by the developing device 34. A toner (developer) is supplied to the developing device 34 from a toner container 34A that is attachable to and detachable from the image forming portion 3. Subsequently, the toner image formed on the photosensitive drum 31 is transferred to a paper sheet by the transfer roller 35. Thereafter, the toner image transferred onto the paper sheet is heated, melted, and fixed by the fixing roller 37 while the paper sheet is passing between the fixing roller 37 and the pressurizing roller 38. The potential remaining on the photosensitive drum 31 is removed by the destaticizing device 36.
The control portion 5 has control devices such as a CPU, a ROM, a RAM, and an EEPROM. The CPU is a processor that performs various types of arithmetic processing. The ROM is a nonvolatile storage portion in which information such as control programs to cause the CPU to perform various types of processing is prestored. The RAM is a volatile storage portion, and the EEPROM is a nonvolatile storage portion. The RAM and the EEPROM are used as temporary storage memories (work spaces) for various types of processing to be performed by the CPU.
The control portion 5 performs overall control of the multifunction peripheral 10 by using the CPU to execute the various control programs prestored in the ROM. The control portion 5 may be formed of an electronic circuit such as an integrated circuit (e.g., an ASIC or a DSP). The control portion 5 may be a control portion provided separately from a main control portion that performs overall control of the multifunction peripheral 10.
Furthermore, in the ROM or the EEPROM of the control portion 5, an image editing program to cause the CPU of the control portion 5 to perform the image editing processing described later with reference to flowcharts is prestored.
The storage portion 6 is a nonvolatile storage portion such as a hard disk or an SSD in which image data read by the image reading portion 2 or image data inputted from an external information processing apparatus such as a personal computer is stored. The storage portion 6 may be provided outside the multifunction peripheral 10 as long as the control portion 5 can retrieve the image data from the storage portion 6.
[Image Editing Processing]
Hereinafter, an example of a set of procedures of the image editing processing to be performed by the control portion 5 will be described with reference to
The present embodiment assumes that image data of a document image F1 shown in
<Step S1>
First, in step S1, the control portion 5 causes the operation display portion 7 to display a selection screen for selecting a document image to be subjected to the present editing from among one or more document images stored in the storage portion 6.
<Step S2>
In step S2, the control portion 5 waits for the selection of the document image on the selection screen displayed in step S1 (No in S2). The control portion 5 shifts the processing to step S3 once the document image has been selected (Yes in S2). Here, the document image F1 (see
<Step S3>
Next, in step S3, the control portion 5 waits for an operation requesting initiation of the image reading processing on the operation display portion 7 (No in S3). The control portion 5 shifts the processing to step S4 once the operation requesting initiation of the image reading processing has been performed (Yes in S3).
<Step S4>
In step S4, the control portion 5 performs the image reading processing in which the image reading portion 2 reads an image from a document set on the ADF 1 or on the document table 21. Hereinafter, the image read in step S4 is referred to as the read image. Here, the image reading portion 2 reads the read image F2 shown in
<Step S5>
In step S5, the control portion 5 retrieves the document image from the storage portion 6, compares the read image with the document image, and extracts one or more difference images as one or more area specifying images. The control portion 5 that retrieves the document image is an example of a retrieval control portion. Here, the area specifying image F3 shown in
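For reference, the difference extraction in step S5 can be sketched in Python as follows. This is a minimal sketch, not the claimed configuration; the function name, the use of NumPy arrays as grayscale images, and the threshold value are illustrative assumptions, and alignment (registration) of the two images is omitted.

```python
import numpy as np

def extract_area_specifying_images(document_img, read_img, diff_threshold=32):
    """Illustrative sketch of step S5: extract the difference between the
    stored document image and the read image as an area specifying image.
    Both inputs are assumed to be aligned 8-bit grayscale arrays."""
    diff = np.abs(read_img.astype(np.int16) - document_img.astype(np.int16))
    # Pixels that differ strongly are treated as part of the handwritten
    # (area specifying) image; everything else is ignored.
    mask = (diff >= diff_threshold).astype(np.uint8)
    return mask  # nonzero pixels form the area specifying image(s)
```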
<Step S6>
In step S6, the control portion 5 defines one or more edit areas based on the one or more area specifying images extracted in step S5. Here, the control portion 5 performing such processing is an example of an area defining portion.
Specifically, the control portion 5 defines, as each of the one or more edit areas, a rectangular area framed by opposite end positions of each of the one or more area specifying images in a horizontal direction (left-right direction) and opposite end positions in a vertical direction (up-down direction). That is, the control portion 5 defines an area of a rectangle circumscribing each of the one or more area specifying images as each of the one or more edit areas.
Here, as shown in
The shape of the edit area is not limited to a rectangle. For example, when the area specifying image has a shape similar to a polygon such as a triangle, or to a circle (including an oval), the control portion 5 may define, as the edit area, an area of a polygon such as a triangle, or of a circle, that circumscribes the area specifying image. Alternatively, the area specifying image may be defined as the edit area as is. When the area specifying image has a circular shape, the enlargement of the edit area described later is performed concentrically, for example.
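The rectangle circumscribing an area specifying image, as defined in step S6, can be sketched as follows. The sketch assumes a single area specifying image given as a nonzero mask (the output of the hypothetical helper above); handling a plurality of area specifying images would additionally require connected-component labeling, which is omitted here.

```python
import numpy as np

def define_edit_area(area_specifying_mask):
    """Illustrative sketch of step S6: define the edit area as the
    rectangle circumscribing the area specifying image (nonzero mask)."""
    rows = np.any(area_specifying_mask, axis=1)
    cols = np.any(area_specifying_mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    # (top, left) and (bottom, right) frame the rectangular edit area.
    return top, bottom, left, right
```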
<Step S7>
Subsequently, in step S7, the control portion 5 calculates the sum of pixel values on the profile line of the edit area in the document image based on the document image and position coordinates of the edit area. When there are a plurality of edit areas, the processing in step S7 is performed for each of the edit areas.
Specifically, the control portion 5 calculates the sum of pixel values for each side forming the profile line of the edit area. The pixel values represent the density of each pixel in the document image. For example, a pixel value “0” represents “white” and a pixel value “255” represents “black”. In another embodiment, the control portion 5 may calculate the average of pixel values for each side forming the profile line of the edit area in the document image. Alternatively, the control portion 5 may calculate the sum or the average of pixel values on the entire profile line of the edit area.
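The per-side sums of step S7 may be sketched as follows, under the pixel-value convention given above (0 for white, 255 for black); the function name and the coordinate representation of the edit area are assumptions.

```python
def profile_line_sums(document_img, top, bottom, left, right):
    """Illustrative sketch of step S7: sum of pixel values on each side
    of the profile line of the edit area (0 = white, 255 = black)."""
    return {
        "top":    int(document_img[top, left:right + 1].sum()),
        "bottom": int(document_img[bottom, left:right + 1].sum()),
        "left":   int(document_img[top:bottom + 1, left].sum()),
        "right":  int(document_img[top:bottom + 1, right].sum()),
    }
```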
Incidentally, when an element image such as a drawing or a letter (letters) is present on the profile line of the handwritten image included in the read image, the edit area defined based on the handwritten image will include the element image such as a drawing or a letter (letters) broken by the profile line. To address this problem, the procedure described below is performed in the multifunction peripheral 10 according to the present embodiment to define, as the edit area, an area in which the element image such as a drawing or a letter (letters) is not broken.
<Step S8>
Next, in step S8, the control portion 5 determines whether or not an element image such as a drawing or a letter (letters) is present on the profile line of the edit area in the document image. Here, the control portion 5 performing such processing is an example of a determination portion. Specifically, the control portion 5 determines for each side of the profile line of the edit area whether or not the sum of pixel values calculated in step S7 is not less than a predetermined threshold. The threshold is a value preliminarily determined as an index for determining whether or not an element image such as a drawing or a letter (letters) is present on the profile line of the edit area.
Thus, the control portion 5 can determine whether or not an element image such as a drawing or a letter (letters) is present on the profile line of the edit area according to a result of a comparison of the sum of the pixel values with the threshold. When there are a plurality of edit areas, the processing in step S8 is performed for each of the edit areas. When a pixel value “0” represents “black” and a pixel value “255” represents “white”, it is determined in step S8 whether or not the sum of the pixel values is not greater than a threshold preliminarily determined as an index for determining whether or not the element image is present on the profile line of the edit area.
The control portion 5 shifts the processing to step S9 once it determines that the sum of the pixel values of at least one side of the profile line of the edit area is not less than the threshold and therefore the element image is present on the profile line of the edit area (Yes in S8). Here, as shown by a dotted line in
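As an illustration of the determination in step S8, each side may be judged by a simple comparison of its sum with the threshold; the default threshold value below is arbitrary, and the inverted comparison for the alternative convention (0 for black) is noted in the comment.

```python
def sides_with_element_image(side_sums, threshold=1000):
    """Illustrative sketch of step S8: a side whose pixel-value sum is not
    less than the threshold is judged to cross an element image (with
    0 = white, 255 = black). If 0 represented black instead, the test
    would be 'total <= threshold'."""
    return [side for side, total in side_sums.items() if total >= threshold]
```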
<Step S9>
In step S9, the control portion 5 enlarges each of the one or more edit areas determined in step S8 as having the element image on its profile line. Specifically, the control portion 5 shifts the side of the profile line of the edit area determined in step S8 as having a sum of pixel values not less than the threshold by a predetermined number of pixels in a direction in which the edit area is enlarged. The method for enlarging the edit area is not limited thereto. In another embodiment, the edit area as a whole may be enlarged while maintaining its aspect ratio (ratio of length to breadth).
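The side-shifting enlargement of step S9 may be sketched as below; the step size of eight pixels is an assumed value, and the shift is clipped to the bounds of the document image.

```python
def enlarge_edit_area(top, bottom, left, right, flagged_sides,
                      img_height, img_width, step=8):
    """Illustrative sketch of step S9: shift each side on which an element
    image was detected outward by 'step' pixels, within the document image."""
    if "top" in flagged_sides:
        top = max(0, top - step)
    if "bottom" in flagged_sides:
        bottom = min(img_height - 1, bottom + step)
    if "left" in flagged_sides:
        left = max(0, left - step)
    if "right" in flagged_sides:
        right = min(img_width - 1, right + step)
    return top, bottom, left, right
```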
<Step S10>
Subsequently, in step S10, the control portion 5 determines whether or not the size of the edit area enlarged in step S9 has reached a predetermined maximum size. Specifically, the control portion 5 determines that the size of the edit area has reached the maximum size when a vertical dimension of the edit area has reached a vertical dimension of the document image or a length shorter than the vertical dimension of the document image by a predetermined amount. The control portion 5 also determines that the size of the edit area has reached the maximum size when a horizontal dimension of the edit area has reached a horizontal dimension of the document image or a length shorter than the horizontal dimension of the document image by a predetermined amount. It should be noted that the maximum size is not limited thereto, and the control portion 5 may determine that the size of the edit area has reached the maximum size when the edit area has been enlarged from an initial size of the edit area at a predetermined magnification or a higher magnification. Thus, the edit area is prevented from being enlarged more than necessary.
The control portion 5 shifts the processing to step S11 once it determines that the size of the edit area has reached the maximum size (Yes in S10). On the other hand, the control portion 5 shifts the processing to step S7 once it determines that the size of the edit area has not reached the maximum size (No in S10). Thus, in steps S7 to S10, the control portion 5 enlarges each of the one or more edit areas determined in step S8 as having the element image on its profile line to a range in which the element image is no longer present on the profile line. However, the enlargement of the edit area is performed within a range of the predetermined maximum size. Here, the control portion 5 performing such processing is an example of an area enlarging portion.
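Putting steps S7 to S10 together, the enlargement loop may be sketched as follows. It relies on the illustrative helpers sketched above, and the margin used for the maximum-size determination is an assumed parameter rather than a value taken from the description.

```python
def grow_until_clear(document_img, top, bottom, left, right,
                     threshold=1000, step=8, margin=16):
    """Illustrative sketch of steps S7 to S10: repeat the per-side sum,
    the threshold determination, and the enlargement until no element
    image remains on the profile line or the maximum size is reached."""
    h, w = document_img.shape[:2]
    while True:
        flagged = sides_with_element_image(
            profile_line_sums(document_img, top, bottom, left, right),
            threshold)
        if not flagged:
            break  # corresponds to "No" in S8: the profile line is clear
        top, bottom, left, right = enlarge_edit_area(   # S9
            top, bottom, left, right, flagged, h, w, step)
        if (bottom - top) >= h - margin or (right - left) >= w - margin:
            break  # corresponds to "Yes" in S10: maximum size reached
    return top, bottom, left, right
```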
<Step S11>
The processing is shifted to step S11 when it is determined that the element image is not present on the profile line of the edit area or when it is determined that the edit area has reached the maximum size. Subsequently, in step S11, the control portion 5 performs predetermined image processing on an image in the edit area enlarged in steps S7 to S10 out of the document image. Here, the control portion 5 performing such processing is an example of an image processing portion. Specifically, the control portion 5 extracts the image in the edit area out of the document image and performs magnifying processing to magnify the image at a predetermined magnification. When it is determined that the edit area has reached the maximum size, the control portion 5 performs, in step S11, the magnifying processing on the image in the edit area after the final enlargement or on the image in the initial edit area, for example.
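The magnifying processing of step S11 may be sketched as cropping the (enlarged) edit area out of the document image and resizing it; the use of OpenCV's resize function and the magnification of 2.0 are assumptions made only for this sketch.

```python
import cv2  # assumed here only for resizing; any scaler would do

def magnify_edit_area(document_img, top, bottom, left, right, scale=2.0):
    """Illustrative sketch of step S11: extract the image in the edit area
    and magnify it at a predetermined magnification."""
    cropped = document_img[top:bottom + 1, left:right + 1]
    return cv2.resize(cropped, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)
```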
<Step S12>
Subsequently, in step S12, the control portion 5 outputs the image in the edit area after the image processing in step S11. Specifically, the control portion 5 can output data of the image in the edit area to the image forming portion 3 and print the image on a sheet. In addition, the control portion 5 is capable of storing the image in the edit area after the image processing in step S11 in the storage portion 6 as data of an image different from the document image and the read image or transmitting the image to an information processing apparatus such as a personal computer.
As described above, when an element image such as a drawing or a letter (letters) is present on the profile line of the edit area defined based on the area specifying image in the multifunction peripheral 10, the edit area is enlarged to a size in which the profile line of the edit area no longer overlaps with the element image. Accordingly, the multifunction peripheral 10 can provide an output image in which an element image such as a drawing or a letter (letters) is not broken.
The image processing to be performed in step S11 is not limited to the magnifying processing. Other examples of the image processing may include cropping processing to crop the image in the edit area and output the image as cropped, minifying processing to minify the image in the edit area and output the image, and color changing processing to change the color of the image in the edit area.
In the embodiments given above, the case has been described as an example where the control portion 5 selects the document image stored in the storage portion 6 according to an operation on the operation display portion 7 by a user. In another embodiment, the control portion 5 may automatically select the document image corresponding to the read image read by the image reading portion 2 out of the image data stored in the storage portion 6 based on the read image. For example, a document to be read by the image reading portion 2 may include identification information such as a digital watermark or a bar code showing the correspondence with the document image, and the control portion 5 may specify the document image based on the identification information. In this case, furthermore, the control portion 5 may have a function of adding the identification information to the image data stored in the storage portion 6 and printing the same. In the case where the identification information is included in the document, the control portion 5 compares the read image with the document image after eliminating the identification information from the read image.
In the embodiments given above, the configuration has been described in which a difference image of the document image stored in the storage portion 6 and the read image read by the image reading portion 2 is extracted as the area specifying image, and the edit area is defined based on the area specifying image. Hereinafter, another configuration will be described in which the control portion 5 in the multifunction peripheral 10 defines the edit area based on an area specifying image given through a drawing operation on the operation display portion 7 by a user.
Hereinafter, another example of the set of procedures of the image editing processing to be performed by the control portion 5 will be described with reference to
<Step S21>
First, once the document image has been selected in step S2 (Yes in S2), the control portion 5 causes the operation display portion 7 to display the document image in the following step S21. Thereby, a user is allowed to input an area specifying image through a drawing operation with a finger or a stylus on the document image displayed on the operation display portion 7. Once the area specifying image has been inputted through the drawing operation, the control portion 5 causes the operation display portion 7 to display the area specifying image.
<Step S22>
In step S22, the control portion 5 waits for the operation of drawing the area specifying image on the operation display portion 7 (No in S22). For example, the control portion 5 determines that the drawing operation has been performed when the operation of drawing the area specifying image is performed on the operation display portion 7 or when an operation of confirming the drawing operation is entered after the drawing operation. The control portion 5 shifts the processing to step S23 once it determines that the operation of drawing the area specifying image has been performed on the operation display portion 7 (Yes in S22).
<Step S23>
In step S23, the control portion 5 acquires, from the operation display portion 7, position coordinates of the area specifying image which has been inputted through the drawing operation on the document image displayed on the operation display portion 7 and which has been displayed on the operation display portion 7.
<Step S24>
In step S24, the control portion 5 defines one or more edit areas based on one or more area specifying images acquired in step S23. Here, the control portion 5 performing such processing is an example of an area defining portion. Specifically, the control portion 5 defines, as each of the one or more edit areas, an area framed by opposite ends in the horizontal direction (left-right direction) and opposite ends in the vertical direction (up-down direction) of each of the one or more area specifying images in the document image displayed on the operation display portion 7. That is, the control portion 5 defines an area of a rectangle circumscribing each of the one or more area specifying images as each of the one or more edit areas.
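For the drawing-operation variant, the edit area of step S24 may be sketched as the rectangle circumscribing the position coordinates acquired in step S23; the coordinate format (a list of (x, y) points for one drawn area specifying image) is an assumption.

```python
def edit_area_from_stroke(points):
    """Illustrative sketch of step S24: define the edit area as the
    rectangle circumscribing the drawn area specifying image, given as a
    list of (x, y) position coordinates from the operation display portion."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    return top, bottom, left, right
```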
In step S7 and the following steps, the control portion 5 enlarges each of the one or more edit areas defined in step S24 to a position where the element image is no longer present on the profile line of each of the one or more edit areas, and magnifies and outputs an image in each of the one or more edit areas (S7 to S12). Such a configuration allows the multifunction peripheral 10 to perform image processing on the one or more edit areas by inputting the one or more area specifying images through the drawing operation on the operation display portion 7 without printing and outputting the document image.
It is to be understood that the embodiments herein are illustrative and not restrictive, since the scope of the invention is defined by the appended claims rather than by the description preceding them, and all changes that fall within metes and bounds of the claims, or equivalence of such metes and bounds thereof are therefore intended to be embraced by the claims.