The present invention relates to an image reading apparatus such as an image scanner that reads an image and an image forming apparatus having a copying function for forming the image read by the image reading apparatus on an image forming medium.
There is an image reading apparatus including sensors having different resolutions. The image reading apparatus reads an image of an original document as plural image data having different resolutions. In general, when a color sensor and a monochrome (luminance) sensor are compared, the monochrome (luminance) sensor has higher sensitivity. This is because, whereas the color sensor detects light through an optical filter that transmits only light in a wavelength range corresponding to a desired color, the monochrome (luminance) sensor detects light in a wavelength range wider than that of the color sensor. Therefore, the monochrome (luminance) sensor obtains a signal of a level equivalent to that of the color sensor even if a physical size thereof is smaller than that of the color sensor. In an image reading apparatus including both the color sensor and the monochrome (luminance) sensor, the resolution of the monochrome (luminance) sensor is higher than the resolution of the color sensor because of the difference in sensitivity of the sensors explained above.
As image processing used in the image reading apparatus including the sensors having different resolutions, there is processing for increasing the resolution of image data having low resolution using image data having high resolution. For example, JP-A-2007-73046 discloses a method of increasing the resolution of color image data. However, in the technology disclosed in JP-A-2007-73046, when the resolution of color signals is increased, the color signals change in a fixed direction and chroma falls.
It is an object of an aspect of the present invention to provide an image reading apparatus and an image forming apparatus that improve the quality of second image data read by a second sensor using first image data read by a first sensor.
According to an aspect of the present invention, there is provided an image reading apparatus including: a first photoelectric conversion unit that converts an image of an original document into an electric signal at first resolution; a second photoelectric conversion unit that converts the image of the original document into an electric signal at second resolution higher than the first resolution; and an image-quality improving unit that is input with first image data obtained by reading the image of the original document at the first resolution with the first photoelectric conversion unit and second image data obtained by reading the image of the original document at the second resolution with the second photoelectric conversion unit, outputs, if a correlation between the first image data and the second image data is positive correlation, third image data obtained by converting the first resolution of the first image data into the second resolution and having positive correlation as a correlation with the first image data, and outputs, if the correlation between the first image data and the second image data is negative correlation, third image data obtained by converting the first resolution of the first image data into the second resolution and having negative correlation as a correlation with the first image data.
According to another aspect of the present invention, there is provided an image reading apparatus including: a first photoelectric conversion unit that has sensitivity to a first wavelength range; a second photoelectric conversion unit that has sensitivity to a wavelength range including the first wavelength range and wider than the first wavelength range; and an image-quality improving unit that is input with first image data obtained by reading an image of an original document with the first photoelectric conversion unit and second image data obtained by reading the image of the original document with the second photoelectric conversion unit, outputs, if a correlation between the first image data and the second image data is positive correlation, third image data having positive correlation as a correlation with the first image data, and outputs, if the correlation between the first image data and the second image data is negative correlation, third image data having negative correlation as a correlation with the first image data.
According to still another aspect of the present invention, there is provided an image forming apparatus including: a first photoelectric conversion unit that converts an image of an original document into an electric signal at first resolution; a second photoelectric conversion unit that converts the image of the original document into an electric signal at second resolution higher than the first resolution; an image-quality improving unit that is input with first image data obtained by reading the image of the original document at the first resolution with the first photoelectric conversion unit and second image data obtained by reading the image of the original document at the second resolution with the second photoelectric conversion unit, outputs, if a correlation between the first image data and the second image data is positive correlation, third image data obtained by converting the first resolution of the first image data into the second resolution and having positive correlation as a correlation with the first image data, and outputs, if the correlation between the first image data and the second image data is negative correlation, third image data obtained by converting the first resolution of the first image data into the second resolution and having negative correlation as a correlation with the first image data; and an image forming unit that forms third image data generated by the image-quality improving unit on an image forming medium.
Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention, and together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the invention.
An embodiment of the present invention is explained below in detail with reference to the accompanying drawings.
The digital multi function peripheral 1 shown in
The digital multi function peripheral 1 includes various external interfaces for inputting and outputting image data. For example, the digital multi function peripheral 1 includes a facsimile interface for transmitting and receiving facsimile data and a network interface for performing network communication. With such a configuration, the digital multi function peripheral 1 functions as a copy machine, a scanner, a printer, a facsimile, and a network communication machine.
A configuration of the image reading unit 2 is explained.
The image reading unit 2 includes, as shown in
The ADF 4 is provided above the image reading unit 2. The ADF 4 includes a document placing unit that holds plural original documents. The ADF 4 conveys the original documents set in the document placing unit one by one. The ADF 4 conveys each original document at a fixed conveying speed to allow the image reading unit 2 to read an image formed on the surface of the original document.
The document table glass 10 is glass that holds an original document. Reflected light from the surface of the original document held on the document table glass 10 is transmitted through the glass. The ADF 4 covers the entire document table glass 10. The ADF 4 presses the original document on the document table glass 10 against the glass surface and fixes the original document in place. The ADF 4 also functions as a background for the original document on the document table glass 10.
The light source 11 exposes the surface of the original document placed on the document table glass 10. The light source 11 is, for example, a fluorescent lamp, a xenon lamp, or a halogen lamp. The reflector 12 is a member that adjusts a distribution of light from the light source 11. The first mirror 13 leads light from the surface of the original document to the second mirror 16. The first carriage 14 is mounted with the light source 11, the reflector 12, and the first mirror 13. The first carriage 14 moves at speed (V) in a sub-scanning direction with respect to the surface of the original document on the document table glass 10 with driving force given from a not-shown driving unit.
The second mirror 16 and the third mirror 17 lead the light from the first mirror 13 to the condensing lens 20. The second carriage 18 is mounted with the second mirror 16 and the third mirror 17. The second carriage 18 moves in the sub-scanning direction at half speed (V/2) of the speed (V) of the first carriage 14. In order to keep the distance from the reading position on the surface of the original document to the light receiving surface of the photoelectric conversion unit 21 at a fixed optical path length, the second carriage 18 follows the first carriage 14 at half the speed of the first carriage 14.
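The reason the half-speed motion keeps the optical path length fixed can be sketched numerically. In this illustrative model (the speed, lens position, and fixed offsets are hypothetical values, not taken from the specification), the folded path contains the carriage-to-carriage segment once and the carriage-2-to-lens segment once, so its variable part is x1 − 2·x2, which vanishes when the second carriage moves at exactly half speed:

```python
# Illustrative model of the full-rate / half-rate carriage geometry.
# x1: position of the first carriage (reading point and first mirror),
# x2: position of the second carriage (second and third mirrors).
# The folded path runs mirror1 -> mirror2 (length x1 - x2) and then
# mirror3 -> lens (length X_LENS - x2), so its variable part is x1 - 2*x2.

V = 100.0        # first-carriage speed in mm/s (hypothetical)
X_LENS = 500.0   # fixed lens position in mm (hypothetical)
FIXED = 80.0     # sum of the fixed vertical segments in mm (hypothetical)

def path_length(t):
    x1 = V * t            # first carriage moves at speed V
    x2 = (V / 2.0) * t    # second carriage follows at V/2
    return FIXED + (x1 - x2) + (X_LENS - x2)

# The path length stays constant while the carriages scan.
lengths = [path_length(t) for t in (0.0, 0.5, 1.0, 2.0)]
print(lengths)
```

With any other speed ratio the variable term x1 − 2·x2 would grow during the scan and defocus the image on the light receiving surface.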
The light from the surface of the original document is made incident on the condensing lens 20 via the first, second, and third mirrors 13, 16, and 17. The condensing lens 20 leads the incident light to the photoelectric conversion unit 21 that converts the light into an electric signal. The reflected light from the surface of the original document is transmitted through the glass of the document table glass 10, sequentially reflected by the first mirror 13, the second mirror 16, and the third mirror 17, and focused on the light receiving surface of the photoelectric conversion unit 21 via the condensing lens 20.
The photoelectric conversion unit 21 includes plural line sensors. The line sensors of the photoelectric conversion unit 21 have a configuration in which plural photoelectric conversion elements that convert light into an electric signal are arranged in a main scanning direction. The line sensors are arranged side by side in parallel such that the line sensors are arranged at specified intervals in the sub-scanning direction.
In this embodiment, the photoelectric conversion unit 21 includes four line CCD sensors. As explained later, the four line CCD sensors as the photoelectric conversion unit 21 include one monochrome line sensor 61K and three color line sensors 61R, 61G, and 61B. The monochrome line sensor 61K reads black image data. The three color line sensors 61R, 61G, and 61B read color image data of three colors, respectively. When a color image is read with three colors of R (red), G (green), and B (blue), color line sensors include the red line sensor 61R that reads a red image, the green line sensor 61G that reads a green image, and the blue line sensor 61B that reads a blue image.
The CCD board 22 is mounted with a sensor driving circuit (not shown in the figure) for driving the photoelectric conversion unit 21. The CCD control board 23 controls the CCD board 22 and the photoelectric conversion unit 21. The CCD control board 23 includes a control circuit (not shown in the figure) that controls the CCD board 22 and the photoelectric conversion unit 21 and an image processing circuit (not shown in the figure) that processes an image signal from the photoelectric conversion unit 21.
A configuration of the image forming unit 3 is explained.
As shown in
The exposing device 40 forms latent images on the first to fourth photoconductive drums 41a to 41d. The exposing device 40 irradiates exposure light corresponding to image data on the photoconductive drums 41a to 41d functioning as image bearing members for the respective colors. The first to fourth photoconductive drums 41a to 41d carry electrostatic latent images. The photoconductive drums 41a to 41d form electrostatic latent images corresponding to the intensity of the exposure light irradiated from the exposing device 40.
The first to fourth developing devices 42a to 42d develop the latent images carried by the photoconductive drums 41a to 41d with the respective colors. Specifically, the developing devices 42a to 42d supply toners of the respective colors to the latent images carried by the photoconductive drums 41a to 41d corresponding thereto to thereby develop the images. For example, the image forming unit is configured to obtain a color image according to subtractive color mixture of the three colors, cyan, magenta, and yellow. In this case, the first to fourth developing devices 42a to 42d visualize (develop) the latent images carried by the photoconductive drums 41a to 41d with any ones of the colors, yellow, magenta, cyan, and black. The first to fourth developing devices 42a to 42d store toners of any ones of the colors, yellow, magenta, cyan, and black, respectively. The toners of the colors stored in the respective first to fourth developing devices 42a to 42d (order for developing of images of the respective colors) are determined according to an image forming process or characteristics of the toners.
The transfer belt 43 functions as an intermediate transfer member. Toner images of the colors formed on the photoconductive drums 41a to 41d are transferred onto the transfer belt 43 functioning as the intermediate transfer member in order. The photoconductive drums 41a to 41d transfer, in an intermediate transfer position, the toner images on drum surfaces thereof onto the transfer belt 43 with intermediate transfer voltage. The transfer belt 43 carries a color toner image formed by superimposing the images of the four colors (yellow, magenta, cyan, and black) transferred by the photoconductive drums 41a to 41d. The transfer device 45 transfers the toner image formed on the transfer belt 43 onto a sheet serving as an image forming medium.
The sheet feeding unit 30 feeds the sheet, on which the toner image is transferred, from the transfer belt 43 functioning as the intermediate transfer member to the transfer device 45. The sheet feeding unit 30 has a configuration for feeding the sheet to a position for transfer of the toner image by the transfer device 45 at appropriate timing. In the configuration example shown in
The plural cassettes 31 store sheets serving as image forming media, respectively. The cassettes 31 store sheets of arbitrary sizes. Each of the pickup rollers 33 takes out the sheets from the corresponding cassette 31 one by one. Each of the separating mechanisms 35 prevents the pickup roller 33 from taking out two or more sheets from the cassette at a time (separates the sheets one by one). The conveying rollers 37 convey the sheet separated by the separating mechanism 35 to the aligning rollers 39. The aligning rollers 39 send the sheet to a transfer position, where the transfer device 45 and the transfer belt 43 are set in contact with each other, at the timing when the transfer device 45 transfers the toner image from the transfer belt 43.
The fixing device 46 fixes the toner image on the sheet. For example, the fixing device 46 fixes the toner image on the sheet by heating the sheet in a pressed state. The fixing device 46 applies fixing processing to the sheet on which the toner image is transferred by the transfer device 45 and conveys the sheet subjected to the fixing processing to the stock unit 48. The stock unit 48 is a paper discharge unit to which a sheet subjected to image forming processing (having an image printed thereon) is discharged. The belt cleaner 47 cleans the transfer belt 43. The belt cleaner 47 removes a waste toner remaining on a transfer surface, onto which the toner image on the transfer belt 43 is transferred, from the transfer belt 43.
A configuration of a control system of the digital multi function peripheral 1 is explained.
As shown in
The main control unit 50 controls the entire digital multi function peripheral 1. Specifically, the main control unit 50 receives an operation instruction from the user in the operation unit 51 and controls the image reading unit 2, the image forming unit 3, and the external interface 52.
As explained above, the image reading unit 2 and the image forming unit 3 include the configurations for treating a color image. For example, when color copy processing is performed, the main control unit 50 converts a color image of an original document read by the image reading unit 2 into color image data for print and subjects the color image data to print processing with the image forming unit 3. As the image forming unit 3, a printer of an arbitrary image forming type can be applied. For example, the image forming unit 3 is not limited to the printer of the electrophotographic type explained above and may be a printer of an ink jet type or a printer of a thermal transfer type.
The operation unit 51 receives the input of an operation instruction from the user and displays guidance for the user. The operation unit 51 includes a display device and operation keys. For example, the operation unit 51 includes a liquid crystal display device incorporating a touch panel and hard keys such as a ten key.
The external interface 52 is an interface for performing communication with an external apparatus. The external interface 52 is, for example, a facsimile communication unit (a facsimile unit) or a network interface.
A configuration in the main control unit 50 is explained.
As shown in
The CPU 53 manages the control of the entire digital multi function peripheral 1. The CPU 53 realizes various functions by executing, for example, a program stored in a not-shown program memory. The main memory 54 is a memory in which work data and the like are stored. The CPU 53 realizes various kinds of processing by executing various programs using the main memory 54. For example, the CPU 53 realizes copy control by controlling the scanner 2 and the printer 3 according to a program for copy control.
The HDD (hard disk drive) 55 is a nonvolatile large-capacity memory. For example, the HDD 55 stores image data. The HDD 55 also stores set values (default set values) in the various kinds of processing. For example, a quantization table explained later is stored in the HDD 55. The programs executed by the CPU 53 may be stored in the HDD 55.
The input-image processing unit 56 processes an input image. The input-image processing unit 56 processes input image data input from the scanner 2 and the like according to an operation mode of the digital multi function peripheral 1. The page memory 57 is a memory that stores image data to be processed. For example, the page memory 57 stores color image data for one page. The page memory 57 is controlled by a not-shown page memory control unit. The output-image processing unit 58 processes an output image. In the configuration example shown in
The photoelectric conversion unit 21 includes a light receiving unit 21a for receiving light. The photoelectric conversion unit 21 includes the four line sensors, i.e., the red line sensor 61R, the green line sensor 61G, the blue line sensor 61B, and the monochrome line sensor 61K. In each of the line sensors, photoelectric conversion elements (photodiodes) as light receiving elements are arranged in the main scanning direction for plural pixels. The line sensors 61R, 61G, 61B, and 61K are arranged in parallel in the light receiving unit 21a of the photoelectric conversion unit 21. The line sensors 61R, 61G, 61B, and 61K are arranged side by side in parallel such that the line sensors are arranged at specified intervals in the sub-scanning direction.
The red line sensor 61R converts red light into an electric signal. The red line sensor 61R is a line CCD sensor having sensitivity to light in a red wavelength range. The red line sensor 61R is a line CCD sensor in which an optical filter that transmits only the light in the red wavelength range is arranged.
The green line sensor 61G converts green light into an electric signal. The green line sensor 61G is a line CCD sensor having sensitivity to light in a green wavelength range. The green line sensor 61G is a line CCD sensor in which an optical filter that transmits only the light in the green wavelength range is arranged.
The blue line sensor 61B converts blue light into an electric signal. The blue line sensor 61B is a line CCD sensor having sensitivity to light in a blue wavelength range. The blue line sensor 61B is a line CCD sensor in which an optical filter that transmits only the light in the blue wavelength range is arranged.
The monochrome line sensor 61K converts light of all the colors into an electric signal. The monochrome line sensor 61K is a line CCD sensor having sensitivity to light in a wide wavelength range including the wavelength ranges of the colors. The monochrome line sensor 61K is a line CCD sensor in which no optical filter is arranged or a line CCD sensor in which a transparent filter is arranged.
Pixel pitches and the numbers of pixels of the line sensors are explained.
The red line sensor 61R, the green line sensor 61G, and the blue line sensor 61B as the three line sensors for colors have the same pixel pitch and the same number of light receiving elements (photodiodes), i.e., the same number of pixels. For example, in the red line sensor 61R, the green line sensor 61G, and the blue line sensor 61B, photodiodes are arranged as light receiving elements at a pitch of 9.4 μm. In each of the red line sensor 61R, the green line sensor 61G, and the blue line sensor 61B, light receiving elements for 3750 pixels are arranged in an effective pixel area.
The monochrome line sensor 61K is different from the red line sensor 61R, the green line sensor 61G, and the blue line sensor 61B in a pixel pitch and the number of pixels. For example, in the monochrome line sensor 61K, photodiodes are arranged as light receiving elements at a pitch of 4.7 μm. In the monochrome line sensor 61K, light receiving elements for 7500 pixels are arranged in an effective pixel area. In this example, the pitch (a pixel pitch) of the light receiving elements in the monochrome line sensor 61K is half as large as the pitch (a pixel pitch) of the light receiving elements in the red line sensor 61R, the green line sensor 61G, and the blue line sensor 61B. The number of pixels in the effective pixel area of the monochrome line sensor 61K is twice as large as the number of pixels in the effective pixel areas of the color line sensors 61R, 61G, and 61B.
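As a quick consistency check, the halved pixel pitch and doubled pixel count imply that the monochrome line sensor covers the same effective reading width as each color line sensor, i.e., twice the resolution over an identical range. A minimal sketch using the example figures from the text:

```python
# Example figures from the text: color sensors at a 9.4 um pitch with
# 3750 pixels, monochrome sensor at a 4.7 um pitch with 7500 pixels.
color_pitch_um, color_pixels = 9.4, 3750
mono_pitch_um, mono_pixels = 4.7, 7500

color_width_um = color_pitch_um * color_pixels
mono_width_um = mono_pitch_um * mono_pixels

# Half the pitch and twice the pixels give the same effective width.
print(color_width_um, mono_width_um)
```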
Such four line sensors 61R, 61G, 61B, and 61K are arranged side by side in parallel such that the line sensors are arranged at specified intervals in the sub-scanning direction. In the line sensors 61R, 61G, 61B, and 61K, pixel data to be read shifts in the sub-scanning direction by the specified intervals. When a color image is read, in order to correct the shift in the sub-scanning direction, image data read by the line sensors 61R, 61G, 61B, and 61K are stored by a line memory or the like.
Characteristics of the line sensors 61R, 61G, 61B, and 61K are explained.
As shown in
It is assumed that light from the light source 11 shown in
In the examples shown in
An internal configuration of the photoelectric conversion unit 21 is explained.
First, a flow of a signal from the line sensors 61R, 61G, 61B, and 61K in the configuration example shown in
As shown in
For example, the light receiving elements (the photodiodes) in the line sensors 61R, 61G, and 61B supply the generated charges corresponding to the pixels to the analog shift registers 63R, 63G, and 63B via the shift gates 62R, 62G, and 62B in response to a shift signal (SH-RGB). The analog shift registers 63R, 63G, and 63B serially output, in synchronization with transfer clocks CLK1 and CLK2, pieces of pixel information (OS-R, OS-G, and OS-B) as charges corresponding to the pixels supplied from the line sensors 61R, 61G, and 61B. The pieces of pixel information (OS-R, OS-G, and OS-B) output by the analog shift registers 63R, 63G, and 63B in synchronization with the transfer clocks CLK1 and CLK2 are signals indicating values of red (R), green (G), and blue (B) in the pixels, respectively.
The number of light receiving elements (e.g., 7500) of the monochrome line sensor 61K is twice as large as the number of light receiving elements (e.g., 3750) of the line sensors 61R, 61G, and 61B. One monochrome line sensor 61K is connected to the two shift gates 62KO and 62KE and the two analog shift registers 63KO and 63KE. The shift gate 62KO is connected so as to correspond to the odd-numbered pixels (light receiving elements) in the line sensor 61K. The shift gate 62KE is connected so as to correspond to the even-numbered pixels (light receiving elements) in the line sensor 61K.
The odd-numbered light receiving elements and the even-numbered light receiving elements in the line sensor 61K supply the generated charges corresponding to the pixels to the analog shift registers 63KO and 63KE via the shift gates 62KO and 62KE in response to a shift signal (SH-K). The analog shift registers 63KO and 63KE serially output, in synchronization with the transfer clocks CLK1 and CLK2, pixel information (OS-KO) as the charges corresponding to the odd-numbered pixels in the line sensor 61K and pixel information (OS-KE) as the charges corresponding to the even-numbered pixels. The pieces of pixel information (OS-KO and OS-KE) output by the analog shift registers 63KO and 63KE in synchronization with the transfer clocks CLK1 and CLK2 are respectively signals indicating a value of luminance in the odd-numbered pixels and a value of luminance in the even-numbered pixels.
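Because the odd and even pixels of the line sensor 61K leave the chip through separate shift registers, downstream processing has to interleave the two streams back into a single scan line in document order. A minimal sketch of that rearrangement (the function name is ours, not from the specification):

```python
# OS-KO carries pixels 1, 3, 5, ... and OS-KE carries pixels 2, 4, 6, ...
# of one scan line; merge them back into document order.
def interleave_odd_even(os_ko, os_ke):
    line = []
    for odd, even in zip(os_ko, os_ke):
        line.append(odd)
        line.append(even)
    return line

# Example: a 6-pixel line split into odd and even phases.
merged = interleave_odd_even([10, 30, 50], [20, 40, 60])
print(merged)  # [10, 20, 30, 40, 50, 60]
```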
The transfer clocks CLK1 and CLK2 are represented by one line in the configuration example shown in
Output timing of a signal from the line sensors 61R, 61G, and 61B and output timing of a signal from the line sensor 61K are explained.
As shown in
In the example shown in
The transfer clocks CLK1 and CLK2 are common to the line sensors 61R, 61G, and 61B and the line sensor 61K. Therefore, OS-R, OS-G, and OS-B output in synchronization with the transfer clocks CLK1 and CLK2 after both the SH-K signal and the SH-RGB signal are output are valid signals. However, OS-R, OS-G, and OS-B output in synchronization with the transfer clocks CLK1 and CLK2 when only the SH-K signal is output and the SH-RGB signal is not output are invalid signals.
Processing of signals output from the four-line CCD sensor functioning as the photoelectric conversion unit 21 is explained.
In the configuration example shown in
As shown in
The A/D conversion circuit 71 in the scanner-image processing unit 70 is input with the signals in the five systems. The A/D conversion circuit 71 converts the input signals in the five systems into digital data, respectively. The A/D conversion circuit 71 outputs the converted digital data to the shading correction circuit 72. The shading correction circuit 72 corrects signals from the A/D conversion circuit 71 according to a correction value corresponding to a reading result of a not-shown shading correction plate (a white reference plate). The shading correction circuit 72 outputs the signals subjected to shading correction to the inter-line correction circuit 73.
The inter-line correction circuit 73 corrects phase shift in the sub-scanning direction in the signals. An image read by a four-line CCD sensor shifts in the sub-scanning direction. Therefore, the inter-line correction circuit 73 corrects the shift in the sub-scanning direction. For example, the inter-line correction circuit 73 accumulates image data (digital data) read earlier in a line buffer and outputs the image data to be timed to coincide with image data read later. The inter-line correction circuit 73 outputs signals subjected to inter-line correction to the image-quality improving circuit 74.
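The delay-and-align behavior described above can be sketched with a simple line buffer: a sensor that reads a given document position earlier than another by a fixed number of lines is delayed by that many lines so that both outputs describe the same position. The class and the gap value here are illustrative assumptions, not the circuit's actual design:

```python
from collections import deque

class LineDelay:
    """Delay a stream of scan lines by a fixed number of lines,
    mimicking the line buffer used for inter-line correction."""
    def __init__(self, gap):
        # Pre-fill with empty slots so output is aligned from the start.
        self.buf = deque([None] * gap)

    def push(self, line):
        self.buf.append(line)
        return self.buf.popleft()  # the line read `gap` lines earlier

# Suppose one sensor leads another by 2 lines in the sub-scanning direction.
delay = LineDelay(gap=2)
aligned = [delay.push(line) for line in ["L1", "L2", "L3", "L4"]]
print(aligned)  # [None, None, 'L1', 'L2']
```

In the real circuit the gap corresponds to the fixed sub-scanning interval between the line sensors, so the leading sensor's data emerges exactly when the trailing sensor reads the same document line.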
The image-quality improving circuit 74 outputs three color signals set to high resolution on the basis of the five signals from the inter-line correction circuit 73. As explained above, in image data read by the photoelectric conversion unit 21, a monochrome (luminance) image signal has resolution higher than that of color image signals. It is assumed that color image data has resolution of 300 dpi (R300, G300, and B300) and monochrome (luminance) image data has resolution of 600 dpi (K600-O and K600-E) twice as high as that of the color image data. In this case, the image-quality improving circuit 74 generates 600 dpi color image data (R600, G600, and B600) on the basis of the 300 dpi color image data and the 600 dpi monochrome image data. The image-quality improving circuit 74 also reduces noise and corrects blur.
Signal processing (resolution increasing processing) in the image-quality improving circuit 74 is explained in detail.
In the following explanation, digital data corresponding to the signal OS-R indicating a red pixel value is referred to as R300; digital data corresponding to the signal OS-G indicating a green pixel value is referred to as G300; digital data corresponding to the signal OS-B indicating a blue pixel value is referred to as B300; digital data corresponding to the signal OS-KO indicating the luminance of odd-numbered pixels is referred to as K600-O; and digital data corresponding to the signal OS-KE indicating the luminance of even-numbered pixels is referred to as K600-E.
First, a procedure for increasing the resolution of color image signals read by the line sensors 61R, 61G, and 61B to resolution equivalent to that of the line sensor 61K is explained.
In the following explanation, the left to right direction on the paper surface is the main scanning direction as an arrangement direction of light receiving elements (pixels) in a line sensor and the up to down direction on the paper surface is the sub-scanning direction (a moving direction of a carriage or a moving direction of an original document). The luminance image data (K600-O and K600-E) as pixel data from the line sensor 61K are image data rearranged in order of odd numbers and even numbers. Specifically, in the example shown in
The resolution of the monochrome line sensor 61K is twice as large as that of the color line sensors 61R, 61G, and 61B. This means that one pixel read by the color line sensors 61R, 61G, and 61B corresponds to four (=2×2) pixels read by the monochrome line sensor 61K. For example, a range of four pixels including K600(1,1), K600(1,2), K600(2,1), and K600(2,2) shown in
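The correspondence between one low-resolution color pixel and its 2×2 block of high-resolution luminance pixels can be written down directly. In this sketch, 1-based coordinates follow the text's notation such as RGB300(1,1) and K600(2,2); the helper function itself is ours, not from the specification:

```python
# Map a 300 dpi pixel coordinate (1-based, as in RGB300(x, y)) to the
# four 600 dpi pixel coordinates it covers (as in K600(x, y)).
def k600_block(x300, y300):
    x0, y0 = 2 * x300 - 1, 2 * y300 - 1
    return [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]

# RGB300(1,1) covers K600(1,1), K600(2,1), K600(1,2), and K600(2,2).
print(k600_block(1, 1))
print(k600_block(3, 2))
```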
As an example, it is assumed that an image in which a cyan solid image and a magenta solid image are in contact with each other is read. It is assumed that a boundary between the cyan solid image and the magenta solid image is present in the center of a reading range as indicated by dotted lines in
Pixels {K600(1,1), K600(1,2), K600(1,3), K600(2,1), K600(2,2), K600(2,3), . . . , and K600(6,3)} located on the left side of the dotted line shown in
On the other hand, pixels {RGB300(1,1), RGB300(2,1), and RGB300(3,1)} located on the left side of the dotted line shown in
As explained above, the line sensor 61K reads the cyan solid image in the eighteen pixels located on the left side in
As explained above, the A/D conversion circuit 71 converts pixel signals output from the light receiving elements of the line sensors into digital data (e.g., a 256-gradation data value indicated by 8 bits). As a pixel signal output by the light receiving elements is larger, the digital data of the pixel has a larger value (e.g., a value closer to 255 in the case of 256 gradations). The shading correction circuit 72 sets a value of a pixel whiter than a white reference (a brightest pixel) to a large value (e.g., 255) and sets a value of a pixel blacker than a black reference (a darkest pixel) to a small value (e.g., 0).
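A common formulation of this white/black normalization maps each raw A/D value against black and white reference readings and clamps the result to the 8-bit range. This is a hedged sketch of the usual technique; the specification does not give the actual transfer function of the shading correction circuit 72, and the reference values below are hypothetical:

```python
# Shading correction sketch: scale a raw A/D value so that the black
# reference maps to 0 and the white reference maps to 255, clamping
# values outside the reference range. A common formulation, assumed
# here rather than taken from the specification.
def shading_correct(raw, black_ref, white_ref):
    if white_ref <= black_ref:
        raise ValueError("white reference must exceed black reference")
    value = round(255 * (raw - black_ref) / (white_ref - black_ref))
    return max(0, min(255, value))

print(shading_correct(120, black_ref=20, white_ref=220))  # mid-range pixel
print(shading_correct(240, black_ref=20, white_ref=220))  # whiter than white
print(shading_correct(10, black_ref=20, white_ref=220))   # blacker than black
```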
In the following explanation, it is explained what kinds of values the respective line sensors output when the A/D conversion circuit 71 and the shading correction circuit 72 convert signals of pixels into 8-bit digital data.
When the cyan solid image is read, for example, the line sensor 61R, the line sensor 61G, and the line sensor 61B output data values “18”, “78”, and “157”, respectively. This means that, in reflected light from the cyan solid image, red components are small and blue components are large.
When the magenta solid image is read, for example, the line sensor 61R, the line sensor 61G, and the line sensor 61B output data values “150”, “22”, and “49”, respectively. This means that, in reflected light from the magenta solid image, red components are large and green components are small.
Pixels including both the cyan solid image and the magenta solid image have an output value corresponding to a ratio of the cyan solid image and the magenta solid image. In the example shown in
Specifically, an output value {R300(1,2), R300(2,2), and R300(3,2)} of the line sensor 61R is 84 (=(18+150)/2). An output value {G300(1,2), G300(2,2), and G300(3,2)} of the line sensor 61G is 50 (=(78+22)/2). An output value {B300(1,2), B300(2,2), and B300(3,2)} of the line sensor 61B is 103 (=(157+49)/2).
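The arithmetic above can be checked with a short sketch (illustrative values only; the dictionary names are not from the specification):

```python
# Sketch (not from the specification): verify the output values of a
# 300 dpi color pixel that straddles the cyan/magenta boundary.  Each
# boundary pixel sees the two solids in equal halves, so its value is
# the mean of the two solid-image readings.

cyan = {"R": 18, "G": 78, "B": 157}     # example solid-cyan readings
magenta = {"R": 150, "G": 22, "B": 49}  # example solid-magenta readings

boundary = {c: (cyan[c] + magenta[c]) // 2 for c in ("R", "G", "B")}
print(boundary)  # {'R': 84, 'G': 50, 'B': 103}
```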
Among the pixels read by the line sensor 61K, as shown in
In
The line sensors 61R, 61G, and 61B have a detection range for two pixels of the line sensor 61K in the main scanning direction. Therefore, “3” and “4” on the abscissa of the graph shown in
In the graph shown in
Therefore, “5” and “6” on the abscissa of the graph shown in
If the signal at the boundary explained above is as sharp as the signal of the line sensor 61K, the image is high in quality. In order to realize such processing, the image-quality improving circuit 74 processes image data using a correlation between an output value (luminance data: monochrome image data) of the line sensor 61K and output values (color data: color image data) of the line sensors 61R, 61G, and 61B.
A relation between luminance data and color data is explained.
In general, luminance data (K data) can be calculated from color data (e.g., data of R, G, and B). On the other hand, the color data cannot be calculated from the luminance data. In other words, even if brightness (luminance data) of pixels in an image is set, color data (R data, G data, and B data) of the pixels cannot be determined. However, when a range of pixels is limited to a “certain range”, there is a specific relation between the color data and the luminance data. In such a range in which the specific relation holds, the color data can be calculated from the luminance data. The specific relation in the “certain range” is a correlation between the luminance data and the color data. If the correlation is referred to, it is possible to convert, using luminance data having high resolution (second resolution), color data having low resolution (first resolution) into color data having resolution equivalent to that of the luminance data. The image-quality improving circuit 74 improves the resolution of color image data on the basis of the correlation explained above.
A procedure of image-quality improving processing is explained below.
In the following explanation, image data used in the image-quality improving processing is color data in the 3×3 pixel matrix shown in
First, the image-quality improving circuit 74 calculates a correlation between color data (R data, G data, and B data) and luminance data (K data). In order to calculate the correlation, the image-quality improving circuit 74 converts the resolution of the luminance data into resolution same as that of the color data. When the luminance data has resolution of 600 dpi and the color data has resolution of 300 dpi, the image-quality improving circuit 74 converts the resolution of the luminance data into 300 dpi. The image-quality improving circuit 74 converts luminance data having high resolution into luminance data having resolution same as that of the color data by the following procedure, for example.
The image-quality improving circuit 74 associates pixels read by the line sensor 61K with pixels read by the line sensors 61R, 61G, and 61B. For example, the image-quality improving circuit 74 associates the pixels read by the line sensor 61K shown in
In the example explained above, the value of the luminance data of the cyan solid image is “88” and the value of the luminance data of the magenta solid image is “70”. The value (the average) of the luminance data of the 2×2 pixel matrix including the two pixels of the cyan solid image and the two pixels of the magenta solid image is “79” (=(88+70+88+70)/4). Therefore, luminance data equivalent to 300 dpi including the four pixels including the boundary of cyan and magenta has a value “79”.
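The 2×2 averaging described above can be sketched as follows (a hypothetical helper, not the circuit itself; the function name is illustrative):

```python
# Hypothetical sketch of the resolution-converting step: average each
# 2x2 block of 600 dpi luminance pixels to obtain one 300 dpi pixel.
def k600_to_k300(k600):
    """k600: 2D list of 600 dpi luminance values (even height/width)."""
    h, w = len(k600), len(k600[0])
    return [
        [
            (k600[y][x] + k600[y][x + 1]
             + k600[y + 1][x] + k600[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A block holding two cyan pixels (88) and two magenta pixels (70)
# averages to 79, matching the value in the text.
k600 = [[88, 70], [88, 70]]
print(k600_to_k300(k600))  # [[79.0]]
```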
As shown in
A correlation between the luminance data (the K data) and the color data (the R data, the G data, and the B data) is explained.
First, a correlation between the luminance data (the K data) and the red data (the R data) is explained.
As shown in
R−150=(150−18)/(70−88)*(K−70) (K-R)
R≈−7.33*K+663.3
The straight line KR shown in
A correlation between the K data (the luminance data) and the G data (the green data) is explained.
As in the case of the R data, the luminance data of the cyan solid image is “88” and the G data thereof is “78”, the luminance data of the magenta solid image is “70” and the G data thereof is “22”, and the luminance data obtained by reading the boundary of cyan and magenta is “79” and the G data thereof is “50”. Therefore, when the luminance data and the green data are represented as (K data, G data), three points (70, 22), (79, 50), and (88, 78) are arranged on a straight line KG. As shown in
G−22=(22−78)/(70−88)*(K−70) (K-G)
G≈3.11*K−195.8
As in the case of the R data, when the 600 dpi luminance data is substituted in “K” of Formula (K-G), 600 dpi G data is calculated. Therefore, concerning pixels in which 300 dpi G data is “50”, if the 600 dpi luminance data (K600) is “70”, G data equivalent to 600 dpi is “22” and, if the 600 dpi luminance data (K600) is “88”, the G data equivalent to 600 dpi is “78”.
A correlation between the K data (the luminance data) and the B data (the blue data) is explained.
As in the case of the R data or the G data, the luminance data of the cyan solid image is “88” and the B data thereof is “157”, the luminance data of the magenta solid image is “70” and the B data thereof is “49”, and the luminance data of the boundary where the cyan solid image and the magenta solid image are mixed is “79” and the B data thereof is “103”. When the luminance data and the blue data are represented as (K data, B data), as shown in
B−49=(49−157)/(70−88)*(K−70) (K-B)
B=6*K−371
As in the case of the R data or the G data, when the 600 dpi luminance data (K600) is substituted in “K” of Formula (K-B), 600 dpi B data is calculated. Therefore, concerning pixels in which the 300 dpi B data is “103”, if the 600 dpi luminance data is “70”, B data equivalent to 600 dpi is “49” and, if the 600 dpi luminance data is “88”, B data equivalent to 600 dpi is “157”.
According to the calculation example based on the correlation explained above, as shown in
Specifically, in the processing result shown in
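Under the example values given above, the three straight lines and the substitution of the 600 dpi luminance values can be reproduced as follows (a minimal sketch; `line_through` is an illustrative helper, not part of the apparatus):

```python
# Sketch of the worked example: fit the straight line through the two
# (luminance, color) points for each channel, then substitute the
# 600 dpi luminance values.  All numbers are the example readings from
# the text, not measured data.  Fractions keep the slopes exact.
from fractions import Fraction

def line_through(p1, p2):
    (k1, v1), (k2, v2) = p1, p2
    slope = Fraction(v2 - v1, k2 - k1)
    return lambda k: slope * (k - k1) + v1

# (K, channel) pairs for solid magenta (K=70) and solid cyan (K=88)
r = line_through((70, 150), (88, 18))
g = line_through((70, 22), (88, 78))
b = line_through((70, 49), (88, 157))

# The boundary pixel (K300 = 79) splits back into the two solid values:
print([int(f(70)) for f in (r, g, b)])  # [150, 22, 49]  (magenta side)
print([int(f(88)) for f in (r, g, b)])  # [18, 78, 157]  (cyan side)
print([int(f(79)) for f in (r, g, b)])  # [84, 50, 103]  (midpoint check)
```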
Image-quality improving processing for general image data is explained.
In the image-quality improving processing explained above, the resolution of the color data is increased to be higher than that of the original color data by using the luminance data (the monochrome data) having high resolution. The above is an explanation of the basic principle of the image-quality improving processing. In particular, the explanation holds when a correlation between luminance data and color data is arranged generally on one straight line. However, in actual image data, a correlation between luminance data and color data may not be arranged on a straight line.
Generalized processing of the image-quality improving processing is explained below.
In a configuration example shown in
The image-quality improving circuit 74 is input with 300 dpi R (red) data (R300), 300 dpi G (green) data (G300), 300 dpi B (blue) data (B300), luminance data of even-number-th pixels among 600 dpi pixels (K600-E), and luminance data of odd-number-th pixels among the 600 dpi pixels (K600-O).
The serializing circuit 81 converts the even-number-th luminance data (K600-E) and the odd-number-th luminance data (K600-O) into luminance data (K600), which is serial data. The serializing circuit 81 outputs the serialized luminance data (K600) to the resolution converting circuit 82 and the data converting circuit 84.
The resolution converting circuit 82 converts the 600 dpi luminance data (K600) into 300 dpi luminance data (K300). The resolution converting circuit 82 converts the resolution of 600 dpi into the resolution of 300 dpi. The resolution converting circuit 82 associates pixels of the 600 dpi luminance data (K600) and pixels of the 300 dpi color data. As explained above, the pixels of the 300 dpi color data correspond to the 2×2 pixel matrix including the pixels of the 600 dpi luminance data (K600). The resolution converting circuit 82 calculates an average (luminance data equivalent to 300 dpi (K300)) of the luminance data of 2×2 pixels forming the matrix corresponding to the pixels of the color data.
The correlation calculating circuit 83 is input with R300, G300, B300, and K300. The correlation calculating circuit 83 calculates a regression line of R300 and K300, a regression line of G300 and K300, and a regression line of B300 and K300. The regression lines are represented by the following formulas:
R300=Ar×K300+Br (KR-2)
G300=Ag×K300+Bg (KG-2)
B300=Ab×K300+Bb (KB-2)
Ar, Ag, and Ab represent slopes (constants) of the regression lines and Br, Bg, and Bb represent intercepts (constants) on the ordinate.
Therefore, the correlation calculating circuit 83 calculates the constants (Ar, Ag, Ab, Br, Bg, and Bb) as correlations between the luminance data and the color data. To simplify the explanation, a method of calculating the constants Ar and Br is explained on the basis of the luminance data (K300) and the color data (R300).
First, the correlation calculating circuit 83 sets nine pixels of 3×3 pixels as an area of attention. The correlation calculating circuit 83 calculates a correlation coefficient in the area of attention including the nine pixels. Luminance data and color data for the pixels in the area of attention of 3×3 pixels are represented as Kij and Rij. “ij” indicates variables 1 to 3. For example, R300(2,2) is represented as R22. When an average of K data (K300) of the area of attention is represented as Kave and an average of R data of the area of attention is represented as Rave, the correlation calculating circuit 83 calculates a correlation coefficient (Cr) of the K data and the R data according to the following formula:
Cr=(Σ(Kij−Kave)×(Rij−Rave)/9)/((standard deviation of K)×(standard deviation of R))
According to this formula, the correlation coefficient (Cr) is a value obtained by dividing the average of deviation products by the standard deviation of K and the standard deviation of R. The correlation coefficient (Cr) takes values from −1 to +1. When the correlation coefficient (Cr) is positive, this indicates that the correlation between the K data and the R data is a positive correlation. When the correlation coefficient (Cr) is negative, this indicates that the correlation between the K data and the R data is a negative correlation. The correlation coefficient (Cr) indicates a stronger correlation as its absolute value is closer to 1.
The correlation calculating circuit 83 calculates the slope (Ar) of the regression line of the luminance data (K) and the color data (R) according to the following formula. In the following formula, the ordinate represents R and the abscissa represents K:
Ar=Cr×((standard deviation of R)/(standard deviation of K))
The correlation calculating circuit 83 calculates an intercept (Br) according to the following formula:
Intercept (Br) of R=Rave−(Ar×Kave)
The correlation calculating circuit 83 calculates the standard deviation of R and the standard deviation of K according to the following formulas, respectively:
standard deviation of R=(Σ(Rij−Rave)^2/9)^(1/2)
standard deviation of K=(Σ(Kij−Kave)^2/9)^(1/2)
Concerning the G data and the B data, the correlation calculating circuit 83 calculates slopes Ag and Ab and intercepts Bg and Bb in regression lines according to a method same as the method explained above. The correlation calculating circuit 83 outputs the calculated constants (Ar, Ag, Ab, Br, Bg, and Bb) to the data converting circuit 84.
The data converting circuit 84 calculates, using luminance data having high resolution, color data having resolution equivalent to that of the luminance data. For example, the data converting circuit 84 calculates 600 dpi color data (R600, G600, and B600) using the 600 dpi luminance data (K600). The data converting circuit 84 calculates R600, G600, and B600 using K600 according to the following formulas including the constants calculated by the correlation calculating circuit 83, respectively:
R600=Ar×K600+Br
G600=Ag×K600+Bg
B600=Ab×K600+Bb
Specifically, the data converting circuit 84 calculates 600 dpi color data (R600, G600, and B600) by substituting the 600 dpi luminance data (K600) in the above formulas, respectively.
The luminance data (K600) substituted in the above formulas is data for four pixels of 600 dpi 2×2 pixels equivalent to a pixel in the center of 300 dpi 3×3 pixels. For example, the luminance data K600 is equivalent to K600(3,3), K600(3,4), K600(4,3), and K600(4,4) shown in
As explained above, the image-quality improving circuit 74 converts, using the data of thirty-six pixels of the 600 dpi luminance data, one 300 dpi pixel located in the center of the nine pixels of the 300 dpi color data into the color data of four 600 dpi pixels. The image-quality improving circuit 74 carries out the processing for all the pixels. As a result, the image-quality improving circuit 74 converts the 300 dpi color data into the 600 dpi color data.
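The generalized processing can be sketched roughly as below (an assumed software rendering of the correlation calculating circuit 83 and the data converting circuit 84, not the circuit implementation; the zero-deviation fallback is an added assumption for the degenerate case):

```python
# Minimal sketch of the generalized processing: compute the regression
# line over the 3x3 area of attention, then convert the 2x2 luminance
# block of the center pixel with C600 = A*K600 + B.
import math

def regression(k300, c300):
    """k300, c300: 9 values each (the 3x3 area of attention)."""
    n = len(k300)
    kave = sum(k300) / n
    cave = sum(c300) / n
    cov = sum((k - kave) * (c - cave) for k, c in zip(k300, c300)) / n
    std_k = math.sqrt(sum((k - kave) ** 2 for k in k300) / n)
    std_c = math.sqrt(sum((c - cave) ** 2 for c in c300) / n)
    if std_k == 0 or std_c == 0:
        return 0.0, cave          # assumed fallback: flat area, keep average
    corr = cov / (std_k * std_c)  # correlation coefficient, -1..+1
    slope = corr * (std_c / std_k)
    return slope, cave - slope * kave  # (A, B)

def convert_center(k600_2x2, slope, intercept):
    """Apply C600 = A*K600 + B to the 2x2 block of the center pixel."""
    return [[slope * k + intercept for k in row] for row in k600_2x2]

# Tiny check: perfectly correlated data reproduces the line C = 2K.
a, b = regression([1, 2, 3, 4, 5, 6, 7, 8, 9],
                  [2, 4, 6, 8, 10, 12, 14, 16, 18])
print(a, b)  # slope close to 2, intercept close to 0
```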
A correlation between the 600 dpi color data obtained as a result of the image-quality improving processing and the 600 dpi monochrome data is equivalent to the correlation between the 300 dpi monochrome data and the 300 dpi color data used for calculating the 600 dpi color data. Specifically, in a processing target range (in this processing example, 9×9 pixels at resolution of 600 dpi and 3×3 pixels at resolution of 300 dpi), when 300 dpi data has positive correlation, 600 dpi data also has positive correlation and, when the 300 dpi data has negative correlation, the 600 dpi data also has negative correlation.
In the image-quality improving processing according to this embodiment, it is possible to increase the resolution of color data having low resolution using luminance data having high resolution without image quality deterioration such as a fall in chroma or color mixture.
The area of attention (the certain range) for calculating a correlation between the luminance data and the color data is not limited to the area of 3×3 pixels and can be selected as appropriate. For example, as an area for calculating a correlation between the luminance data and the color data, an area of 5×5 pixels, 4×4 pixels, or the like may be applied. Resolutions of the color data and the luminance data to which the image-quality improving processing is applied are not limited to 300 dpi and 600 dpi, respectively. For example, the color data may have resolution of 200 dpi and the luminance data may have resolution of 400 dpi or the color data may have resolution of 600 dpi and the luminance data may have resolution of 1200 dpi.
According to the image-quality improving processing explained above, it is possible to obtain color image data having high resolution without deteriorating an S/N ratio of a color signal. If the image-quality improving processing is used, even when a monochrome image (luminance data) having high resolution is read by a luminance sensor having high sensitivity and a color image having resolution lower than that of the luminance sensor is read by a color sensor having low sensitivity, it is possible to increase the resolution of the color image to resolution equivalent to the resolution of the luminance sensor. As a result, it is possible to read the color image having high resolution at high speed. Even if an illumination light source used for reading the color image having high resolution has low power, it is easy to secure reading speed, resolution, and an S/N ratio. The amount of data output from a CCD sensor can also be reduced.
In the image-quality improving processing, color data is calculated with reference to K data using, for example, a correlation between plural K data and plural color data in a 300 dpi 3×3 pixel matrix. An effect that high-frequency noise is reduced can be obtained by calculating, using the data of the nine pixels in this way, color data of one pixel (four pixels at 600 dpi) in the center of the pixels. Usually, some noise (white noise) is carried on output of the CCD sensor. It is not easy to reduce the noise. In the image-quality improving processing, on the basis of a correlation between the nine pixels of the K data and the nine pixels of the color data, an image quality of data of one pixel located in the center of the pixels is improved.
Therefore, in the image-quality improving processing, even if unexpected noise is superimposed on one read pixel, it is possible to reduce the influence of the noise. According to an experiment, an effect of reducing high-frequency noise in reading an original document having uniform density to about one half to one third is obtained. Such an effect is useful in improving a compression ratio in compressing a scan image. In other words, the image-quality improving processing is not only useful for increasing resolution but also useful as noise reduction processing.
The image-quality improving processing reduces color drift caused by, for example, a mechanism for reading an image. For example, in the mechanism for reading an image, it is likely that color drift is caused by vibration, jitter, and chromatic aberration of a lens. In an image reading apparatus in which R, G, and B color line sensors independently read an image and independently output data of the image, in order to prevent color drift using a physical structure, it is necessary to improve the accuracy of a mechanism system or adopt a lens without aberration. In the image-quality improving processing, all color data are calculated with reference to luminance data. Therefore, in the image-quality improving processing, phase shift of the color data due to jitter, vibration, and chromatic aberration is also corrected. This is also an effect obtained by calculating data of pixels in an area of attention from a correlation among plural image data.
As explained above, in the image reading apparatus, when it is unnecessary to increase the resolution of the color data or even when the resolution of the luminance sensor and the resolution of the color sensor are the same, it is possible to correct a read image to a high-quality image without phase shift by applying the image-quality improving processing to the image. Such correction processing can be realized by a circuit configuration shown in
Second image-quality improving processing is explained.
The second image-quality improving processing explained below is another example of the image-quality improving processing by the image-quality improving circuit 74.
An image of an original document to be read may include an image of a frequency component close to reading resolution (300 dpi) of color image data. When reading resolution (a sampling frequency) and a frequency component included in an image to be read are close to each other, interference fringes called moiré may occur in image data obtained as a reading result. For example, when a monochrome pattern image in a certain period (e.g., 150 patterns per inch) (hereinafter also referred to as an image having the number of lines near 150) is read by a 300 dpi color sensor, it is likely that an image of a striped pattern (moiré) occurs in 300 dpi color image data.
The image of the striped pattern (moiré) is caused when an area in which a pixel value substantially changes (fluctuates) and an area in which a pixel value hardly changes (is uniform) periodically appear according to a positional relation between light receiving elements in a color sensor and a monochrome pattern to be read. However, when the image having the number of lines near 150 is read by a 600 dpi monochrome sensor, moiré does not occur in the 600 dpi monochrome image data. When the 600 dpi monochrome image data is converted into 300 dpi monochrome image data, moiré occurs in the 300 dpi monochrome image data as in the 300 dpi color image data.
In
As shown in
When moiré occurs, portions that show no change and are not resolved (i.e., without contrast and without response) are present in the 300 dpi image data. To form the regression line shown in
In the unstable state, the slope of the regression line substantially changes according to a slight change in image data due to an external factor such as vibration (jitter) caused by movement of an original document during reading or movement of a carriage. In image-quality improving processing performed by using the regression line calculated in the unstable state, irregularity occurs in an image. For example, in the image-quality improving processing performed by using the regression line calculated in the unstable state, it is likely that, at a period in which moiré occurs, various colors occur in an image that should be monochrome (achromatic).
In the second image-quality improving processing, in order to prevent the phenomenon explained above, it is checked whether an image in an area of attention has a frequency component that causes moiré (e.g., a frequency component having the number of lines near 150). When the image in the area of attention does not include the frequency component that causes moiré, in the second image-quality improving processing, image-quality improving processing by the circuit shown in
A second image-quality improving circuit 101 that performs the second image-quality improving processing is explained.
As shown in
The first resolution increasing circuit 111 has a configuration same as that of the image-quality improving circuit 74 shown in
The second resolution increasing circuit 112 increases the resolution of color data with processing (second resolution increasing processing) different from that of the first resolution increasing circuit 111. The second resolution increasing circuit 112 increases the resolution of image data including the frequency component that causes moiré. In other words, the resolution increasing processing by the second resolution increasing circuit 112 is processing also applicable to the image data including the frequency component that causes moiré. For example, the second resolution increasing circuit 112 increases the resolution of the color data by superimposing a high-frequency component of the monochrome data on the color data. The second resolution increasing circuit 112 is explained in detail later.
The determining circuit 113 determines whether an image to be processed has the frequency component that causes moiré (e.g., the frequency component having the number of lines near 150). Determination processing by the determining circuit 113 is explained in detail later. The determining circuit 113 outputs a determination result to the selecting circuit 114. For example, when the determining circuit 113 determines that the image to be processed is not an image having the number of lines near 150, the determining circuit 113 outputs, to the selecting circuit 114, a determination signal for selecting a processing result of the first resolution increasing circuit 111. When the determining circuit 113 determines that the image to be processed is the image having the number of lines near 150, the determining circuit 113 outputs, to the selecting circuit 114, a determination signal for selecting an output signal from the second resolution increasing circuit 112.
The selecting circuit 114 selects, on the basis of the determination result of the determining circuit 113, the processing result of the first resolution increasing circuit 111 or the processing result of the second resolution increasing circuit 112. For example, when the determining circuit 113 determines that the image to be processed does not include the frequency component that causes moiré, the selecting circuit 114 selects the processing result of the first resolution increasing circuit 111. In this case, the selecting circuit 114 outputs the color data, the resolution of which is increased by the first resolution increasing circuit 111, as a processing result of the image-quality improving circuit 101. When the determining circuit 113 determines that the image to be processed includes the frequency component that causes moiré, the selecting circuit 114 selects the processing result of the second resolution increasing circuit 112. In this case, the selecting circuit 114 outputs the color data, the resolution of which is increased by the second resolution increasing circuit 112, as a processing result of the image-quality improving circuit 101.
Determination processing by the determining circuit 113 is explained.
As explained above, in the second image-quality improving processing, it is checked whether the image in the area of attention has the frequency component that causes moiré (e.g., the image having the number of lines near 150). The determining circuit 113 checks (determines), according to a method explained later, whether the image in the area of attention includes the frequency component that causes moiré.
The determining circuit 113 calculates a standard deviation (a degree of fluctuation) of luminance data (K data) as 600 dpi monochrome image data. As in the processing explained above, the determining circuit 113 calculates the standard deviation in a 6×6 pixel matrix (i.e., thirty-six pixels) in the 600 dpi luminance data (K600). The standard deviation of the 600 dpi luminance data is denoted as 600 std.
The determining circuit 113 converts the 600 dpi luminance data into 300 dpi luminance data. As a standard deviation of the 300 dpi luminance data after the conversion, the determining circuit 113 calculates a standard deviation of a 3×3 pixel matrix (i.e., nine pixels) in an area equivalent to the 6×6 pixel matrix in the 600 dpi luminance data (K600). The standard deviation of the 300 dpi luminance data is denoted as 300 std.
In general, a standard deviation is an index indicating a state of fluctuation of data. Therefore, the determining circuit 113 obtains the following information on the basis of the standard deviation (600 std) for the 600 dpi luminance data and the standard deviation (300 std) for the 300 dpi luminance data.
The determining circuit 113 determines whether the image to be processed is the image including the frequency component that causes moiré (i.e., the image having the number of lines near 150). In an example shown in
In the determining circuit 113, a determination reference value α for 300 std/600 std is set as a determination reference. The determining circuit 113 determines whether the value of 300 std/600 std is equal to or smaller than the determination reference value α (300 std/600 std≦α). The value of “300 std/600 std” is smaller as 600 std is larger relative to 300 std (as 600 std is larger or 300 std is smaller). In other words, the smaller the value of “300 std/600 std”, the more likely the image to be processed is the image including the frequency component that causes moiré (i.e., the image having the number of lines near 150). Therefore, if 300 std/600 std≦α, the determining circuit 113 determines that the image to be processed is likely to be the image having the number of lines near 150. According to an experiment, it is known that the image having the number of lines near 150 can be satisfactorily extracted by setting the determination reference value α to a value of about 0.5 to 0.7 (50% to 70%).
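The determination can be sketched as follows (illustrative code; the 6×6 window, the 2×2 averaging, and α follow the text, while the function names and the sample pattern are assumptions):

```python
# Sketch of the determining circuit's check: compare the standard
# deviation of a 6x6 600 dpi luminance block (600 std) with that of the
# corresponding 3x3 300 dpi block (300 std).  2x2 averaging cancels an
# alternation near the 600 dpi pitch, so 300 std collapses while
# 600 std stays large.
import math

def std(vals):
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

def is_moire_prone(k600_6x6, alpha=0.6):  # alpha ~0.5-0.7 per the text
    k300 = [
        (k600_6x6[y][x] + k600_6x6[y][x + 1]
         + k600_6x6[y + 1][x] + k600_6x6[y + 1][x + 1]) / 4
        for y in range(0, 6, 2) for x in range(0, 6, 2)
    ]
    flat = [v for row in k600_6x6 for v in row]
    s600, s300 = std(flat), std(k300)
    return s600 > 0 and s300 / s600 <= alpha

# Columns alternating at the 600 dpi pitch: every 2x2 average is 125,
# so 300 std is 0 and the area is flagged.
pattern = [[250, 0, 250, 0, 250, 0] for _ in range(6)]
print(is_moire_prone(pattern))  # True
```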
The second resolution increasing circuit 112 is explained.
The second resolution increasing circuit 112 increases the resolution of color data by superimposing a high-frequency component of monochrome data on the color data. The second resolution increasing circuit 112 does not perform processing for increasing resolution using a correlation between color data and luminance data. Content of processing for increasing resolution of the second resolution increasing circuit 112 is different from that of the first resolution increasing circuit 111.
As shown in
The serializing circuit 121 converts even-number-th luminance data (K600-E) and odd-number-th luminance data (K600-O) into luminance data (K600), which is serial data. The serializing circuit 121 outputs the serialized luminance data (K600) to the resolution converting circuit 122 and the superimposition-rate calculating circuit 123.
The resolution converting circuit 122 converts 600 dpi luminance data (K600) into 300 dpi luminance data (K300). The resolution converting circuit 122 converts resolution of 600 dpi into resolution of 300 dpi. The resolution converting circuit 122 associates pixels of the 600 dpi luminance data (K600) and pixels of 300 dpi color data. The pixels of the 300 dpi color data correspond to a 2×2 pixel matrix including the pixels of the 600 dpi luminance data (K600). The resolution converting circuit 122 calculates, as luminance data equivalent to 300 dpi (K300), an average of luminance data of 2×2 pixels forming the matrix corresponding to the pixels of the color data.
The superimposition-rate calculating circuit 123 is explained.
The superimposition-rate calculating circuit 123 calculates a rate for superimposing a frequency component of monochrome data on color data.
Superimposition rate calculation processing is explained with reference to examples shown in
The superimposition-rate calculating circuit 123 extracts four pixels (a 2×2 pixel matrix) in 600 dpi monochrome data corresponding to one 300 dpi pixel. For example, the superimposition-rate calculating circuit 123 extracts the 600 dpi luminance data for the four pixels forming the 2×2 pixel matrix shown in
The superimposition-rate calculating circuit 123 calculates an average K600ave for the luminance data for the four 600 dpi pixels corresponding to the one 300 dpi pixel. For example, the superimposition-rate calculating circuit 123 calculates the average K600ave according to the following formula:
K600ave=(K600(1,1)+K600(1,2)+K600(2,1)+K600(2,2))/4
After calculating the average K600ave, the superimposition-rate calculating circuit 123 calculates a rate of change Rate(*,*) for the average K600ave of pixels (*,*). In other words, the rates of change of the 600 dpi pixels indicate contrast ratios of the pixels to the area of attention (the 2×2 pixel matrix). For example, the superimposition-rate calculating circuit 123 calculates rates of change Rate(1,1), Rate(1,2), Rate(2,1), and Rate(2,2) for K600(1,1), K600(1,2), K600(2,1), and K600(2,2) according to the following formulas:
Rate(1,1)=K600(1,1)/K600ave
Rate(1,2)=K600(1,2)/K600ave
Rate(2,1)=K600(2,1)/K600ave
Rate(2,2)=K600(2,2)/K600ave
The superimposition-rate calculating circuit 123 outputs rates of change Rate(*,*) corresponding to the 600 dpi pixels K600(*,*) calculated by the procedure explained above to the data converting circuit 124.
The data converting circuit 124 multiplies the 300 dpi color data by the rates of change for the pixels equivalent to 600 dpi input from the superimposition-rate calculating circuit 123.
For example, the data converting circuit 124 calculates the R data (R600) equivalent to 600 dpi by multiplying R300 by the rates of change corresponding to the pixels equivalent to 600 dpi, as indicated by the following formulas:
R600(1,1)=R300*Rate(1,1)
R600(1,2)=R300*Rate(1,2)
R600(2,1)=R300*Rate(2,1)
R600(2,2)=R300*Rate(2,2)
According to such superimposition processing, the data converting circuit 124 converts R300 shown in the drawings into the R data (R600) equivalent to 600 dpi.
The data converting circuit 124 calculates the G data (G600) equivalent to 600 dpi by multiplying G300 by the rates of change corresponding to the pixels equivalent to 600 dpi, as indicated by the following formulas:
G600(1,1)=G300*Rate(1,1)
G600(1,2)=G300*Rate(1,2)
G600(2,1)=G300*Rate(2,1)
G600(2,2)=G300*Rate(2,2)
According to such superimposition processing, the data converting circuit 124 converts G300 shown in the drawings into the G data (G600) equivalent to 600 dpi.
The data converting circuit 124 calculates the B data (B600) equivalent to 600 dpi by multiplying B300 by the rates of change corresponding to the pixels equivalent to 600 dpi, as indicated by the following formulas:
B600(1,1)=B300*Rate(1,1)
B600(1,2)=B300*Rate(1,2)
B600(2,1)=B300*Rate(2,1)
B600(2,2)=B300*Rate(2,2)
According to such superimposition processing, the data converting circuit 124 converts B300 shown in the drawings into the B data (B600) equivalent to 600 dpi.
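The conversion of one 300 dpi pixel into four 600 dpi pixels for all three channels can be sketched as follows (Python; the function name and the dictionary return layout are illustrative assumptions, not from the source):

```python
def superimpose(rgb300, rates):
    """Multiply one 300 dpi color pixel (R300, G300, B300) by the 2x2
    rates of change to obtain R600, G600, and B600 for the four
    corresponding 600 dpi pixels, as in the data converting circuit 124."""
    return {
        # Each channel value is scaled by the same rates of change,
        # which carry the high-frequency component of the luminance data.
        ch: [[val * rates[i][j] for j in range(2)] for i in range(2)]
        for ch, val in zip(("R600", "G600", "B600"), rgb300)
    }
```

For example, with R300 = 200, G300 = 100, B300 = 50 and rates [[0.8, 1.2], [1.0, 1.0]], the R600 block becomes [[160, 240], [200, 200]], and G600 and B600 are scaled likewise.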
As explained above, the image-quality improving circuit 101 receives high-resolution monochrome data and low-resolution color data as inputs. The image-quality improving circuit 101 includes the first resolution increasing circuit 111, which performs the first resolution increasing processing for increasing the resolution of the color data on the basis of a correlation between the color data and the monochrome data, and the second resolution increasing circuit 112, which performs the second resolution increasing processing for increasing the resolution of the color data by superimposing a high-frequency component of the monochrome data on the color data. The image-quality improving circuit 101 outputs the processing result of the second resolution increasing circuit 112 when the image to be processed has a component close to a frequency component that causes moiré at the resolution of the input color data, and outputs the processing result of the first resolution increasing circuit 111 for any other image. Such an image-quality improving circuit 101 can output satisfactory high-resolution image data regardless of what kind of image the original document carries.
In the processing example explained above, the processing is closed within the four 600 dpi pixels corresponding to the one 300 dpi pixel. However, if the image-quality improving processing is performed independently for each group of four 600 dpi pixels (one 300 dpi pixel), continuity among adjacent pixels is likely to be lost over the entire image. In order to secure continuity among adjacent pixels in the entire image, it is preferable to execute the image-quality improving processing a second time after phase-shifting the image area to be processed (the area of attention) by one pixel. For example, in the second image-quality improving processing, the image area to be processed (a 2×2 pixel matrix) in the 600 dpi image data is set while being phase-shifted by one pixel. In the 600 dpi color image data that results from such re-processing, continuity among adjacent pixels is secured.
First, the image-quality improving circuit 74 or the image-quality improving circuit 101 sets the four pixels (a 2×2 pixel matrix) K600(1,1), K600(1,2), K600(2,1), and K600(2,2) as the image area to be processed (a first area of attention). In this case, the image-quality improving circuit 74 or 101 increases the resolution of the color data (R300, G300, and B300) corresponding to the first area of attention. As a result of the processing, the image-quality improving circuit 74 or 101 obtains the 600 dpi color image data R600(1,1), R600(1,2), R600(2,1), R600(2,2), G600(1,1), G600(1,2), G600(2,1), G600(2,2), B600(1,1), B600(1,2), B600(2,1), and B600(2,2).
The image-quality improving circuit 74 or 101 performs the image-quality improving processing in an entire image with the four pixels (the 2×2 pixel matrix) corresponding to 300 dpi color data set as the image area to be processed (the first area of attention) in order. The image-quality improving circuit 74 or 101 obtains 600 dpi color data for the entire image including 600 dpi color data generated for each image area to be processed (the first area of attention).
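The first pass over an entire image for one channel can be sketched as follows (Python; the function name and nested-list layout are our own, and the sketch assumes the 600 dpi image divides evenly into 2×2 areas of attention):

```python
def first_pass(k600, c300):
    """Run the first-pass resolution increase over a whole image for one
    color channel. k600: (2H x 2W) nested list of 600 dpi luminance data.
    c300: (H x W) nested list of 300 dpi color data.
    Returns a (2H x 2W) nested list of color data equivalent to 600 dpi."""
    h, w = len(c300), len(c300[0])
    c600 = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):             # each 300 dpi pixel ...
        for x in range(w):
            ys, xs = 2 * y, 2 * x  # ... maps to a 2x2 area of attention
            kave = (k600[ys][xs] + k600[ys][xs + 1]
                    + k600[ys + 1][xs] + k600[ys + 1][xs + 1]) / 4
            for dy in range(2):
                for dx in range(2):
                    rate = k600[ys + dy][xs + dx] / kave
                    c600[ys + dy][xs + dx] = c300[y][x] * rate
    return c600
```

Each area of attention is processed independently here, which is exactly why a second, phase-shifted pass is needed to restore continuity between adjacent blocks.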
After generating the 600 dpi color data in the entire image area, the image-quality improving circuit 74 or 101 performs processing for improving continuity among adjacent pixels. As the processing for improving continuity among adjacent pixels, the image-quality improving circuit 74 or 101 sets an area phase-shifted by one pixel from the first area of attention as an image area to be processed for the second time (a second area of attention). The image-quality improving circuit 74 or 101 applies the second image-quality improving processing to the image area to be processed for the second time.
For example, as shown in the drawings, the image-quality improving circuit 74 or 101 calculates color data R300′ corresponding to the second area of attention.
After calculating R300′ corresponding to the second area of attention, the image-quality improving circuit 74 or 101 increases the resolution of R300′ for the second time with luminance data for four pixels of the second area of attention {K600(2,2), K600(2,3), K600(3,2), and K600(3,3)}. Specifically, the image-quality improving circuit 74 or 101 calculates R600(2,2), R600(2,3), R600(3,2), and R600(3,3) for the second time with luminance data for four pixels (K600) in the second area of attention and R300′ corresponding to the second area of attention.
The image-quality improving circuit 74 or 101 also applies the processing for the second area of attention to the G data and the B data. According to such processing, the image-quality improving circuit 74 or 101 can impart continuity among adjacent pixels across the entire resolution-increased image data.
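The second pass for one phase-shifted area of attention can be sketched as follows (Python; the source does not reproduce the exact definition of R300′, so this sketch assumes R300′ is taken as the average of the first-pass 600 dpi values in the shifted area):

```python
def second_pass_area(c600, k600, i, j):
    """Re-process one shifted 2x2 area of attention whose top-left
    600 dpi pixel index is (i, j), for one color channel.

    Assumption (figure not reproduced in the text): the 300 dpi value
    for the shifted area, e.g. R300', is the average of the first-pass
    600 dpi color values in that area."""
    c300p = (c600[i][j] + c600[i][j + 1]
             + c600[i + 1][j] + c600[i + 1][j + 1]) / 4
    kave = (k600[i][j] + k600[i][j + 1]
            + k600[i + 1][j] + k600[i + 1][j + 1]) / 4
    for di in range(2):
        for dj in range(2):
            # Re-apply the superimposition within the shifted area.
            c600[i + di][j + dj] = c300p * k600[i + di][j + dj] / kave
    return c600
```

Scanning such areas over the whole image, each phase-shifted by one pixel from the first-pass areas of attention, imparts continuity between adjacent 2×2 blocks.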
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
This application claims the benefit of U.S. Provisional Application No. 61/073,997, filed Jun. 19, 2008.