This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2010-65224, filed on Mar. 19, 2010, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an image processing system and an image scanning apparatus which sharpen an image obtained from a line sensor.
Scanners, facsimile machines, multifunction peripherals (MFPs) and copying machines are each provided with an image scanning apparatus to read a paper document. Some image scanning apparatuses have a contact image sensor (CIS), which is an imaging device employing a contact optical system.
The contact image sensor has a lens array and a line sensor, and reads a document line by line. The CIS is designed so as to bring a document on a scanner platen into focus. Accordingly, when a gap exists between the document and the platen, a scanned image is blurred.
Japanese Patent No. 3028518 discloses a device to perform image sharpening by utilizing a filter coefficient determined by the following steps. Plural pairs of images are obtained for plural documents, respectively. One image of each pair is scanned under an ideal condition. The other image of each pair is scanned under a condition in which the document is intentionally floated. By using the pairs of images, the variation of the frequency characteristics of the images is estimated. The variation is caused by gaps between the documents and a platen. Then, a filter coefficient is obtained in order to cancel the variation of the frequency characteristics.
In one embodiment, an image processing system is provided. The image processing system is provided with an image data acquisition unit, a filter setting unit and a filtering processor. The image data acquisition unit acquires pixel data of first image data of an erect image obtained by a line sensor comprising a plurality of light receiving elements. The erect image is formed by a plurality of gradient index lenses of a lens array. The filter setting unit sets sharpening filter coefficients of a filter to be applied to the pixel data of the first image data corresponding to the respective light receiving elements, according to remainder values obtained respectively by dividing positions of the light receiving elements on the line sensor by an interval of a lens arrangement of the lens array. The filtering processor generates second image data sharpened by applying the obtained sharpening filter coefficients to the respective pixel data of the first image data.
In another embodiment, an image scanning apparatus is provided. The image scanning apparatus is provided with a distance sensor, a scanner and an image processing system. The image processing system is provided with an image data acquisition unit, a filter setting unit and a filtering processor.
The distance sensor measures the distance between an object and a reference plane. The scanner has a line sensor to scan an image. The image data acquisition unit acquires pixel data of first image data of an erect image obtained by the line sensor, which comprises a plurality of light receiving elements arranged in a line. The erect image is formed by a plurality of gradient index lenses of a lens array.
The filter setting unit sets sharpening filter coefficients of a filter to be applied to the pixel data of the first image data corresponding to the respective light receiving elements, according to remainder values obtained respectively by dividing positions of the light receiving elements on the line sensor by an interval of a lens arrangement of the lens array. The filtering processor generates second image data sharpened by applying the obtained sharpening filter coefficients to the respective pixel data of the first image data.
Hereinafter, further embodiments will be described with reference to the drawings. In the drawings, the same numerals denote the same or similar portions, respectively.
A first embodiment will be described with reference to
As illustrated in
The image data acquisition unit 101 obtains first image data as pickup data from a contact image sensor of a scanner, which will be described below. The filter setting unit 102 sets a sharpening filter. The filtering processor 103 obtains second image data, i.e. sharpened image data, by processing the first image data with the sharpening filter.
A contact image sensor 303 scans one line of an image, and then moves to a position to scan the next line. By repeating the scanning and moving, pickup data of the entire document can be obtained. Hereinafter, the moving direction of the contact image sensor 303 is called the principal scan direction. In
The image sensor 401 has plural elements, i.e. light receiving elements 400 to obtain image data of one line. The lenses 402 form an erect image on the image sensor 401. The image sensor 401 of the embodiment is a line sensor having the elements arranged on a line.
The lenses 402 of the embodiment are gradient index (GRIN) lenses. Each of the lenses 402 forms an erect same-size image on the image sensor 401. Hereinafter, the arrangement direction of the elements of the image sensor 401 and the lenses 402 is called the sub-scan direction.
The filter setting unit 102 of
In
The image quality deterioration of the contact image sensor 303 can be characterized by a point spread function (hereinafter, abbreviated as “PSF”). The PSF depends on shapes of the lenses 402 seen from the light receiving elements 400.
A distance L501 from the reference position B to the center C501 of the element 501 is 2 p. The center C501 of the element 501 is located at a position having a distance Δ501 from the left end of the lens 502. Since the reference position B and the left end of the lens 502 coincide in the drawing, the distance Δ501 is also 2 p, and a distance d501 from the center of the lens 502 to the center C501 of the element 501 is 0.25 p.
Similarly, since a distance Δ503 from the left end of the lens 504 to the center C503 of the element 503 is 2 p, a distance d503 from the center C504 of the lens 504 to the center C503 of the element 503 is 0.25 p as well. The center C504 of the lens 504 is located on the element 503. Accordingly, the shape of the lens 502 seen from the element 501 is matched with the shape of the lens 504 seen from the element 503. In this case, the PSF obtained for the element 501 is matched with the PSF obtained for the element 503.
On the other hand, since a distance Δ506 from the left end of the lens 507 to the center C506 of the element 506 is 2.5 p, a distance d506 from the center C507 of the lens 507 to the center C506 of the element 506 is 0.75 p. The center C507 of the lens 507 is not located on the element 506. Accordingly, the shape of the lens 507 seen from the element 506 is different from the shape of the lens 502 seen from the element 501. In this case, the PSF obtained for the element 506 is different from the PSF obtained for the element 501.
The image data acquisition unit 101 of
In the first image data, since the data obtained from the respective elements 400 of the image sensor 401 are arranged according to a certain rule, the element that acquired the signal of each pixel can be identified.
The filter setting unit 102 calculates a remainder value Δi by dividing the relative position Li (for example, L501, L503 or L506) of the element i (for example, the element 501, 503 or 506) with respect to the reference position B by the interval R of the lens arrangement.
The reference position B, i.e. the left end of the lens 502, is not always located at the left end of the image sensor as illustrated in
Further, due to occurrence of the gaps 8 between the respective image sensor units 401a as illustrated in
When the relative position Li is correctly obtained by the above measures, the distance Δi can be defined by the following equation.
Δi = Li − R × F(Li / R)   (1)
In the equation (1), "F" denotes a function that drops the fractional portion of Li/R, i.e. the floor function. The range of the distance Δi obtained by the equation (1) is equal to or larger than zero and smaller than R. For plural elements, i.e. pixels, having the same distance Δi, the same PSF can be utilized. In the example of
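The remainder calculation of the equation (1) can be sketched as follows. This is a minimal illustration: the element pitch p = 1.0 and the lens interval R = 3.5 p are assumed example values inferred from the distances discussed above, not design parameters stated in this description.

```python
import math

def remainder_position(L_i, R):
    # Equation (1): subtract the integer number of lens periods from L_i,
    # leaving the position of element i within one lens period (0 <= result < R).
    return L_i - R * math.floor(L_i / R)

# Assumed example values (inferred, not stated in the description):
p = 1.0        # element (pixel) pitch
R = 3.5 * p    # interval of the lens arrangement
```

Under these assumptions, an element at Li = 2 p and an element one lens period away at Li = 5.5 p share the same remainder Δi = 2 p, so the same PSF, and hence the same filter coefficient, applies to both.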
Then, the filter setting unit 102 outputs the filter coefficient corresponding to the distance Δi and the floating amount "h" (step 3). The floating amount "h" is assumed to be measured in advance by the distance sensor 305. As the distance sensor 305, an infrared displacement sensor using a triangulation method may be utilized. An inverted filter coefficient calculated from the PSF corresponding to each distance Δi may be utilized as the filter coefficient corresponding to the distance Δi. Once the floating amount "h" and the distance Δi are fixed for a given lens arrangement, the PSF can be estimated by utilizing a ray-tracing method, for example.
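One plausible way to compute an inverted filter coefficient from an estimated PSF is a regularized frequency-domain inverse, sketched below for a one-dimensional PSF. The regularization constant eps is an assumption; the description does not specify how the inversion is stabilized.

```python
import numpy as np

def inverted_filter(psf, size, eps=1e-3):
    # Frequency response of the PSF.
    H = np.fft.fft(psf, size)
    # Regularized inverse (pseudo-Wiener form); eps avoids dividing by
    # near-zero frequency components where the PSF passes no energy.
    G = np.conj(H) / (np.abs(H) ** 2 + eps)
    # Back to a spatial filter, centered for use as a FIR kernel.
    return np.fft.fftshift(np.real(np.fft.ifft(G)))
```

For a degenerate PSF that is a pure impulse (no blur), the resulting kernel is approximately an impulse as well, confirming that the inverse leaves an unblurred image essentially unchanged.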
The filtering processor 103 of
Generally, a filter is designed on the assumption that the elements, i.e. the pixels, are arranged at constant intervals. Accordingly, when a filtering result is obtained corresponding to the position of the element 905, for example, filtering can be performed without a problem by using imaging data obtained from the elements 903, 904, 905, 906 and 907. However, no element exists at the position apart from the element 906 by 2 p in the direction opposite to the element 904, so that filtering cannot be performed there. In order to resolve the problem, it is sufficient to interpolate a signal for the position apart from the element 906 by 2 p in the direction opposite to the element 904. For example, an average of the signals from the elements 907 and 908 may be used as the pixel signal for the position where no element, i.e. no pixel, exists.
For performing such pixel signal interpolation, the interval between the image sensor units 901 and 902 is desirably an integer multiple of the element pitch, i.e. the pixel pitch p. When the interval between the elements 907 and 908 is 1.5 p, for example, interpolation must be performed twice to filter the pixel 907, because no element exists at the position apart from the element 907 by 1 p in the direction opposite to the element 906, nor at the position apart by 2 p.
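The interpolation above can be sketched with a hypothetical helper that fills the missing pixel positions between two sensor units; for a single missing position it reduces to the simple average of the two neighboring signals, as suggested for the elements 907 and 908. The function name and signature are illustrative, not taken from this description.

```python
def fill_sensor_gap(left_unit, right_unit, n_missing):
    # Hypothetical helper: insert n_missing pixel values between two sensor
    # units whose gap is an integer multiple of the pixel pitch. Each missing
    # value is linearly interpolated between the nearest real pixels; with
    # n_missing == 1 this is the plain average of the two neighbors.
    a, b = left_unit[-1], right_unit[0]
    filled = list(left_unit)
    for k in range(1, n_missing + 1):
        t = k / (n_missing + 1)
        filled.append(a * (1 - t) + b * t)
    filled.extend(right_unit)
    return filled
```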
The above processing may be unnecessary when the gaps 6 between the image sensor units are interpolated on the contact image sensor side, i.e. in the imaging device, or when calculation of an inverted filter coefficient is executable according to the actual pixel arrangement.
As described above, in the image processing system according to the first embodiment, filtering is performed using the same filter coefficient for plural points separated by intervals equal to the diameter R of the lenses 502, 507 and 504, and using different filter coefficients otherwise. In this manner, filtering is performed according to the PSF variation depending on the scanning position, so that the image quality of the reproduced result, i.e. the reproduced image, can be enhanced.
In the above description of the embodiment, an image sensor having lenses arranged on one line is used, for example, as illustrated in
An image sensor having lenses arranged on two lines may be used, as illustrated in
The filter setting unit 102 of
In the above embodiment, a measurement result of the relative position of an image sensor and lenses is used as an offset to be added to the relative position Li of an element i. The filter setting unit may instead store an offset in a memory provided in a filter coefficient selecting unit, which will be described below, and use the stored offset. The stored offset is one that yields a favorable image quality. Such an offset may be determined by verifying image-processed results while varying the value of the offset at the stage when the contact image sensor, i.e. the imaging device, and the image processing system are connected.
Calculation of the inverted filter coefficient described in the above embodiment has a high calculation cost. To reduce this cost, the filter setting unit 102 may previously calculate and store the inverted filter coefficients as a table, and look up an appropriate filter coefficient depending on an inputted distance Δi.
The filter coefficient table 701 stores N pairs of previously calculated inverted filter coefficients and the corresponding distances Δi. The filter coefficient selecting unit 702 selects an appropriate filter coefficient from the filter coefficient table 701, based on an inputted pixel position x, an offset, and data from a distance input unit 704, which receives a measurement result from the distance sensor 305. The filter coefficient table 701 can store filter coefficients for N filters. The filter coefficient corresponding to the distance Δi = (k − 1) × R / N is stored in the k-th block of the table, where "N" and "k" are positive integers.
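The block selection can be sketched as follows. The floor-quantization rule (choosing the block whose representative distance is nearest below Δi) is an assumption; the description states only which distance each block holds, not how an arbitrary Δi is mapped to a block.

```python
def select_block(delta_i, R, N):
    # The k-th block (1-based) holds the coefficient for the representative
    # distance (k - 1) * R / N, so an arbitrary 0 <= delta_i < R is mapped
    # to a block by quantizing it into N steps of width R / N.
    k = int(delta_i * N / R) + 1
    return min(k, N)  # clamp guards against rounding at delta_i close to R
```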
The filter setting unit may include plural filter coefficient tables 701a as illustrated with broken lines in
The filter coefficient tables 701a are capable of storing, in advance, filter coefficients corresponding to different floating amounts, lens diameters, i.e. intervals of the lens arrangement, pixel sizes, i.e. arrangement intervals of the elements, and arrangement intervals of the image sensor units. Thus, the filter coefficient tables 701a allow the image processing system to perform filtering appropriately even when various floating amounts occur or when the contact image sensor is made by a different design.
The design parameter acquisition unit 703 obtains the different floating amounts, the lens diameter, i.e. the interval of the lens arrangement, the pixel size, i.e. the arrangement interval of the elements, and the arrangement interval of the image sensor units, and provides them to the filter coefficient selecting unit 702, so that an appropriate table can be selected from the tables 701a.
The filter setting unit may previously learn a filter coefficient that minimizes the squared error between the data of an image of a document scanned under an ideal condition and the filtering result of an image of the same document scanned under a condition in which the document is intentionally floated.
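Such least-squares learning can be sketched as follows, assuming one-dimensional signals and a fixed number of filter taps; the function name and tap count are illustrative, not taken from this description.

```python
import numpy as np

def learn_filter(floated, ideal, taps=5):
    # Least-squares sketch: find FIR weights w minimizing
    # || ideal - floated (*) w ||^2 over all valid output positions,
    # where (*) denotes filtering with a window of `taps` samples.
    half = taps // 2
    rows, targets = [], []
    for i in range(half, len(floated) - half):
        rows.append(floated[i - half:i + half + 1])
        targets.append(ideal[i])
    w, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return w
```

As a sanity check, when the "floated" and ideal images are identical (no blur), the learned filter is approximately the identity (an impulse at the center tap).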
Plural filter coefficients whose relations with the distance Δi have been learned in advance may be stored in the filter coefficient table 701 of
The filter coefficient selecting unit 702 of
The image processing system of the above embodiment can also be realized by using a general-purpose computer as basic hardware. The image data acquisition unit 101 may be an external interface of the computer, and the filter setting unit 102 and the filtering processor 103 may be realized by executing a program with a processor installed in the computer. The image processing system may be realized by installing the program in the computer in advance, or by installing the program appropriately afterward, for example, by storing the program on a storage medium such as a CD-ROM or by distributing the program via a network.
Further, the filter setting unit 102 and the filtering processor 103 can be realized by appropriately utilizing a storage medium attached internally or externally to the computer, such as a memory, a hard disk, a CD-R, a CD-RW, a DVD-RAM and a DVD-R.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel systems and apparatuses described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the systems and apparatuses described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2010-65224 | Mar 2010 | JP | national |