1. Field of the Invention
The present invention generally relates to the calibration of scanning systems, such as those found in host-based scanners, all-in-one (AIO) devices, and the like.
2. Description of the Related Art
When scanning an image using a host-based scanner or multi-functional device such as an all-in-one (AIO), it is necessary to compensate for imperfections in the scanning system in order to accurately reproduce the target image. Two characteristics of CCD-based image sensors contained in scanners that require such compensation are dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU). DSNU refers to the pixel-to-pixel variation in a CCD array's response at the detected black level, or zero-light level. PRNU refers to the pixel-to-pixel variation in a CCD array's response at the detected white level, or fixed-intensity light level. Failure to properly compensate for these imperfections results in visual artifacts such as vertical streaking and parasitic light areas in dark regions of the reproduced image.
Several methods exist to compensate for PRNU and DSNU. For DSNU, an offset value can be recorded for each pixel (CCD element) during calibration by taking a sample scan with the light off or by scanning a black calibration strip. The recorded offset values, commonly known as black-level offsets, can then be subtracted from each incoming pixel to properly adjust the black level of each pixel. For optimal quality, this operation can be performed pixel-to-pixel on the analog representation of the pixel before it is digitized. To increase performance and reduce system cost/complexity, this pixel-to-pixel compensation can be performed after digitization at the expense of some quality. Some scanners apply a single average offset to all pixels to further minimize cost/complexity and maximize performance. For PRNU, a gain value can be recorded for each pixel during calibration by scanning a white calibration strip. The recorded gain values, commonly known as white-level gains, can then be used to multiply each incoming pixel by the appropriate gain factor to stretch the output to the correct intensity level. As with DSNU compensation, this can be performed pixel-to-pixel in the analog or digital domain, with some systems applying a universal gain to all pixels at the expense of quality.
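As an illustrative sketch only, the digital-domain pixel-to-pixel correction described above may be expressed in C as follows; the function names, data widths, and clamp range are assumptions made for illustration and do not represent any particular scanner's implementation.

/* Illustrative sketch: apply per-pixel black-level offsets (DSNU) and
 * white-level gains (PRNU) in the digital domain. Names, widths, and the
 * clamp range are assumptions, not a specific product's interface. */
#include <stdint.h>

static inline uint16_t clamp_code(int32_t v, int32_t max_code)
{
    if (v < 0)        return 0;
    if (v > max_code) return (uint16_t)max_code;
    return (uint16_t)v;
}

/* pixels_in/out: one scan line of raw samples (e.g., 12-bit data in 16-bit words)
 * offset[i]:     black-level offset recorded for CCD element i
 * gain[i]:       white-level gain recorded for CCD element i               */
void compensate_line(const uint16_t *pixels_in, uint16_t *pixels_out,
                     const uint16_t *offset, const float *gain,
                     int num_pixels, int32_t max_code)
{
    for (int i = 0; i < num_pixels; i++) {
        int32_t v = (int32_t)pixels_in[i] - (int32_t)offset[i];  /* DSNU */
        float   g = gain[i] * (float)v;                          /* PRNU */
        pixels_out[i] = clamp_code((int32_t)(g + 0.5f), max_code);
    }
}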
Performing these pixel-to-pixel corrections in the analog domain is often unrealistic for today's end-user scanners due to the cost and performance constraints of these products. It is common to utilize a single average offset and a single average gain for all pixels in the analog domain to maximize the dynamic range of the A/D converter in the scanner's analog front-end (AFE). This is often followed by pixel-to-pixel corrections in the digital domain to correct for the CCD element variations. Lower quality scanners bypass the pixel-to-pixel correction altogether because of cost and performance limitations. A 9″ wide 600-ppi scanner requires 5400 offset and gain values per color (red, green, and blue) to be stored during calibration and used for each incoming scan line. If each offset and gain value is one byte, this results in over 31 KB of data that must be stored and applied to each incoming scan line. This often becomes the performance bottleneck in scanners, especially in all-in-one devices where the scan data must be processed into print data during a copy. In such devices, a tremendous amount of data must be pushed into and out of a single memory to complete a scan-to-print operation. Requiring 31 KB of data to be read from main memory for each scan line can limit scan and print speed due to the amount of memory bandwidth required. Utilizing a local memory to store the calibration data can increase the cost of the ASIC substantially due to the size of such a memory. This requirement increases as scan resolution increases, making the problem worse as scanner technology advances.
The present invention is directed to a system and method for reducing the memory requirement for offset and gain calibration to relieve the size/performance bottleneck in scanner systems. The resulting methodology produces visually equivalent scanned results with a substantial increase in performance, which results in a shorter amount of time required to output a first copy in, for example, an all-in-one device.
Specifically, the present invention includes a method for reducing the scanner calibration data in size by, in one embodiment, 33% to 83%. The resulting image is visually equivalent to a scanned image compensated with non-compressed scanner calibration data. Since the calibration step is often the bottleneck in scanner performance, this method noticeably speeds up scan and copy time. Implementing the decompression in hardware requires a minimal amount of hardware overhead and complexity. Thus, this method has a minimal impact on the size and cost of the scanner controller (e.g., ASIC—application-specific integrated circuit). Since compression takes place at most once per scan, this added step has no significant impact on the overall scan time. By allowing dynamic grouping of pixels under a single calibration packet, the quality of the compensation can be optimized while the size of the compensation data is minimized. By adding the ability to shift the compressed deviation stored in the calibration packet, the range of the pixel-to-pixel deviation can be increased without impacting the size of the calibration data. This flexibility makes the invention applicable to future image sensors that may have widely varying deviations in pixel-to-pixel offset and gain values.
These and other aspects will become apparent from the following description of the invention, although variations and modifications may be effected without departing from the spirit and scope of the novel concepts of the present disclosure.
A white light source 101 such as a fluorescent bulb is used to illuminate a line of the target image 102. This type of light source contains the red, green, and blue wavelengths of light. The light reflects off of the target image and is directed through a series of optical elements 103 that shrink the image down to the size of the small image sensor 104.
The image sensor 104 typically contains three rows of elements. As shown in
Today's scanners typically capture 36 to 48-bits of digital data from the AFE 105 then convert this down to a 24-bit image. The other piece of the scanner not shown in
One of the three light sources, red 301R, green 301G, or blue 301B, is turned on, exposing the target image 302 to that particular wavelength of light. The light reflects off of the target image 302 and exposes a single line of image sensor elements 304. This sensor 304 has no color filter on it, so it is used for all three light sources. The charge accumulated by the sensor 304 is shifted into the AFE 305, where it is digitized and sent to a controller ASIC 306. The next light source then turns on and the process repeats.
Note that three scans are required for each line corresponding to turning on each light source one at a time. For optical reduction scanners, one scan is required for each line since there is a single white light source and three filtered image sensor lines.
The AFE typically contains calibration values to set the analog white and black point to the A/D converter. This may be a single value for offset and a single value for gain that is applied to every pixel in the line (usually, there is a unique offset and gain value for each color resulting in 6 total values). This is done to maximize the dynamic range of the A/D converter. The digital controller ASIC typically contains access to pixel-to-pixel calibration values for the digital white and black points for each pixel in a line. This step corrects for the non-uniformity of the image sensors/optics/illumination from pixel-to-pixel to normalize the captured line response. The pixel-to-pixel calibration values for the digital white and black points are one of the aspects of the present scanner calibration invention.
The following paragraphs describe in further detail a method for reducing the amount of data that must be stored into memory for performing visually equivalent photo response non-uniformity (PRNU) and dark signal non-uniformity (DSNU) compensation pixel-to-pixel after digitization of the scan data.
An example prior art flow diagram for calibrating a scanner to compensate for PRNU and DSNU and applying the compensation for DSNU and PRNU is shown in
Beginning with step 401, the scan begins. In step 402, one or more lines of black data are scanned, or one or more lines are scanned with the light off. In step 403, pixel-to-pixel offset data are calculated and then stored to memory in step 404. In some cases, no calculation of this offset data is required: for example, when a black line is scanned, the data is expected to have a value of 0, so if a pixel instead has a value of 2, the offset is simply the scanned value, or, in this example, 2. In step 405, one or more lines of white data are scanned. In step 406, pixel-to-pixel gain data are computed and then stored to memory or a buffer in step 408.
In step 409, a line of the target image is scanned, and in step 410, the pixel-to-pixel offset data is subtracted from the scanned line of the target image. In step 411, pixel-to-pixel multiplication using the gain data is performed on the scanned line of the target image. In step 412, the PRNU/DSNU compensated scan line resulting from steps 410 and 411 is stored to memory or a buffer. In step 413, a determination is made as to whether the end of the scanned image has been reached. If the scan of the target image is not complete, the process is repeated beginning at step 409 for each of the remaining scan lines of the target image. If the scan is complete, the process ends at step 414.
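The following C sketch illustrates how the offset and gain values of steps 403 and 406 might be derived from averaged black and white calibration lines; the target white code, the data types, and the names are illustrative assumptions rather than the exact computation used in any particular scanner.

/* Sketch of one way steps 402-408 could compute per-pixel calibration values,
 * assuming averaged black and white scan lines are already available. */
#include <stdint.h>

void build_calibration(const uint16_t *black_line,   /* averaged dark scan    */
                       const uint16_t *white_line,   /* averaged white strip  */
                       uint16_t *offset, float *gain,
                       int num_pixels, uint16_t target_white)
{
    for (int i = 0; i < num_pixels; i++) {
        offset[i] = black_line[i];                    /* step 403: DSNU offset */
        uint16_t span = (white_line[i] > offset[i])   /* white minus black     */
                      ? (uint16_t)(white_line[i] - offset[i]) : 1;
        gain[i] = (float)target_white / (float)span;  /* step 406: PRNU gain   */
    }
}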
Notice that for each scan line, the pixel-to-pixel offset and gain data is applied during the compensation steps. If the offset and gain data is stored in main memory, it must be read from memory once for each line that is scanned in. This operation consumes an enormous amount of memory bandwidth, which affects the overall performance of the device. In some embodiments, this step can be the performance bottleneck in all-in-one controller ASICs that are used to perform standalone copy operations.
An internal buffer can be utilized to reduce the memory bandwidth that the compensation operation consumes. However, traditional offset and gain data is substantial in size, requiring a very large buffer. The size of this buffer often exceeds the size and cost constraints for a scanner controller ASIC.
Using traditional techniques, the offset and gain data that is stored to memory corresponds to one to two bytes of offset data plus one to two bytes of gain data per pixel for an entire scan line. Each pixel is composed of three colors: red, green, and blue. For a 9″ 600-ppi scanner, this corresponds to (9 inches)*(600 pixels/inch)*(3 colors/pixel)*(1 to 2 bytes offset data + 1 to 2 bytes gain data) = 31 KB to 62 KB of compensation data. If the scanner data is truncated to 24-bit pixels before being stored to memory, the compensation data will be two to four times as large as the output scan line that is written to memory. Thus, reading in the compensation data from memory will consume significantly more memory bandwidth than writing out the scan line to memory, making PRNU/DSNU compensation the performance bottleneck.
In order to develop a method to reduce the amount of compensation data that must be stored to memory, actual scanner calibration data must first be analyzed. Sample calibration data for a 300-ppi scan is shown in
Notice that in the example of
In the preferred embodiment, the calibration data is compressed to store only the deviation from the previous value. The deviation is a signed number with a specified range. Given a starting point, each deviation packet will add or subtract from the previously computed value. As long as the deviation has enough range to cover more than the nominal pixel-to-pixel deviation, the resulting image will be visually equivalent since the calibration data will be nearly identical after it is computed.
In a preferred embodiment, if a value deviates by more than the range provided by the algorithm, then the deviation is clamped to its maximum and the resulting error is diffused to the next deviation. This allows an overflow error to occur on one calibration value with no error for subsequent values. By diffusing the error, the resulting compensation will be visually equivalent.
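The following C sketch illustrates the deviation encoding and error diffusion described above, assuming 8-bit calibration values and a signed 5-bit deviation range of −16 to +15; the names and types are illustrative only.

/* Sketch of delta (deviation) encoding with clamping and error diffusion.
 * Because the encoder tracks the value the decoder will reconstruct, any
 * clamped (out-of-range) error is automatically diffused into the next
 * deviation, so only one value in the run carries an error. */
#include <stdint.h>

void encode_deviations(const uint8_t *calib, int8_t *dev, int count, uint8_t start)
{
    int reconstructed = start;              /* value the decoder will hold   */
    for (int i = 0; i < count; i++) {
        int delta = (int)calib[i] - reconstructed;
        if (delta >  15) delta =  15;       /* clamp to the 5-bit range      */
        if (delta < -16) delta = -16;
        dev[i] = (int8_t)delta;
        reconstructed += delta;             /* error diffuses to next delta  */
    }
}

void decode_deviations(const int8_t *dev, uint8_t *calib, int count, uint8_t start)
{
    int value = start;
    for (int i = 0; i < count; i++) {
        value += dev[i];
        calib[i] = (uint8_t)value;
    }
}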
As an example, consider the data provided in
Consider calibration data that has a nominal deviation that exceeds the range provided by the previous example (−16 to +15). In a preferred embodiment, the stored deviation can be the same size but be accompanied by a programmable shift that is set for the entire calibration set. To clarify, the deviation applied would be the stored value shifted left by the programmed amount. If the shift were programmed to be one, then each deviation value would be multiplied by two (shifted left by one). This would provide a range of (−32 to +31) in increments of two. While the resolution of the deviation has decreased, the range has doubled. When calculating the resulting calibration data, target deviations that are not multiples of two will have an error of one (e.g., a deviation of 11 is desired, but only a deviation of 10 or 12 is possible when shifting the value left by one). If the deviation value is shifted left by two, target deviations that are not multiples of four will have an error of the value modulo four (value % 4). This error is then diffused the same way an out-of-range error is diffused to the next pixel. Again, the result will be visually equivalent.
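The next C sketch extends the previous one with the programmable deviation shift described above; the use of integer division to derive the coded value, and the names, are illustrative assumptions.

/* Sketch of deviation encoding with a programmable shift: the stored 5-bit
 * value is multiplied by 2^shift when applied, trading resolution for range.
 * The residual (the target deviation modulo 2^shift) is diffused to the next
 * pixel exactly like an out-of-range error. */
#include <stdint.h>

void encode_deviations_shifted(const uint8_t *calib, int8_t *dev, int count,
                               uint8_t start, unsigned shift)
{
    int reconstructed = start;
    for (int i = 0; i < count; i++) {
        int delta = (int)calib[i] - reconstructed;
        int coded = delta / (1 << shift);       /* drop low bits, widen range */
        if (coded >  15) coded =  15;
        if (coded < -16) coded = -16;
        dev[i] = (int8_t)coded;
        reconstructed += coded * (1 << shift);  /* decoder applies the shift;
                                                   residual error diffuses on */
    }
}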
In a preferred embodiment, one programmable deviation shift exists for all offset values and another programmable deviation shift exists for all gain values. This approach addresses the observation that the average offset deviation range may be very different from the average gain deviation range, while the size of the stored deviation data remains the same for both.
If a group of pixels uses approximately the same offset and gain values, then those pixels may use the same offset and gain values for each PRNU and DSNU compensation and still produce visually equivalent results. In a preferred embodiment, the resulting offset and gain values can be used multiple times as specified by a repeat-packet field stored in each calibration data packet. If the offset and gain deviation values are 5 bits each, then for an RGB pixel, 30 bits of calibration data are stored. This data is considered to be part of the calibration data packet. A repeat-packet field completes the calibration packet. In the hardware implementation, the repeat-packet field is two bits and specifies how many times the resulting offset and gain values will be repeated, zero to three times. This is equivalent to grouping the pixels together in groups of one to four during calibration.
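One possible packing of such a 32-bit calibration packet is sketched below in C; the field order and bit positions are assumptions made for illustration, as the actual layout is implementation specific.

/* Sketch of a 32-bit calibration packet: six signed 5-bit deviations (offset
 * and gain for R, G, and B) plus a 2-bit repeat-packet field. */
#include <stdint.h>

#define DEV_MASK 0x1F                                  /* 5-bit field           */

static uint32_t pack_calib_packet(const int8_t dev[6], unsigned repeat)
{
    uint32_t packet = (uint32_t)(repeat & 0x3);        /* bits 0-1: repeat 0-3  */
    for (int i = 0; i < 6; i++)                        /* bits 2-31: deviations */
        packet |= ((uint32_t)dev[i] & DEV_MASK) << (2 + 5 * i);
    return packet;
}

static void unpack_calib_packet(uint32_t packet, int8_t dev[6], unsigned *repeat)
{
    *repeat = packet & 0x3;
    for (int i = 0; i < 6; i++) {
        uint32_t v = (packet >> (2 + 5 * i)) & DEV_MASK;
        dev[i] = (v & 0x10) ? (int8_t)((int)v - 32) : (int8_t)v;  /* sign-extend */
    }
}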
In one embodiment, the numbers in the calibration transformation curve diagram of
Offset Calculation:
Pixel_With_Offset=Pixel_Uncorrected−(Offset_Value<<Constant)
With reference to the “left shift by constant” step previously described with respect to
Gain Calculation:
Pixel_With_Gain=Pixel_With_Offset*(Gain_Value+128)/128
In one embodiment, 128 may be used as the constant for calculating the 1.0 to 2.99 gain. An 8-bit gain value (when uncompressed) may also be used, so the maximum gain is (255+128)/128=2.99. The pixel may be multiplied by 1.0 to 2.99 in order to stretch its value up to the white point.
Each pixel may contain an 8-bit offset and an 8-bit gain for each red, green, and blue component of the pixel. Uncompressed, this is 3*(8+8)=48 bits of calibration data per input RGB pixel.
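The offset and gain equations above can be combined into a single C routine as sketched below; the 16-bit sample width, saturation limits, and shift constant of 6 are illustrative assumptions consistent with the worked example that follows.

/* Sketch of the per-pixel calibration math:
 *   Pixel_With_Offset = Pixel_Uncorrected - (Offset_Value << Constant)
 *   Pixel_With_Gain   = Pixel_With_Offset * (Gain_Value + 128) / 128        */
#include <stdint.h>

static uint16_t apply_offset_gain(uint16_t pixel_in, uint8_t offset_value,
                                  uint8_t gain_value, unsigned shift_constant)
{
    int32_t with_offset = (int32_t)pixel_in
                        - ((int32_t)offset_value << shift_constant);
    if (with_offset < 0)
        with_offset = 0;                         /* clip below the black point */
    int32_t with_gain = with_offset * ((int32_t)gain_value + 128) / 128;
    if (with_gain > 0xFFFF)
        with_gain = 0xFFFF;                      /* clip above the white point */
    return (uint16_t)with_gain;
}

With the values used in the worked example below, apply_offset_gain(4660, 21, 72, 6) returns 5181.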
Another example of captured calibration curves is given in
As an example, the offset and gain will be calculated for a pixel using the values in
Pixel[100].AfterOffset=Pixel[100].In−(21<<6)=4660−1344=3316.
Pixel[100].Calibrated=Pixel[100].AfterOffset*(72+128)/128=3316*1.5625=5181.
Notice that in
Keep in mind that this is how offset and gain calibration may be performed in one embodiment. The end goal is the same: get the white and black response normalized across the scanned line.
In order to utilize the methodology of the present invention in one embodiment, the flow in
With reference to
Beginning with step 801, the scan begins. In step 802, one or more lines of black data are scanned, or one or more lines are scanned with the light off. In step 803, pixel-to-pixel offset data are calculated and then stored to memory in step 804. In some cases, a calculation of this offset data is not required, as previously explained with respect to step 403 of
In step 811, a line of the target image is scanned, and in step 812, the compressed offset and gain values are read from memory and decompressed. In step 813, the decompressed pixel-to-pixel offset data is subtracted from the scanned line of the target image. In step 814, pixel-to-pixel multiplication using the decompressed gain data is performed on the scanned line of the target image. In step 815, the PRNU/DSNU compensated scan line resulting from steps 813 and 814 is stored in memory or a buffer. The PRNU/DSNU compensated line can also be output to a computer or processor connected to the scanner. In step 816, a determination is made as to whether the end of the scanned image has been reached. If the scan of the target image is not complete, the process is repeated beginning at step 811 for each of the remaining lines of the target image. If the scan is complete, the process ends at step 817.
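The following C sketch combines steps 812 through 814 for one scan line: it walks the compressed packet stream, rebuilds the running offset and gain values for each color, honors the repeat-packet field, and applies the compensation. The packet layout, the shift parameters, and the zero starting values are the same illustrative assumptions used in the earlier sketches.

/* Sketch of per-line decompression and PRNU/DSNU compensation. line_in and
 * line_out hold interleaved RGB samples. The running sums start at zero for
 * illustration; a real implementation would seed them from the calibration
 * step. */
#include <stdint.h>

static int sext5(uint32_t v) { return (v & 0x10) ? (int)v - 32 : (int)v; }

void decompress_and_compensate(const uint32_t *packets,
                               const uint16_t *line_in, uint16_t *line_out,
                               int num_pixels, unsigned offset_shift,
                               unsigned dev_shift_off, unsigned dev_shift_gain)
{
    int offset[3] = {0, 0, 0};                   /* running values per color  */
    int gain[3]   = {0, 0, 0};
    int repeat    = 0;                           /* pixels left in this group */

    for (int p = 0; p < num_pixels; p++) {
        if (repeat == 0) {                       /* step 812: decode a packet */
            uint32_t pkt = *packets++;
            repeat = (int)(pkt & 0x3) + 1;       /* group of one to four      */
            for (int c = 0; c < 3; c++) {
                offset[c] += sext5((pkt >> (2 + 5 * c)) & 0x1F)
                           * (1 << dev_shift_off);
                gain[c]   += sext5((pkt >> (2 + 5 * (c + 3))) & 0x1F)
                           * (1 << dev_shift_gain);
            }
        }
        for (int c = 0; c < 3; c++) {            /* steps 813-814             */
            int32_t v = (int32_t)line_in[3 * p + c]
                      - offset[c] * (1 << offset_shift);
            if (v < 0) v = 0;
            v = v * (gain[c] + 128) / 128;
            line_out[3 * p + c] = (v > 0xFFFF) ? 0xFFFF : (uint16_t)v;
        }
        repeat--;
    }
}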
To quantify the amount of reduction in calibration data, consider a traditional offset value and gain value of one byte each, for a total of two bytes of calibration data required per color plane per pixel. For an RGB color pixel, this results in six bytes, or 48 bits, of calibration data per pixel. As shown in a previous calculation, this requires 31 KB of calibration data per scan line. By storing only the deviation for the offset and gain, each offset and gain value can be reduced from 8 bits to 5 bits using the method discussed in this document. If each pixel has one 32-bit calibration packet associated with it (i.e., there are no calibration groups greater than one pixel), then for a 9″ 600-ppi scan line, 21 KB of calibration data would be input per line. This is a 33% reduction in data. If the pixels were on average grouped two pixels per calibration packet, then 10.5 KB of calibration data would be input per line. This is a 66% reduction in data. If the pixels were on average grouped three pixels per calibration packet, then 7.03 KB of calibration data would be input per line. This is a 78% reduction in data. If the pixels were on average grouped four pixels per calibration packet, then 5.27 KB of calibration data would be input per line. This is an 83% reduction in data.
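The reductions quoted above follow directly from the line width and the packet size; the short C program below reproduces the arithmetic under the stated assumptions of a 5400-pixel line, six bytes of traditional calibration data per pixel, and one four-byte packet per pixel group.

/* Reproduce the calibration-data sizes for group sizes of one to four pixels. */
#include <stdio.h>

int main(void)
{
    const double pixels = 9 * 600;                /* 5400 pixels per line        */
    const double uncompressed = pixels * 6;       /* 6 bytes/pixel, about 31 KB  */
    for (int group = 1; group <= 4; group++) {
        double bytes = (pixels / group) * 4;      /* one 4-byte packet per group */
        printf("group of %d: %.2f KB per line, %.1f%% reduction\n",
               group, bytes / 1024.0,
               100.0 * (1.0 - bytes / uncompressed));
    }
    return 0;
}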
Reductions of this magnitude relieve the memory bandwidth that was previously consumed by the PRNU and DSNU compensation step. They also make it possible to add a local buffer to store the calibration data for even greater performance within the cost constraints of a scanner controller ASIC. The calibration data is small enough that it could even be stored on a host PC and sent through USB during the scan, thus eliminating the requirement for a local memory to store the calibration data in a host-based scanner.
Relieving this memory bottleneck improves performance significantly. In one embodiment, for the target ASIC performance model using an average grouping of four pixels per calibration packet, the amount of time to complete a copy operation in high quality copy mode (600×600 ppi scan, 1200×1200 dpi print, 100% coverage) dropped from 23.04 seconds to 21.65 seconds. For normal quality copy mode (300×600 ppi scan, 600×600 dpi print, 40% coverage), the amount of time to complete the copy operation dropped from 9.15 seconds to 8.46 seconds. Even though the entire copy operation is composed of several modules, all of which require memory bandwidth, relieving the PRNU/DSNU compensation bandwidth requirement has a significant impact on the overall system performance. Of course, it will be understood that the decreases in time to perform the copy operations described above are examples only. The actual decrease in time, and therefore increase in performance, achievable by using the teachings of the present invention will depend on a number of factors, as would be known to those of skill in the art.
An even more important advantage of this method is its impact on the overall memory bandwidth of an all-in-one controller ASIC. If the direct memory access (DMA) block is accessing memory more than 20% of the time while printing, the embedded processor may be unable to read instructions from memory in time to properly service interrupts that are critical to the system. Using traditional methods, the PRNU/DSNU compensation step may consume so much memory bandwidth that the scan and print speed must be slowed down in order to operate the device correctly. This has an even greater impact on copy time. Using the method described here, the PRNU/DSNU compensation step has significantly less impact on the overall memory bandwidth consumed. This can make the difference between printing at 30 inches per second (ips) and printing at 25 ips. Using the traditional method of six bytes per pixel, the calibration DMA channel alone consumes 1.5% of the 20% budget (a single channel among approximately 40 channels). Using the compressed method with four bytes per pixel, the bandwidth consumed is 1.0514%. At two bytes per pixel, it consumes 0.5257%. At one byte per pixel, it is 0.2629%, an 82% reduction from the traditional method.
The embodiments described above are given as illustrative examples only. It will be readily appreciated by those skilled in the art that many deviations may be made from the specific embodiments disclosed in this specification without departing from the scope of the invention.