This invention relates generally to lenticular images and more specifically to methods of halftoning continuous tone images for lenticular applications.
Generally speaking, the process of creating lenticular images is known in the art. The term “lenticular imaging” refers to the art of interleaving images behind an array of lenses such that a viewer views different images as the viewer's angle of perception changes relative to the lenses. Typically, lenticular arrays employ several lenses arranged as columns across a set of interleaved or interlaced (also called “spatially multiplexed”) images; however, several new array configurations are known allowing a wider variety of viewing possibilities. Software applications are known that can interlace various continuous tone images into a single continuous tone lenticular image to be placed behind a lens array.
Color digital images are typically made up of a grid of pixels. These pixels have a wide range of red, green, and blue light varying from black to full brightness. The intensity or brightness of any given color is referred to as a gray scale for the color. The gray scale ranges in value from zero to one hundred, also referred to as zero to one hundred percent. For simplicity, this application will refer only to gray scale with the understanding that any single color may be reproduced using the discussed methods. Digital images of this sort are often referred to as “continuous tone” images. While this way of representing images on a computer display or television works quite well, it does not work for printed images because there is no practical way to print ink at varying levels of intensity. Unlike on-screen pixels that each can have a wide range of intensity, individual printed dots cannot vary in brightness. In other words, any given spot on a printed image is either a full spot of ink or blank paper. Therefore, to make printed images fool the naked eye into seeing shades of gray and smooth tonal gradations, the continuous tone image must be processed into a form that will allow this. This process is known as halftoning or screening. The halftoning process involves the conversion of large pixels that each have varying shades of gray (from a continuous tone image) into much smaller spots that can have only black or white values (the halftoned image). When an image is halftoned, each continuous tone pixel (capable of 256 levels of brightness or tone) is broken down into a pattern of single-brightness dots of ink. To account for the varying levels of brightness in the original image, these patterns of ink dots, or screens, vary in either size or placement.
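By way of illustration only, the basic halftoning conversion described above can be sketched in Python. The 2×2 ordered-dither thresholds used here are an illustrative assumption and not part of this description; production screens use much larger dither matrices or the error-diffusion techniques discussed below.

```python
# Illustrative sketch: halftoning a continuous-tone tile with a
# (hypothetical) 2x2 ordered-dither matrix. Gray levels run 0-100
# (percent brightness), matching the scale discussed above.
BAYER_2X2 = [[12.5, 62.5],
             [87.5, 37.5]]  # illustrative thresholds, percent brightness

def halftone_ordered(image):
    """Convert a grid of 0-100 gray levels to binary printed dots.

    1 means blank paper (white) and 0 means a full spot of ink,
    reflecting that an individual printed dot cannot vary in brightness.
    """
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, gray in enumerate(row):
            threshold = BAYER_2X2[y % 2][x % 2]
            out_row.append(1 if gray > threshold else 0)
        out.append(out_row)
    return out
```

A uniform 50% gray tile comes out as an alternating pattern that covers half the spots, which is how the screen fools the eye into seeing a middle gray.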
The practice of creating three-dimensional and animated images through the process of printing onto lenticular material can be quite complex in both the pre-press and on-press arenas. This practice is commonly plagued with technical difficulties, and the end results are often unsatisfactory. There is also a trend in the industry to move towards thinner and finer lenticular materials. This is driven by a desire to reduce overall manufacturing costs while at the same time increasing the potential applications of lenticular products. The challenge here is that as lenticular materials become thinner, there is also geometric growth in the technical difficulties inherent in the current state of reproduction methods.
Over the past several years various commercial lenticular software products have become available, and these programs all tend to address the creation of interlaced lenticular files. Traditionally, once interlaced images are created, they are brought into a pre-press environment where they are treated in the same manner as standard, non-interlaced files. While existing pre-press workflow and halftoning methods work very well with traditional, continuous-tone images, they introduce a host of problems and unnecessary complexities into the discipline of lenticular printing. These problems present themselves both in workflow convolution as well as visually in printed lenticular work in the form of moire, banding, checkerboard patterning, ghosting, and blurry 3D images.
These visual problems have been addressed in several different ways with only limited levels of success. Most often what is attempted is to increase the fineness of the halftoning and printing processes, utilizing higher line screens and finer printing dots. An alternate method proposes the idea of halftoning each component image separately prior to interlacing. While these methods can often result in better quality printing plates, these ultra-fine dot plates introduce a whole new set of problems by placing unrealistic expectations on a printing press and its ability to reproduce such fine dot structures.
The above needs are at least partially met through provision of the method of producing improved lenticular images described in the following detailed description, particularly when studied in conjunction with the drawings, wherein:
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the arts will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein.
Described herein is a process that dramatically improves the quality of spatially multiplexed images (hereafter referred to as interlaced images) for use in conjunction with lens arrays such as lenticular, fly's eye, square, hexagonal, triangular, and diamond packed configurations (hereafter referred to as “lenticular”). Before describing this process in more detail, however, it may be helpful to first briefly describe certain current practices. For example, take a project where the intent is to produce a lenticular effect, using a cylindrical lens array, where three different images ‘flip’ from one image to the next and to the next as the viewer turns the lenticular print in their hand.
Given the images in
Assuming the array is viewed with the lens axes vertically arranged, the screen forms an auto-stereoscopic (meaning an image where an illusion of depth is created without the use of glasses or other viewing device) array where each eye sees a unique image thus creating the illusion of depth. As such, the component images are typically of a 3-D scene viewed from a revolving angle. In other cases dissimilar images are used to create a flip or animation effect.
Generally speaking, pursuant to these various embodiments, an image comprising a plurality of interlaced images is provided, and the image is halftone processed according to one or more processes. The image is typically halftone processed at least in part according to a predetermined function depending at least in part on a gray scale level for a given pixel and on gray scale levels for local pixels nearby the given pixel. The predetermined function can operate on a continuous tone version of the image or on a printed-dot model of the image. Alternatively, the predetermined function may include a predetermined error filter where halftoning error is distributed to pixels corresponding to the same interlaced image from which the error accumulates. The error may be capped at certain levels to avoid error build up. Also, the image may be mapped such that pixels from a given interlaced image are correlated with other pixels from the same interlaced image. Further, the image may be halftone processed according to a variety of printed dot models or dither arrays. Additionally, the image may be post-processed to arrange dots and/or shift columns of pixels to minimize overlap error. The image may also be modified to include extra pixels to align the interlaced images under the lenses.
Referring now to the drawings, and in particular to
One method of halftoning the interlaced image is by applying error diffusion. In one such embodiment, the step of halftone processing the image is performed at least in part according to a predetermined function depending at least in part on diffusing error from a pixel corresponding to a first interlaced image to other pixels corresponding to the first interlaced image. When the predetermined function operates on a continuous-tone version of the image, error is defined as the difference between the brightness of a pixel in the halftoned image, either 0 for black (a dot is printed on the pixel) or 100 (or 1) for white (a dot is not printed on the pixel), and the brightness, which varies between 0 and 100, of the corresponding pixel in the continuous tone image from which the halftoned image is derived. Alternatively, the predetermined function may operate on a printed-dot model of the image wherein the error is defined as the difference between the brightness of a pixel in the modeled image, which varies between 0 and 100, and the brightness of the continuous tone image. In either case, by distributing the error among pixels surrounding the pixel which is being processed, the brightness for a group of pixels in the halftoned image is approximately the same as the brightness of the same pixels in the continuous tone image.
The predetermined function may include a predetermined error filter that can be represented mathematically and stored in a computer memory where x[n] is the continuous tone image and y[n] is the halftoned image. Therefore, y[n] can be described as:

y[n] = Q(x[n] + xe[n]),

with Q(·) denoting the binary quantizer that outputs 1 (white) when its argument meets or exceeds the quantization threshold and 0 (black) otherwise, and
where xe[n] is the diffused quantization error accumulated during previous iterations of calculating the printed status of certain pixels. Thus, the error xe[n] can be represented by the equation:

xe[n] = −(b1·ye[n−1] + b2·ye[n−2] + . . . + bM·ye[n−M]),
with ye[n]=y[n]−(x[n]+xe[n]). The diffusion coefficients bi, which regulate how the error at pixel n transfers or diffuses into neighboring pixels, are such that the sum of all bi is 1. Computationally, error-diffusion can be done in-place with the output pixels, y[n], residing in the memory locations of the input pixels, x[n].
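The error filter above can be sketched concretely. The Floyd-Steinberg coefficients (7/16, 3/16, 5/16, 1/16) and the 0.5 quantization threshold below are assumptions chosen for illustration; the description only requires that the coefficients bi sum to 1. Gray levels are normalized to [0, 1], and the running x[n] + xe[n] values are kept in place, as noted above.

```python
# A minimal error-diffusion sketch. Assumed details: Floyd-Steinberg
# coefficients and a 0.5 quantization threshold, neither of which is
# mandated by the description above. x is a 2-D grid of gray levels in
# [0, 1]; the accumulated error xe[n] is carried in the working copy,
# so each entry holds x[n] + xe[n] when the pixel is quantized.
FS = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def error_diffuse(x):
    h, w = len(x), len(x[0])
    img = [row[:] for row in x]          # working copy: x[n] + xe[n]
    y = [[0] * w for _ in range(h)]
    for m in range(h):
        for n in range(w):
            y[m][n] = 1 if img[m][n] >= 0.5 else 0
            ye = y[m][n] - img[m][n]     # ye[n] = y[n] - (x[n] + xe[n])
            for dm, dn, b in FS:         # the bi sum to 1
                if 0 <= m + dm < h and 0 <= n + dn < w:
                    img[m + dm][n + dn] -= b * ye
    return y
```

On a uniform 50% gray patch this produces an alternating dot pattern whose printed-to-unprinted ratio matches the input tone, which is the property the error filter is designed to preserve.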
Noting that the printed dots of an ink-jet or similar printer can be accurately modeled as a binary, round, circular-dot such that an isolated black pixel is completely covered and portions of neighboring white pixels are partially covered, one skilled in the art will recognize that the binary halftones printed by error diffusion will always print darker than their ratio of printed to not-printed dots. As such, images will typically be tone-corrected prior to halftoning to compensate for this overlap. Alternatively, model-based error-diffusion can account for dot overlap in the halftoning process where a model of the printed dot is used to predict the gray-level of each halftone pixel after printing and then using this modeled gray-level in the calculation of the corresponding quantization error.
In model-based error-diffusion, the output pixel, y[n], is still determined as defined above, but in this case, the error terms, ye[n−i] for i=1, 2, . . . , M, are calculated at each iteration and typically are not stored in an error image buffer. That is, assuming an ideal printer, the quantization error, ye[n], can be diffused and stored in an error buffer, e[n], such that:
With reference to
where y′[n−i] is the modeled tone for output pixel y[n−i] assuming y[n+i] for i=1, 2, . . . are not printed as seen in the pixels 640, 641, and 642. From the above equations, model-based error-diffusion can be summarized as:
Various methods are known to predict and accurately model the resulting gray-levels that will be produced by a given printer for a given dot pattern. Such models can be specified by formulas, for example using the hard-circular dot model, or by table look-up where the table is generated by analysis of printed test patterns from the target device.
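As a rough sketch of a formula-based dot model, the following estimates the printed gray-level of a pixel under a hard circular-dot assumption (radius √2/2 pixel widths, the smallest circle that fully covers a unit pixel). The overlap is estimated by supersampling rather than by a closed-form area, and, for readability, 1 marks a printed dot in the input grid, the inverse of the y[n] convention above; all of these choices are illustrative rather than required by the description.

```python
# Hard circular-dot model sketch (assumed: dot radius sqrt(2)/2 pixel
# widths; coverage estimated by supersampling each pixel, purely for
# illustration). In `printed`, 1 marks a printed dot (note: inverse of
# the y[n] convention, where 0 means a dot is printed).
import math

R = math.sqrt(2) / 2  # dot radius, in pixel widths

def modeled_tone(printed, m, n, samples=32):
    """Estimate the printed gray-level (1 = white) of pixel (m, n)
    given the binary grid `printed` of dot placements."""
    h, w = len(printed), len(printed[0])
    covered = 0
    for i in range(samples):
        for j in range(samples):
            # sample point inside pixel (m, n), in page coordinates
            py = m + (i + 0.5) / samples
            px = n + (j + 0.5) / samples
            # covered if any neighboring printed dot's circle reaches it
            hit = any(
                printed[mm][nn]
                and (py - (mm + 0.5)) ** 2 + (px - (nn + 0.5)) ** 2 <= R * R
                for mm in range(max(0, m - 1), min(h, m + 2))
                for nn in range(max(0, n - 1), min(w, n + 2))
            )
            covered += hit
    return 1 - covered / (samples * samples)
```

Consistent with the observation above, an isolated dot completely covers its own pixel (modeled tone 0) while partially covering its neighbors, which is why uncorrected error diffusion prints darker than its dot ratio suggests.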
Although error-diffusion techniques are known to maximize a device's apparent resolution, the resulting halftone images are especially susceptible to distortion caused by print artifacts such as dot-gain and dot-loss due to their minimization of the perimeter-to-area ratio of printed dots. But more disconcerting for lenticular screening is the fact that as a neighborhood process, error-diffusion assumes correlation between neighboring pixels. In the case of a step-edge, error-diffusion will diffuse error across the edge to create a blurred edge in the halftone image. If the image is composed of multiple images spatially multiplexed together, with reference to
Alternatively and with reference to
In a further alternative, the quantization error xe[m, n] of the currently processed pixel is not diffused into neighboring pixels; instead, the quantization error xe[m−1, n−1] of the top-left corner pixel 1010 is diffused into neighboring pixels 1020, 1030, 1040, and 1050 of the same interlaced image as the error diffused pixel 1010. In this embodiment, the computer may delete all memory of past input pixels preceding x[m−1, n−1]. Furthermore, this particular approach can be applied to problems where the sampling grid of the printer is not the same as the sampling grid of the continuous-tone, lenticular image. This particular situation is depicted in
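A minimal sketch of same-channel error diffusion follows. It assumes P component images interlaced as repeating columns, so that pixels of one component image sit P columns apart; the two taps and their weights are hypothetical, chosen only so that the coefficients sum to 1.

```python
# Sketch of channel-aware error diffusion (assumed: P interlaced
# images arranged as repeating columns, so same-image pixels sit P
# columns apart; the taps below are illustrative, summing to 1).
def error_diffuse_interlaced(x, P):
    """Halftone x (gray levels in [0, 1]), diffusing each pixel's error
    only to later pixels of the same component image: the pixel P
    columns to the right, and the pixel directly below."""
    h, w = len(x), len(x[0])
    img = [row[:] for row in x]
    y = [[0] * w for _ in range(h)]
    taps = [(0, P, 0.5), (1, 0, 0.5)]   # same-channel neighbors only
    for m in range(h):
        for n in range(w):
            y[m][n] = 1 if img[m][n] >= 0.5 else 0
            ye = y[m][n] - img[m][n]
            for dm, dn, b in taps:
                if 0 <= m + dm < h and 0 <= n + dn < w:
                    img[m + dm][n + dn] -= b * ye
    return y
```

Because no error crosses from one column slice into a neighboring slice, a uniformly black component image remains entirely unprinted regardless of its neighbors' gray levels, avoiding the blurred step-edges described above.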
In certain embodiments, it is advantageous to perform the step of mapping the image such that an index corresponding to each pixel of a halftone processed version of a given interlaced image associates that pixel to other pixels of the halftone processed version of the given interlaced image. In such an embodiment, the index includes an indication of where each pixel of the halftone processed version of the given interlaced image is located relative to the other pixels of the halftone processed version of the given interlaced image. The pixels are typically indexed in memory such that the pixels are identifiable with their location indicator. Using such a mapping technique, minimum squared-error quantization error diffusion can be extended to arbitrarily arranged packed-lens arrays where error is diffused to the unprocessed pixels within a fixed sized neighborhood of the currently processed pixel having the same identification number/tag in the map image.
Alternatively, the mapping of the image can associate a depth indication to each pixel. The depth indication can include an indication of what portion of the image a given pixel occupies. For example, the depth indicator may indicate that a given pixel is located in the foreground portion of the image or the background portion of the image. Thus, an error filter may be applied according to the portion of the image from which the pixel originates to more efficiently distribute error to similar portions of the image.
For areas of the image where component images have similar gray-levels, the performance of model-based error-diffusion for lenticular images is consistent with traditional halftoning applications, but model-based error diffusion's performance is often poor in areas where the component images differ greatly in their gray-levels. For instance, the overlap of dots from dark-gray level slices into light gray-level slices limits the output gamut (in other words the range of color or brightness available) in the light-gray slices such that unregulated error builds up uncontrollably. The suppression of dots, caused by the unregulated build-up of error, is generally referred to as an instability, and it is known to clip the error build up across step-edges to reduce the amount of bleeding of the error across such discontinuities in gray-level.
More specifically, during halftoning in certain embodiments, the accumulated error is compared with the currently calculated pixel's brightness value, which after crossing a step-edge would jump in value. This jump in value would, likewise, create a jump in the amount of the accumulated error value, xe[n], relative to the input pixel brightness level, x[n], and trigger a clipping operation defined by the predetermined error threshold, T. Such clipping of the error beyond such an error threshold (T=1.0, for example) may reduce some of the suppression of black, but it typically also exacerbates the problem of ghosting where the dark regions of one component image show up in neighboring component images. Thus, in certain embodiments, the predetermined function clips excess error from local pixels at a predetermined error threshold. Next, error amounts beyond the predetermined error threshold are diffused into nearby pixels, typically the nearest pixels of the neighboring component images. By doing so, we increase the gray-levels of the otherwise dark regions responsible for the instability and, thereby, alleviate the instability.
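The clipping step can be sketched as a small helper; the symmetric ±T limits and the returned excess term are an assumed formulation of the thresholding described above.

```python
# Sketch of error clipping (assumed formulation): the accumulated
# error is limited to the band [-T, +T]; the excess is returned so the
# caller can diffuse it into the nearest pixels of the neighboring
# component images, brightening the dark regions that caused it.
def clip_error(xe, T=1.0):
    """Return (clipped_error, excess) for accumulated error xe."""
    if xe > T:
        return T, xe - T
    if xe < -T:
        return -T, xe + T
    return xe, 0.0
```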
A further alternative approach to lenticular halftoning includes halftone processing the image at least in part by adjusting a gray scale level of a given pixel according to a predetermined function depending at least in part on a gray scale level of nearby pixels and on probability values obtained from a lookup table (“LUT”). A typical embodiment involves applying traditional tone correction to the image of channel A and then adjusting the gray-levels of image B to account for dot-overlap from A. The gray-levels of channel C are then adjusted to account for overlap from both A and B, as are all remaining channels to account for dot-overlap from the already processed channels. Once all the channels have been modified, the image of channel A is then updated to account for dot-overlap from its neighboring channels, which were ignored during the first iteration. With the update to channel A, all the remaining channels are then reprocessed to account for any changes, and this continues until the gray-levels of every channel finally converge. In this process, the predetermined function may brighten the gray scale of a given pixel to account for the dot overlap from nearby pixels. Similarly, the predetermined function typically brightens the gray scale of the pixels nearby the given pixel to account for the dot overlap from those nearby pixels.
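The channel-by-channel iteration described above can be sketched structurally as follows. The per-channel adjustment is passed in as a hypothetical callable `adjust(g, left, right)`; the description derives the actual adjustment from look-up tables in the paragraphs that follow, and the cyclic neighbor arrangement and convergence tolerance here are assumptions for illustration.

```python
# Structural sketch of the iteration order described above. The
# per-channel adjustment itself is a hypothetical callable `adjust`;
# the cyclic neighbor arrangement and tolerance are assumed details.
def iterative_tone_correct(grays, adjust, max_iters=50, tol=1e-6):
    """grays: per-channel gray levels, e.g. [gA, gB, gC, gD, gE].
    Repeatedly re-adjusts each channel for dot overlap from its
    neighbors until the levels converge."""
    g = list(grays)
    P = len(g)
    for _ in range(max_iters):
        prev = list(g)
        for c in range(P):
            left, right = g[(c - 1) % P], g[(c + 1) % P]
            g[c] = adjust(g[c], left, right)
        if max(abs(a - b) for a, b in zip(g, prev)) < tol:
            break
    return g
```

With a toy smoothing adjustment the channels settle to a common fixed point, illustrating the convergence behavior the text describes; the real adjustment instead targets each channel's desired printed gray-level.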
In the case of the first channel, typically, a length-256 LUT is constructed where the ith index is determined by halftoning a constant gray-level image with intensity (i−1)/255 and then inserting two, unprinted columns between every column of the pattern. The resulting pattern is then printed on the target device with the resulting print scanned in to a computer. The LUT entry is then set to the average gray-level of the printed columns corresponding to the printed pattern prior to up-sampling. In so doing, a tone correction curve (“TCC”) is built that ignores the impact of left and right dot overlap. Thus, from the original, spliced, continuous-tone image, an intermediate image is constructed where pixels corresponding to channel A and with gray-level gA are set to the index value, gA(1), of the LUT with entry gA.
Next, to adjust the pixels of the uncorrelated image B according to the predetermined function, one skilled in the art will understand that the probability that a given pixel of a binary, error-diffused dither pattern representing gray-level g is white (1) is equal to g. Extending this relationship to the printed page implies that the probability that a given point, y, is not covered by ink is equal to g′ where g′ is the measured gray-level of the printed pattern representing intensity g. An example where g′ will be different than g is in the case of the round, circular-dot model where the printed dot overlaps neighboring pixels.
For lenticular printing, the probability that a given point is covered by dots from its own channel and the probability that it is covered by dots of its neighboring channel are both of interest. The LUT used for processing image A is used for dots of the same channel; this first LUT will be labeled LUTc, where c stands for center. For dots of the neighboring channel, the likelihood of dot overlap is measured by building a second table labeled LUTr, where r stands for right and the table entry is the average gray-level of the pixel columns directly to the right of those used previously for LUTc.
Given the two LUTs, the predetermined function can determine that the probability that a given point, y, corresponding to a pixel of image B, is not covered by ink is equal to the probability that y is not covered by a printed dot from image B and is also not covered by one from image A. Because these two events are uncorrelated, the probability that y is not covered by a dot from either channel is:
gB = LUTr(gA(1)) × LUTc(gB(1)),
where gB is the desired gray-level of printed pixels in image B while gA(1) and gB(1) are the gray levels of the tone corrected images prior to halftoning. Because gA(1) was determined previously when we processed image A, LUTc is searched for the index with entry value gB(1), setting the pixels of the intermediate image accordingly. For modifying image C, we repeat the process used for image B where gB(1) replaces gA(1) and gC(1) replaces gB(1). This process repeats until the last image (in our case, E) is to be processed, which is neighbored on both sides by already modified pixels.
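A sketch of this channel-B step follows. It assumes LUTc and LUTr are plain lists mapping a tone-corrected index to a measured printed gray-level (the tables used in testing are synthetic, linear placeholders, since real entries come from scanned prints); because the two channels' dot patterns are uncorrelated, their not-covered probabilities multiply, so the routine searches LUTc for the index whose entry best satisfies the relationship above.

```python
# Sketch (assumed table representation: LUTc and LUTr are lists of
# measured printed gray-levels). Search LUTc for the index whose
# entry, multiplied by the neighbor-overlap term LUTr[gA1_index],
# comes closest to the desired printed gray-level gB.
def adjust_channel_b(gB, gA1_index, LUTc, LUTr):
    overlap = LUTr[gA1_index]  # P(point not covered by channel-A ink)
    target = gB / overlap if overlap > 0 else 1.0
    return min(range(len(LUTc)), key=lambda i: abs(LUTc[i] - target))
```

With a synthetic linear table, requesting a darker printed level next to a heavily inked neighbor yields a brighter (higher) tone-corrected index, which is the compensation the text describes.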
For the last channel, it is assumed that printed dots are sufficiently small where dots of the left neighbor do not overlap dots of the right. Thus, in a favored embodiment, the predetermined function will divide the pixels of image E into two halves such that the above equation can be used to define the printed gray-level of each half of each pixel relative to its corresponding neighbor. The printed gray-level of the whole pixel is then defined according to:
where LUTl is the table built by measuring the average gray-level for columns directly to the left of the error-diffused columns of our printed test patterns. Assuming symmetric dots, LUTr can be used in place of LUTl. Using this new equation, we can now update images A through E numerous times such that:
gA = ½LUTc(gA(i)) × [LUTr(gE(i−1)) + LUTr(gB(i−1))],
where gA(i) represents the new gray-level for pixels of image A after the ith iteration.
For programming purposes, the predetermined function can be simplified by setting gA(1) through gE(1) equal to their original gray-levels gA through gE and then initiating the process at gA(2), thereby ignoring the first iteration. Although this process is described herein according to processing individual pixels on a pixel by pixel basis along a traditional left-to-right and then top-to-bottom raster scan, the algorithm should not be limited to processing all the pixels of one channel prior to addressing pixels of another. Furthermore, the algorithm allows channels to be processed in any order, not specifically with the left-most channel first. In other words, the pixels of the input image can be processed in any arbitrary order and not necessarily all at once or only once during a particular iteration.
One skilled in the art will understand that, although it is assumed that printed dots are sufficiently small such that dots from the left neighboring column do not overlap printed dots from the right neighboring column, when this is not the case the above described technique can be extended by increasing the number of LUTs needed to derive the expected gray-level of a given column. For instance, the given pixel typically is divided into three regions: (1) left region only overlapped by printed dots from the left neighbor, (2) center region overlapped by printed dots from both sides, and (3) right region only overlapped by printed dots from the right neighbor. As such, three LUTs are required with one LUT for each region of the pixel where the middle LUT is a two dimensional LUT indexed by the left and right side gray-levels.
One skilled in the art will also understand that the above discussion assumes that the formation of dots on the page follows an overlap model where a particular dot's size and shape is independent of the printed condition of its neighbors. For certain printers, such as an ink-jet printer, this assumption fails because tight clustering of printed dots increases the corresponding drying time. The increased drying time allows printed ink droplets to diffuse into and across the paper for a longer period of time, resulting in greater dot gain. For such embodiments, the above iterative tone correction process is modified to rely directly on table look-up to determine the best tone-corrected gray-level gA(i) based upon the neighboring levels gE(i−1) and gB(i−1).
More specifically, individual columns of binary pixels are selected from three separate printed patterns representing various gray-levels that are then spliced together as left, center, and right neighbors. The resulting three-column image is printed on the target device, scanned, and analyzed to measure the resulting average gray-level of the center column. This particular value along with those from all possible combinations of pixels will be used to build a three dimensional table. This table is indexed in the first dimension by the gray-level of the left-side neighbor gA(i) and, in the second dimension, by the gray-level of the right-side neighbor gC(i−1). From that point, identifying the optimal gB(i) amounts to searching through the LUT looking for the output tone nearest the target gray-level gB.
To reduce the need for multiple iterations to reach convergence of the tone correction, the predetermined function alternatively may use the gray-levels of the neighboring pixels directly to the left and right but of the previously processed row instead of the neighboring pixels directly to the left and right of the subject pixel. This process assumes that, at least, the first row was iteratively processed as described above for those rows to be properly tone corrected. In another alternative, the function may assume that pixels above the first row are white and that the resulting error quickly diminishes with each newly processed row. As a single-pass tone correction procedure, another alternative function would build a new look-up table based upon the previous LUT that is indexed by gB instead of by gB(i) such that the table need not be searched.
Viewing certain error diffused images created using the above LUT methods through common lenticular lenses creates a visual effect equivalent to down-sampling along the axis perpendicular to the alignment of the lenticular lens—creating aliasing artifacts in the pattern that result in low-frequency artifacts. These artifacts are then the source of visually disturbing textures that degrade the visual fidelity of the printed image.
In response to the down-sampling illusion created by the lenticular lenses, in certain alternative embodiments, it is advantageous to limit the width of the image slices to be only a single pixel wide after up-sampling to the printer's native resolution. If there are an insufficient number of component images by which to span the width of the lenses, one may perform the step of inserting pixels prior to halftone processing the image, to allow an approximately equal number of pixels from each interlaced image to correspond to a given lens and to reduce artifacts created when the interlaced images do not correspond evenly with the lenses. One skilled in the art will recognize that this step may be performed on any lenticular image and that any number of pixels may be inserted to create additional columns or portions of images. New component images may be created by simply duplicating the existing images. Alternatively, it is possible to create these transition images by interpolating between existing, neighboring, component images. For example, gray levels for the inserted pixels may be derived from gray levels for pixels nearby the inserted pixels. Alternatively, the gray levels for the inserted pixels may be derived from the gray levels for pixels corresponding to an interlaced image corresponding to the inserted pixels. In addition, the inserted pixels may be removed after halftoning the image and prior to printing the image, thereby generating an appropriately sized, binary image.
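The pixel-insertion step can be sketched as a pair of helpers. It is assumed here that component images are single-pixel column slices and that the inserted columns are simple duplicates of existing ones; the position bookkeeping is an illustrative choice, not a required implementation.

```python
# Sketch of pixel insertion and removal (assumed: component images are
# column slices; inserted columns duplicate existing ones; `positions`
# lists ascending column indices of the ORIGINAL image to duplicate).
def insert_columns(image, positions):
    """Duplicate the column at each original index in `positions`."""
    out = [row[:] for row in image]
    for offset, p in enumerate(positions):
        for row in out:
            # earlier insertions shift later indices right by `offset`
            row.insert(p + offset, row[p + offset])
    return out

def remove_columns(image, positions):
    """Undo insert_columns: delete the duplicated columns."""
    out = []
    for row in image:
        r = row[:]
        # delete from right to left so indices stay valid
        for offset in range(len(positions) - 1, -1, -1):
            del r[positions[offset] + offset]
        out.append(r)
    return out
```

In use, columns would be inserted before halftoning so the interlaced slices align evenly with the lenses, then removed from the binary result prior to printing, as described above.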
Following the procedure of creating new component images, the assumption of statistically independent columns is accurate; however, the assumption fails when using a traditional splicing technique because a given pixel may be correlated to its neighbors. The multiplicative relationship of the above described equations as used in the predetermined function is then no longer valid. In this alternative, the predetermined function applies an additive relationship defined by:
gB = LUTr,a(gA(1)) + LUTc(gB(1)),
where LUTr,a is a new look-up table representing the amount of dot coverage falling into the right-side neighboring columns of a printed dither pattern without up-sampling. Assuming symmetry, the same look-up table is used by the predetermined function for calculating overlap from both the right and left sides. Although the subscript A was previously used to indicate that the gray-level was from the uncorrelated image A, here it signifies the gray-level of the left-side neighbor from the same image, just as the subscript B signifies the gray-level of the right-side neighbor, also from the same image. To extend the iterative tone correction to arbitrary lens arrays, the relationships of the above equations are used by the predetermined function for any and all pixels surrounding the currently processed pixel x[n] that may overlap x[n] if printed.
When the iterative tone correction technique is used as either a multi-pass or single-pass procedure, the worst case errors occur when light gray pixels are surrounded by dark pixels, a phenomenon sometimes referred to as ghosting. In another alternative predetermined function, such instabilities can be detected using the above equation, rewritten as:
where sufficiently small gE(i−1) and gB(i−1) with sufficiently large gA leads to a ratio larger than 1.0, which is outside the range of LUTc. As described above in connection with model-based error-diffusion, the excess intensity can be diffused into the neighboring, dark pixels to reduce the number of printed dots in these neighboring channels that overlap into the current channel.
More specifically, here error is defined as the excess intensity above a user-defined error threshold T where T is greater than 1. By using a separate threshold as opposed to the maximum reproducible intensity of 1, the drastic step of dampening neighboring gray-levels is limited to only particularly bad instances of instabilities. Assuming the threshold is crossed, typically, the error is distributed proportionally between the two sides of the subject column, especially in cases where the subject column is overlapped from only one side. To do so, the two halves of the subject pixel are analyzed separately, defining right er and left el error terms as:
Each half is then retested such that if el > T, then the gray-level corresponding to the left-side neighbor is modified as:
g′E = gE + αel,
or if er>T, then the gray-level corresponding to the right-side neighbor is modified as:
g′B = gB + αer,
where α is a second user-defined parameter that controls the rate at which light gray-levels are “burned” into neighboring dark levels. Given the iterative nature of the algorithm, even small values of α can have a significant effect, good and bad, depending on the number of iterations.
To simplify the conversion of the original, continuous-tone, lenticular image into a halftone image for printing, the predetermined function can incorporate the single-pass iterative tone correction procedure directly into error-diffusion. In particular, the quantization can be implemented as above except where xe[n] is the diffused quantization error accumulated during previous iterations as:
where ITC(x[n−i−1], x[n−i], x[n−i+1]) is the iterative tone correction procedure that outputs the gray-level gA as the target gray-level for the pixel x[n−i] such that, after printing, the corresponding printed pixel, y′[n−i], has the desired amount of ink coverage specified by x[n−i]. The terms x[n−i−1] and x[n−i+1] are not the pixels directly to the left and right of x[n] but the pixels directly to the left and right of x[n−i] in the previously processed row, as specified for single-pass iterative tone correction.
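A one-dimensional sketch of folding the iterative tone correction into error diffusion follows; the itc callable is a stand-in for the correction procedure, and taking the left/right context from the current row is a simplification (the text takes it from the previously processed row):

```python
def itc_error_diffusion(x, itc):
    """1-D error diffusion with an ITC step supplying the
    overlap-corrected target gray-level for each pixel.

    x   : list of continuous-tone gray-levels in [0, 1].
    itc : callable(left, center, right) -> corrected target g_A;
          a hypothetical stand-in for ITC(x[n-i-1], x[n-i], x[n-i+1]).
    """
    y, err = [], 0.0                     # err plays the role of x_e[n]
    for i, g in enumerate(x):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i + 1 < len(x) else 0.0
        g_a = itc(left, g, right)        # overlap-corrected target
        v = g_a + err                    # add accumulated diffused error
        out = 1.0 if v >= 0.5 else 0.0   # binary quantizer
        err = v - out                    # diffuse quantization error forward
        y.append(out)
    return y
```

With an identity correction (itc returning its center argument) this reduces to ordinary 1-D error diffusion, which makes the role of the ITC hook easy to see.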
Other alternative embodiments address the down-sampling visual artifacts discussed above that occur in certain lenticular images. For example and with reference to
For frequency modulated (“FM”) halftoning by means of error-diffusion, there are several means by which increased horizontal correlation between printed dots can be achieved. One such embodiment includes halftone processing the image at least in part according to a predetermined function depending at least in part on a threshold variable for a given pixel that is responsive at least in part to a printed status of nearby pixels of the image. Such threshold modulation typically includes modulating the quantization threshold to increase or decrease the likelihood of printing a dot. For example, an intermediate halftone image, hT[n], is derived from x[n] using a traditional halftoning technique such as an AM line screen. From hT[n], we then modulate the threshold of our iterative tone correction error-diffusion technique such that:
where α is a tuning parameter controlling how much influence we want hT[n] to have on the placement of dots in y[n]. An alternative means by which added directional correlation can be achieved is through error modulation whereby the output pixel is derived according to:
where it is now the error term xe[n] that is being manipulated by the intermediate halftone hT[n].
Another alternative includes adding directional correlation through threshold modulation whereby the output pixel is derived according to:
where x[n±i] represents the intensity value of a pixel within a local neighborhood of x[n], specifically from the neighboring component image. For the above threshold modulation methods, one will recognize that the printed status of the nearby pixels may be determined at least in part on a lookup table of printed status probabilities or on a predetermined non-linear function of the gray scale levels of nearby pixels.
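The threshold-modulation idea can be sketched as follows; the exact modulation formula is given by the equations above, so the linear threshold shift used here is only an assumed form, with h_t supplying the intermediate AM-screen halftone hT[n]:

```python
def threshold_modulated_ed(x, h_t, alpha=0.2):
    """1-D error diffusion whose quantization threshold is shifted by an
    intermediate halftone h_t (e.g. from an AM line screen).

    alpha : tuning parameter controlling how much influence h_t has
            on the placement of dots in the output.
    """
    y, err = [], 0.0
    for i, g in enumerate(x):
        # Lower the threshold where the AM screen printed a dot and
        # raise it where it did not (assumed linear form).
        t = 0.5 - alpha * (h_t[i] - 0.5)
        v = g + err
        out = 1.0 if v >= t else 0.0
        err = v - out                    # standard error diffusion step
        y.append(out)
    return y
```

With alpha set to zero the intermediate halftone has no influence and plain error diffusion results; increasing alpha pulls the dot placement of the output toward the periodic structure of the AM screen.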
One skilled in the art will recognize that the use of horizontal correlation in a given human visual system model assumes a lenticular lens array where the lens axes are arranged vertically. An important feature is that the distribution of printed dots attempts to counter the effects of the apparent, asymmetric sampling grid of the printer as seen through the lens array. Thus, in the case where the lens axes are arranged horizontally, the human visual system model would include a vertical correlation between printed dots.
For an arbitrary lens array, another alternative for the predetermined function depending on a human visual system model includes an error diffusion process dependent at least in part on the human visual system model. Such an error diffusion process may modify the error filter weights, signified by bi, such that the resulting distribution of pixels counters the asymmetric sampling grid to create a visually pleasing halftone when seen through the lens array. Typically, the resulting error filter will only minimize low-frequency graininess across a small range of gray-levels. It is known that certain error filters will perform better for a given graininess or frequency within an image. Therefore, the image may be halftone processed according to the frequency content in the image. One such embodiment may include multiple error filters for all ranges of gray-level such that the specific error filter used to distribute the error for a pixel is determined according to the continuous-tone gray-level of the pixel. For example, one may use stochastic halftoning for areas of the image with a high frequency content and use periodic halftoning for areas of the image with a low frequency content. Typically, stochastic halftoning is used for gray levels of 0% to about 29% and about 71% to 100%, whereas periodic halftoning is used for mid-gray levels of about 30% to about 70%. Another embodiment may include assigning for each pixel a value corresponding to a variation in gray scale among nearby pixels.
In such an embodiment, a library of error filters is stored in memory. Such an approach typically also requires optimizing the error filter weights at each gray-level by generating a spatial or spectral cost function assessed on the resulting dither pattern created by a particular error filter. This error filter is then modified in some manner such that the next iteration of error filter has a lower cost function than the previous. Repeating this process for many iterations typically would then converge on a final error filter stored in memory and used when halftoning the corresponding gray-level.
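A sketch of selecting error-filter weights from such a stored library by continuous-tone gray-level follows; the range boundaries mirror the percentages in the text, but the library layout and the weight values themselves are illustrative only:

```python
def pick_filter(g, library):
    """Select an entry from a per-gray-level error-filter library.

    g       : continuous-tone gray-level in [0, 1].
    library : dict mapping (lo, hi) gray ranges to (kind, weights)
              entries -- a hypothetical layout.
    """
    for (lo, hi), entry in library.items():
        if lo <= g <= hi:
            return entry
    raise ValueError("no filter covers gray-level %r" % g)

# Following the text's split: stochastic filters for the tonal extremes,
# periodic for mid-grays.  Weights shown are placeholders, not optimized.
LIBRARY = {
    (0.00, 0.29): ("stochastic", (0.4375, 0.0625, 0.3125, 0.1875)),
    (0.30, 0.70): ("periodic",   (0.5, 0.0, 0.5, 0.0)),
    (0.71, 1.00): ("stochastic", (0.4375, 0.0625, 0.3125, 0.1875)),
}
```

In the embodiment described above, each stored weight set would have been produced offline by the iterative cost-function optimization, with this lookup performed per pixel during halftoning.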
An alternative technique for measuring the visual cost of a particular dither pattern is to generate a human visual model that models the visibility of a given pattern as a radially symmetric low-pass filter. While diagonal correlation has been used to modulate such a human visual system model along the diagonals of the power spectrum, the total amount of modulation was small. For arbitrary lens arrays, the human visual system model addresses the asymmetry in the apparent sampling grid, placing printed dots closer along some axes and farther apart along others. The specific shape of the filter will depend on the distribution of lenses.
A further alternative embodiment for halftone processing the image includes the step of post-processing the image by changing a printed status of at least one pixel to increase the likelihood of printing on adjacent pixels. This process is also called a direct binary search wherein the halftoned image is reviewed and the printed status of a given pixel is changed based upon the printed status of nearby pixels. For example, having a human visual system model for asymmetric grids, the pixels of the halftone image are processed iteratively where, during a particular iteration, a printed dot is either swapped with a neighbor, toggled from on to off or off to on, or left unchanged depending upon which transformation leads to a lower visual cost or reduced artifacts between the current halftone image and the original, continuous-tone image.
This process is illustrated in
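The swap/toggle search can be sketched in miniature as follows, with toggling only (swaps omitted for brevity) and an arbitrary caller-supplied cost function standing in for the asymmetric-grid human visual system model:

```python
def dbs_pass(halftone, cost):
    """One direct-binary-search pass over a 1-D halftone.

    halftone : list of 0/1 printed statuses, modified in place.
    cost     : callable scoring a candidate halftone against the
               continuous-tone original (lower is visually better);
               here any cost function can stand in for the HVS model.
    """
    best = cost(halftone)
    for i in range(len(halftone)):
        halftone[i] = 1 - halftone[i]        # trial toggle
        c = cost(halftone)
        if c < best:
            best = c                         # accept: lower visual cost
        else:
            halftone[i] = 1 - halftone[i]    # revert: no improvement
    return halftone
```

In practice the pass is repeated until no toggle or swap improves the cost, at which point the search has converged.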
The problem for direct binary or pixel search for lenticular/lens array images is that the associated human visual system model assumes a monocular view of the halftone. Thus, a better pixel search embodiment includes a vision model that processes printed dots according to a cost function for both a monocular as well as a binocular component such that printed dots from the left eye image are matched with printed dots from the right eye image. More specifically, printed dots from two component images, which are closely spaced when viewed through the lens array, may appear to be floating in space due to the stereoscopic effect of lens array images. As such, the depth at which the points appear to be positioned may not be consistent with the depth plane of the image content, and this inconsistency becomes more pronounced as the points get farther and farther away from the depth plane of the intended image content. The pixel post-processing step can address this effect by taking into account the shape of the lens.
A further embodiment of the process using the human visual model may include an output dependent feedback mechanism. In such an embodiment, wherein the error filter using the human visual system model measures error in terms of both the monocular halftone texture and the binocular, stereoscopic texture, the halftone can be optimized by modifying the weights of error diffusion with output dependent feedback, where the output pixel y[n] is defined as:
where the feedback term is the weighted sum of previous output pixels. Setting the tuning parameter h at a high level has the effect of increasing the likelihood that y[n] results in a printed dot, while setting h at a low level has the effect of decreasing that likelihood. Properly tuned, the above process can produce visually pleasing lens array halftones that minimize low-frequency graininess in the component images, both when viewed independently and when two component images are viewed stereoscopically.
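Output-dependent feedback can be sketched in a 1-D error-diffusion loop as follows; the uniform short-window weighting of previous outputs is an assumption, since the text leaves the feedback weights unspecified:

```python
def odf_error_diffusion(x, h=0.5, memory=3):
    """1-D error diffusion with output-dependent feedback.

    x      : list of continuous-tone gray-levels in [0, 1].
    h      : tuning parameter; high h biases toward printing a dot
             near previously printed dots, low h biases against it.
    memory : length of the feedback window (an assumption).
    """
    y, err = [], 0.0
    for g in x:
        # Feedback: uniformly weighted sum of recent output pixels.
        fb = sum(y[-memory:]) / memory if y else 0.0
        v = g + err + h * fb            # bias the quantizer input
        out = 1.0 if v >= 0.5 else 0.0
        err = (g + err) - out           # diffuse error w.r.t. the true
        y.append(out)                   # signal so overall tone is kept
    return y
```

Note that the feedback biases only the quantizer decision while the diffused error is computed against the unbiased signal, so the mean tone is preserved even as dot clustering changes.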
Dither array halftoning, as known in the art and with reference to
With regard to lenticular halftoning, proper halftoning of the spatially multiplexed, continuous tone image should be performed such that the intersection of the dither array with the pixels of a particular component image leads to a uniform distribution of all possible threshold levels from the minimum to the maximum intensity level. This uniform distribution should also be achieved in as small a local neighborhood as possible. Specifically, in traditional halftoning, a 16×16 dither array should have, for an 8-bit per pixel grayscale image, 256 unique thresholds ranging from 0 to 255. As such, an image of constant gray-level g should be printed such that, for every 16×16 window, the ratio of the number of printed pixels to the total number of pixels inside the window is equal to g. Those skilled in the art will recognize the needed modifications to account for printer variability.
Thus, if that same 16×16 dither array were applied to a lenticular image where each slice of each component image was composed of two consecutive image columns, then only those thresholds within two consecutive columns would be applied to a particular channel. Assuming that the thresholds within the 16×16 array were labeled Ti for i=1, 2, . . . , 256, threshold T1 would be applied to one channel but not to either the left- or right-side neighboring channel. This process is demonstrated in
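The 16×16 window property described above can be checked in a few lines; thresholds 0..255 and the print rule "gray exceeds threshold" are the conventional assumptions:

```python
def printed_fraction(thresholds, g):
    """Fraction of pixels printed when an image of constant gray g
    (0..255) is screened by the given list of thresholds: a pixel
    prints when g exceeds its threshold."""
    printed = sum(1 for t in thresholds if g > t)
    return printed / len(thresholds)

# A 16x16 array containing each threshold 0..255 exactly once prints
# exactly g out of 256 pixels for constant gray-level g, as required.
full_array = list(range(256))
```

The lenticular problem is then evident: restricting the same check to the thresholds that fall within one channel's two columns no longer guarantees this proportionality, because each channel sees only a subset of the 256 thresholds.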
Thus, another alternative for halftoning the image includes halftone processing the image according to a plurality of interlaced dither arrays. For example, assuming that N interlaced images comprise the single, continuous-tone, arbitrary lens array image, a series of techniques are available by which N separate dither arrays are interlaced or spatially multiplexed in a fashion corresponding to the interlacing or multiplexing used to create the lens array image to be halftoned. In such an embodiment, the resulting dither array can be tiled end-to-end where thresholds of the nth dither array are in alignment with the pixels of the nth component image. Alternatively, each of the N dither arrays may be of a different size, in which case each dither array may be tiled end-to-end to create N separate images, each the same size as its corresponding component image prior to spatial multiplexing, and these super dither arrays then spatially multiplexed into a single dither array the same size as the original, continuous-tone, arbitrary lens array image.
As one approach to this lens array halftoning technique, N component dither arrays are generated from a single dither array by simple replication. Alternatively, one can transform one of the component dither arrays after replication by means of a rotation, flip, inversion, circular shift, or any combination thereof. This is an especially advantageous approach when using a pseudo-random dither array, as one of these transformations will lead to an uncorrelated appearance between component images while requiring only the original dither array to be stored in memory. A specific example of this latter approach is to use a traditional AM line screen for the odd numbered channels and the same dither array after a horizontal flip for the even numbered channels. As another approach to dither array halftoning, the N component dither arrays can be derived as N independently generated dither arrays. Finally, some combination of independently generated and replicated dither arrays is also possible.
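The replication-plus-flip example can be sketched as follows, assuming channel slices of fixed width and dither arrays stored as lists of rows:

```python
def interlace_dither(base, n_channels, slice_width=2):
    """Build a multiplexed dither array from one base array: even
    channels use the base thresholds, odd channels a horizontally
    flipped copy (the AM-screen-plus-flip example from the text).

    base        : dither array as a list of rows of thresholds.
    n_channels  : number of interlaced component images N.
    slice_width : columns per channel slice (an assumption).
    """
    h, w = len(base), len(base[0])
    flipped = [row[::-1] for row in base]      # horizontal flip
    out = [[0] * w for _ in range(h)]
    for col in range(w):
        channel = (col // slice_width) % n_channels
        src = base if channel % 2 == 0 else flipped
        for row in range(h):
            out[row][col] = src[row][col]
    return out
```

Only the base array needs to be stored; the flipped copy can be produced on the fly, which is the memory advantage the text describes.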
With reference to
The best case covering efficiency for rectangular grids 1730 occurs for square grids, whereas hexagonal grids 1710 outperform rectangular grids 1730 over a wide range of aspect ratios, in some cases by over an order of magnitude. This allows resolution to be increased asymmetrically while still enjoying the superior radial symmetry of pixel coverage. It is very often easier to increase resolution in only one dimension, and using hexagonal grids 1710 provides such an opportunity. In the case of model-based error-diffusion, hexagonal grids 1710 have an advantage in that a given pixel has only six directly neighboring pixels instead of eight; furthermore, the model of overlap for a hard-circular dot is symmetric for all neighboring pixels. As such, non-traditional sampling grids such as the hexagonal sampling grid 1710 may provide significant advantages over rectangular grids 1730 for lenticular printing. Hexagonal grids 1710 are also the preferred sampling technique for non-lenticular lens arrays such as, in particular, hexagonal lens arrays.
Given the super-high dot addressability of modern digital printers and the general advantages of hexagonal sampling grids, the implementation of hexagonal grid halftoning is reasonable for at least a handful of devices (electrophotographic printers, in particular). By studying the application of blue- and green-noise to images sampled along hexagonal grids, it has been determined that, at a specific degree of coarseness, hexagonal sampling grids are the preferred sampling technique for stochastic dithering.
In describing the various approaches to halftoning, a third class of techniques is referred to as AM-FM hybrids because they combine the concepts of varying dot size and dot frequency with variations in gray-level. Extending these concepts to lenticular printing, the image may be halftone processed according to a frequency content in the image. For example, variations in the manner in which print dots are distributed based upon the change in image content between component images in the continuous-tone, lenticular image may be considered a shift in the frequency of dot placement within the image. More specifically, the statistical independence between printed dots from neighboring component images is maintained when the variation in tone, in other words the frequency content, between those component images is high while strongly correlating dot placement in regions where tonal variations between component images are small. Such an embodiment may be considered an alternative to the library of error filters dependent on the graininess of the image as discussed above.
In one alternative approach, the N component images are assumed to be separate views of a three dimensional scene such that the statistical correlation in color between neighboring pixels of the interlaced image varies according to the disparity in depth between the objects within the fields of view of the two pixels. As such, a depth image can be maintained such that a pixel of this image, d[n], stores the apparent depth coordinate of the image content stored in pixel x[n] of the continuous tone, lenticular image as discussed above. Then the depth value between two neighboring pixels may be used as a means for manipulating the correlation between the printed dot status of the two corresponding pixels of the lens array halftone. As an example, the predetermined function may use the inverted, normalized difference in depth, 1−|d[n]−d[n−1]|/(dmax−dmin), as the value of α in the above equation regarding threshold modulation. Alternatively, a difference image may be generated from the depth image and run through a low-pass filter, using the filtered output in place of |d[n]−d[n−1]| for defining α.
Instead of using depth as a measure of statistical dependence between neighboring pixels of the continuous-tone, lens array image, the gray-levels of the pixels may alternatively be used by means of a difference image whose pixels are equal to |x[n]−x[n−1]|. As such, the difference image may be run through a low-pass filter such that the energy from each pixel is spread into the local, surrounding neighborhood. The resulting low-pass filtered image is then utilized in the threshold modulation equations.
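The depth-based weighting from the preceding paragraphs can be sketched as follows, with the first pixel's weight fixed at 1.0 as an assumption (it has no left neighbor):

```python
def alpha_from_depth(d, d_min, d_max):
    """Per-pixel correlation weights from a 1-D depth image:
    alpha[n] = 1 - |d[n] - d[n-1]| / (d_max - d_min).

    A large depth disparity between neighboring channels yields a
    small alpha, keeping their printed dots statistically independent;
    a small disparity yields alpha near 1, strongly correlating them.
    """
    span = d_max - d_min
    alphas = [1.0]  # no left neighbor for the first pixel (assumption)
    for i in range(1, len(d)):
        alphas.append(1.0 - abs(d[i] - d[i - 1]) / span)
    return alphas
```

The gray-level variant substitutes |x[n]−x[n−1]| for the depth difference, optionally low-pass filtering the difference image first, exactly as described above.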
Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the spirit and scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept.
This application claims the benefit of the filing date of U.S. Provisional Application No. 60/616001, filed Oct. 5, 2004, which is hereby incorporated in its entirety herein.
Filing Document: PCT/US2005/035603; Filing Date: 10/5/2005; Country: WO; Kind: 00; 371(c) Date: 8/26/2009
Provisional Application: 60/616001; Date: Oct. 2004; Country: US