Method and apparatus for rendering color images

Information

  • Patent Grant
  • Patent Number
    11,721,296
  • Date Filed
    Tuesday, November 2, 2021
  • Date Issued
    Tuesday, August 8, 2023
Abstract
There are provided methods for driving an electro-optic display. A method for driving an electro-optic display having a plurality of display pixels comprises receiving an input image, processing the input image to create a color separation cumulate, and using a threshold array to process the color separation cumulate to generate colors for the electro-optic display.
Description
SUBJECT OF THE INVENTION

This invention relates to methods for driving electro-optic displays. More specifically, this invention relates to driving methods for dithering and rendering images on electrophoretic displays.


BACKGROUND

This invention relates to a method and apparatus for rendering color images. More specifically, this invention relates to a method for multi-color dithering, where a combination of color intensities are converted into a multi-color surface coverage.


The term “pixel” is used herein in its conventional meaning in the display art to mean the smallest unit of a display capable of generating all the colors which the display itself can show.


Half-toning has been used for many decades in the printing industry to represent gray tones by covering a varying proportion of each pixel of white paper with black ink. Similar half-toning schemes can be used with CMY or CMYK color printing systems, with the color channels being varied independently of each other.


However, there are many color systems in which the color channels cannot be varied independently of one another, in as much as each pixel can display any one of a limited set of primary colors (such systems may hereinafter be referred to as “limited palette displays” or “LPD's”); the ECD patent color displays are of this type. To create other colors, the primaries must be spatially dithered to produce the correct color sensation.


Electronic displays typically include an active matrix backplane, a master controller, local memory, and a set of communication and interface ports. The master controller receives data via the communication/interface ports or retrieves it from the device memory. Once the data is in the master controller, it is translated into a set of instructions for the active matrix backplane. The active matrix backplane receives these instructions from the master controller and produces the image. In the case of a color device, on-device gamut computations may require a master controller with increased computational power. As indicated above, rendering methods for color electrophoretic displays are often computationally intense, and although, as discussed in detail below, the present invention itself provides methods for reducing the computational load imposed by rendering, both the rendering (dithering) step and other steps of the overall rendering process may still impose major loads on device computational processing systems.


The increased computational power required for image rendering diminishes the advantages of electrophoretic displays in some applications. In particular, the cost of manufacturing the device increases, as does the device power consumption, when the master controller is configured to perform complicated rendering algorithms. Furthermore, the extra heat generated by the controller requires thermal management. Accordingly, at least in some cases, as for example when very high resolution images, or a large number of images need to be rendered in a short time, it may be desirable to have an efficient method for dithering multi-colored images.


SUMMARY OF INVENTION

Accordingly, in one aspect, the subject matter presented herein provides a method for driving an electro-optic display; the method can include receiving an input image, processing the input image to create a color separation cumulate, and dithering the input image by intersecting the color separation cumulate with a dither function.


In some embodiments, the dither function is a threshold array.


In another embodiment, the threshold array is a Blue Noise Mask (BNM).


In yet another embodiment, the step of processing is implemented by a look up table.





BRIEF DESCRIPTION OF DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIG. 1 of the accompanying drawings is an image rendering model in accordance with the subject matter presented herein;



FIG. 2 is an exemplary black and white dithering method using masks in accordance with the subject matter presented herein;



FIG. 3 illustrates various mask designs in accordance with the subject matter presented herein;



FIG. 4 illustrates a gamut color mapping in accordance with the subject matter disclosed herein;



FIG. 5 illustrates a multi-color dithering method using masks in accordance with the subject matter disclosed herein;



FIG. 6 illustrates a multi-color dithering algorithm using masks in accordance with the subject matter disclosed herein; and



FIGS. 7-10 are various mask designs for multi-color dithering in accordance with the subject matter presented herein.





DETAILED DESCRIPTION

Standard dithering algorithms such as error diffusion algorithms (in which the “error” introduced by printing one pixel in a particular color which differs from the color theoretically required at that pixel is distributed among neighboring pixels so that overall the correct color sensation is produced) can be employed with limited palette displays. There is an enormous literature on error diffusion; for a review see Pappas, Thrasyvoulos N. “Model-based halftoning of color images,” IEEE Transactions on Image Processing 6.7 (1997): 1014-1024.


This application is also related to U.S. Pat. Nos. 5,930,026; 6,445,489; 6,504,524; 6,512,354; 6,531,997; 6,753,999; 6,825,970; 6,900,851; 6,995,550; 7,012,600; 7,023,420; 7,034,783; 7,061,166; 7,061,662; 7,116,466; 7,119,772; 7,177,066; 7,193,625; 7,202,847; 7,242,514; 7,259,744; 7,304,787; 7,312,794; 7,327,511; 7,408,699; 7,453,445; 7,492,339; 7,528,822; 7,545,358; 7,583,251; 7,602,374; 7,612,760; 7,679,599; 7,679,813; 7,683,606; 7,688,297; 7,729,039; 7,733,311; 7,733,335; 7,787,169; 7,859,742; 7,952,557; 7,956,841; 7,982,479; 7,999,787; 8,077,141; 8,125,501; 8,139,050; 8,174,490; 8,243,013; 8,274,472; 8,289,250; 8,300,006; 8,305,341; 8,314,784; 8,373,649; 8,384,658; 8,456,414; 8,462,102; 8,514,168; 8,537,105; 8,558,783; 8,558,785; 8,558,786; 8,558,855; 8,576,164; 8,576,259; 8,593,396; 8,605,032; 8,643,595; 8,665,206; 8,681,191; 8,730,153; 8,810,525; 8,928,562; 8,928,641; 8,976,444; 9,013,394; 9,019,197; 9,019,198; 9,019,318; 9,082,352; 9,171,508; 9,218,773; 9,224,338; 9,224,342; 9,224,344; 9,230,492; 9,251,736; 9,262,973; 9,269,311; 9,299,294; 9,373,289; 9,390,066; 9,390,661; and 9,412,314; and U.S. Patent Applications Publication Nos. 2003/0102858; 2004/0246562; 2005/0253777; 2007/0091418; 2007/0103427; 2007/0176912; 2008/0024429; 2008/0024482; 2008/0136774; 2008/0291129; 2008/0303780; 2009/0174651; 2009/0195568; 2009/0322721; 2010/0194733; 2010/0194789; 2010/0220121; 2010/0265561; 2010/0283804; 2011/0063314; 2011/0175875; 2011/0193840; 2011/0193841; 2011/0199671; 2011/0221740; 2012/0001957; 2012/0098740; 2013/0063333; 2013/0194250; 2013/0249782; 2013/0321278; 2014/0009817; 2014/0085355; 2014/0204012; 2014/0218277; 2014/0240210; 2014/0240373; 2014/0253425; 2014/0292830; 2014/0293398; 2014/0333685; 2014/0340734; 2015/0070744; 2015/0097877; 2015/0109283; 2015/0213749; 2015/0213765; 2015/0221257; 2015/0262255; 2015/0262551; 2016/0071465; 2016/0078820; 2016/0093253; 2016/0140910; and 2016/0180777. These patents and applications may hereinafter for convenience collectively be referred to as the “MEDEOD” (MEthods for Driving Electro-Optic Displays) applications, and are incorporated herein in their entirety by reference.


ECD systems exhibit certain peculiarities that must be taken into account in designing dithering algorithms for use in such systems. Inter-pixel artifacts are a common feature in such systems. One type of artifact is caused by so-called "blooming"; in both monochrome and color systems, there is a tendency for the electric field generated by a pixel electrode to affect an area of the electro-optic medium wider than that of the pixel electrode itself so that, in effect, one pixel's optical state spreads out into parts of the areas of adjacent pixels. Another kind of crosstalk is experienced when driving adjacent pixels brings about a final optical state in the area between the pixels that differs from that reached by either of the pixels themselves, this final optical state being caused by the averaged electric field experienced in the inter-pixel region. Similar effects are experienced in monochrome systems, but since such systems are one-dimensional in color space, the inter-pixel region usually displays a gray state intermediate between the states of the two adjacent pixels, and such an intermediate gray state does not greatly affect the average reflectance of the region, or it can easily be modeled as an effective blooming. However, in a color display, the inter-pixel region can display colors not present in either adjacent pixel.


The aforementioned problems in color displays have serious consequences for the color gamut and the linearity of the color predicted by spatially dithering primaries. Consider using a spatially dithered pattern of saturated Red and Yellow from the primary palette of an ECD display to attempt to create a desired orange color. Without crosstalk, the combination required to create the orange color can be predicted perfectly in the far field by using linear additive color mixing laws. Since Red and Yellow are on the color gamut boundary, this predicted orange color should also be on the gamut boundary. However, if the aforementioned effects produce (say) a blueish band in the inter-pixel region between adjacent Red and Yellow pixels, the resulting color will be much more neutral than the predicted orange color. This results in a "dent" in the gamut boundary, or, to be more accurate since the boundary is actually three-dimensional, a scallop. Thus, not only does a naïve dithering approach fail to accurately predict the required dithering, but it may, as in this case, attempt to produce a color which is not available since it is outside the achievable color gamut.


It may be desirable for one to be able to predict the achievable gamut by extensive measurement of patterns or by advanced modeling. This may not be feasible if the number of device primaries is large, or if the crosstalk errors are large compared to the errors introduced by quantizing pixels to primary colors. The present invention provides a dithering method that incorporates a model of blooming/crosstalk errors such that the realized color on the display is closer to the predicted color. Furthermore, the method stabilizes the error diffusion in the case that the desired color falls outside the realizable gamut, since normally error diffusion will produce unbounded errors when dithering to colors outside the convex hull of the primaries.


In some embodiments the reproduction of images may be performed using an Error-Diffusion model illustrated in FIG. 1 of the accompanying drawings. The method illustrated in FIG. 1 begins at an input 102, where color values xi,j are fed to a processor 104, where they are added to the output of an error filter 106 to produce a modified input ui,j, which may hereinafter be referred to as “error-modified input colors” or “EMIC”. The modified inputs ui,j are fed to a Quantizer 108.
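By way of illustration only, the following Python sketch traces the FIG. 1 data flow: each input color xi,j is combined with previously diffused error to form the error-modified input ui,j, a primary yi,j is selected by a quantizer, and the residual error is distributed to pixels not yet visited. The function name error_diffuse, the quantize callback, and the error_filter_weights mapping are illustrative assumptions and are not taken from the patent.

```python
import numpy as np

def error_diffuse(image, quantize, error_filter_weights):
    """Sketch of the FIG. 1 data flow for an (H, W, channels) image.

    quantize(u) returns the primary chosen for the error-modified input u;
    error_filter_weights maps (row offset, column offset) -> weight."""
    h, w, _ = image.shape
    diffused = np.zeros_like(image, dtype=float)  # error arriving at each pixel
    output = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            u = image[i, j] + diffused[i, j]      # EMIC: u[i,j] = x[i,j] + filtered error
            y = quantize(u)                       # quantizer 108 selects a primary
            output[i, j] = y
            e = u - y                             # e[i,j] = u[i,j] - y[i,j]
            for (di, dj), wgt in error_filter_weights.items():
                ii, jj = i + di, j + dj
                if 0 <= ii < h and 0 <= jj < w:   # distribute error to unvisited neighbors
                    diffused[ii, jj] += wgt * e
    return output
```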


In some embodiments, processes utilizing model-based error diffusion can become unstable, because the input image is assumed to lie in the (theoretical) convex hull of the primaries (i.e. the color gamut), but the actual realizable gamut is likely smaller because of gamut loss due to dot overlap. Therefore, the error diffusion algorithm may be trying to achieve colors which cannot actually be achieved in practice, and the error continues to grow with each successive "correction". It has been suggested that this problem be contained by clipping or otherwise limiting the error, but this leads to other errors.


In practice, one solution would be to have a better, non-convex estimate of the achievable gamut when performing gamut mapping of the source image, so that the error diffusion algorithm can always achieve its target color. It may be possible to approximate this from the model itself, or determine it empirically. In some embodiments, the quantizer 108 examines the primaries for the effect that choosing each would have on the error, and the quantizer chooses the primary with the least (by some metric) error if chosen. However, the primaries fed to the quantizer 108 are not the natural primaries of the system, {Pk}, but are an adjusted set of primaries, {P˜k}, which allow for the colors of at least some neighboring pixels, and their effect on the pixel being quantized by virtue of blooming or other inter-pixel interactions.


One embodiment of the above method may use a standard Floyd-Steinberg error filter and process pixels in raster order. Assuming, as is conventional, that the display is treated top-to-bottom and left-to-right, it is logical to use the above and left cardinal neighbors of the pixel being considered to compute blooming or other inter-pixel effects, since these two neighboring pixels have already been determined. In this way, all modeled errors caused by adjacent pixels are accounted for, since the right and below neighbor crosstalk is accounted for when those neighbors are visited. If the model only considers the above and left neighbors, the adjusted set of primaries must be a function of the states of those neighbors and the primary under consideration. The simplest approach is to assume that the blooming model is additive, i.e. that the color shift due to the left neighbor and the color shift due to the above neighbor are independent and additive. In this case, there are only "N choose 2" (equal to N*(N−1)/2) model parameters (color shifts) that need to be determined. For N=64 or less, these can be estimated from colorimetric measurements of checkerboard patterns of all the possible primary pairs by subtracting the ideal mixing law value from the measurement.
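As a hedged illustration of this estimation step, the Python sketch below derives one color shift per unordered pair of primaries by subtracting the ideal linear mixing value from a measured checkerboard color. The measure_checkerboard callback is a hypothetical stand-in for an actual colorimetric measurement and is not described in the text.

```python
import numpy as np
from itertools import combinations

def estimate_color_shifts(primaries, measure_checkerboard):
    """Estimate the "N choose 2" pairwise color shifts dP(i, j).

    primaries: array-like of shape (N, 3) holding the ideal primary colors.
    measure_checkerboard(i, j): returns the measured far-field color of a
    50/50 checkerboard of primaries i and j (stand-in for a colorimeter)."""
    shifts = {}
    for i, j in combinations(range(len(primaries)), 2):
        ideal = 0.5 * (np.asarray(primaries[i]) + np.asarray(primaries[j]))  # linear mixing law
        shifts[(i, j)] = np.asarray(measure_checkerboard(i, j)) - ideal      # deviation from ideal
        shifts[(j, i)] = shifts[(i, j)]  # one parameter per unordered pair, as in the text
    return shifts
```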


To take a specific example, consider the case of a display having 32 primaries. If only the above and left neighbors are considered, for 32 primaries there are 496 possible adjacent sets of primaries for a given pixel. Since the model is linear, only these 496 color shifts need to be stored, because the additive effect of both neighbors can be produced during run time without much overhead. So, for example, if the unadjusted primary set comprises (P1 . . . P32) and the current above and left neighbors are P4 and P7, the modified primaries (P˜1 . . . P˜32) fed to the quantizer are given by:








P˜1 = P1 + dP(1,4) + dP(1,7);
. . .
P˜32 = P32 + dP(32,4) + dP(32,7),





where dP(i,j) are the empirically determined values in the color shift table.
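A minimal Python sketch of this adjustment, assuming the additive model above and a shifts table such as the one estimated earlier, is shown below; the function and parameter names are illustrative and not taken from the patent.

```python
def adjusted_primaries(primaries, shifts, up_idx, left_idx):
    """Shift every primary Pk by dP(k, up) + dP(k, left) to obtain {P~k}."""
    adjusted = []
    for k, Pk in enumerate(primaries):
        # Pairs with no stored shift (e.g. k equal to a neighbor index) contribute nothing.
        shift = shifts.get((k, up_idx), 0.0) + shifts.get((k, left_idx), 0.0)
        adjusted.append(Pk + shift)
    return adjusted
```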


More complicated inter-pixel interaction models are of course possible, for example nonlinear models, models taking account of corner (diagonal) neighbors, or models using a non-causal neighborhood for which the color shift at each pixel is updated as more of its neighbors are known.


The quantizer 108 compares the adjusted inputs u′i,j with the adjusted primaries {P˜k} and outputs the most appropriate primary yi,j to the output. Any appropriate method of selecting the appropriate primary may be used, for example a minimum Euclidean distance quantizer in a linear RGB space; this has the advantage of requiring less computing power than some alternative methods.
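For example, a minimum-Euclidean-distance quantizer in a linear RGB space might be sketched as follows; this is purely illustrative, and the quantizer used in a given embodiment may differ.

```python
import numpy as np

def quantize_to_nearest(u, adjusted_prims):
    """Return the index and color of the adjusted primary closest to u."""
    prims = np.asarray(adjusted_prims, dtype=float)
    d2 = np.sum((prims - np.asarray(u, dtype=float)) ** 2, axis=1)  # squared distances
    k = int(np.argmin(d2))
    return k, prims[k]
```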


The yi,j output values from the quantizer 108 may be fed not only to the output but also to a neighborhood buffer 110, where they are stored for use in generating adjusted primaries for later-processed pixels. The modified input ui,j values and the output yi,j values are both supplied to a processor 112, which calculates:

ei,j=ui,j−yi,j

and passes this error signal on to the error filter 106 in the same way as described above with reference to FIG. 1.
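The Floyd-Steinberg filter mentioned above distributes the error ei,j to the four not-yet-visited neighbors in raster order with the well-known weights 7/16, 3/16, 5/16 and 1/16. Combined with the earlier error_diffuse sketch, this might look like the following; the palette and variable names are hypothetical and serve only to show how the pieces fit together.

```python
import numpy as np

# Floyd-Steinberg weights: (row offset, column offset) -> fraction of e[i,j].
FLOYD_STEINBERG = {(0, 1): 7 / 16, (1, -1): 3 / 16, (1, 0): 5 / 16, (1, 1): 1 / 16}

# Example usage with a hypothetical black/gray/white palette:
palette = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]])
nearest = lambda u: palette[np.argmin(np.sum((palette - u) ** 2, axis=1))]
# dithered = error_diffuse(input_image, nearest, FLOYD_STEINBERG)
```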


However, in practice, error-diffusion-based methods may be slow for some applications because they are not easily parallelizable: the output for one pixel cannot be computed until the previous pixel's output becomes available. Alternatively, mask-based methods may be adopted because of their simplicity, since the output at each pixel depends only on that pixel's input and a value from a look-up table (LUT), meaning each output can be computed completely independently of the others.


Referring now to FIG. 2, an exemplary black-and-white dithering method is illustrated. As shown, an input grayscale image with normalized darkness values between 0 (white) and 1 (black) is dithered by comparing, at each output location, the corresponding input darkness and dither threshold values. For example, if the darkness u(x) of an input image is higher than the dither threshold value T(x), then the output location is marked as black (i.e., 1); otherwise it is marked as white (i.e., 0). FIG. 3 illustrates some mask designs in accordance with the subject matter disclosed herein.
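A compact Python sketch of this black-and-white mask dithering follows; it assumes the darkness values and the threshold array are normalized to [0, 1), and the function name is illustrative.

```python
import numpy as np

def mask_dither_bw(darkness, threshold_array):
    """Mark a pixel black (1) where u(x) > T(x), else white (0).

    The threshold array is tiled over the image, and every output pixel
    depends only on its own input and threshold, so this is fully parallel."""
    h, w = darkness.shape
    th, tw = threshold_array.shape
    T = np.tile(threshold_array, (h // th + 1, w // tw + 1))[:h, :w]
    return (darkness > T).astype(np.uint8)
```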


In practice, when performing multi-color dithering, it is assumed that the input colors to a dithering algorithm can be represented as a linear combination of multiple primaries. This may be achieved by dithering in the source space using gamut corners, or by gamut mapping the input to the device space color gamut. FIG. 4 illustrates one method of creating a color separation using a set of weights applied to the primaries Pi, where each color C is defined as









C = Σi=1, . . . ,N αi(C) Pi, where 0 ≤ αi ≤ 1 and Σi αi(C) = 1.








The partial sums of these weights are referred to as the separation cumulate Λk(C), where








Λk(C) = Σi=1, . . . ,k αi(C)







In practice, dithering to multiple colors consists in intersecting the relative cumulative amounts of colors with a dither function (e.g., threshold array T(x) 502 of FIG. 5). Referring now to FIG. 5, illustrated here as an example is a method of printing with four different color inks C1 512, C2 514, C3 516 and C4 518. At each pixel of the output pixmap, the color separation gives the relative percentages of each of the basic colors, for example d1 of color C1 512, d2 of color C2 514, d3 of color C3 516, and d4 of color C4 518, where one of the colors, for example C4 518, may be white.


Extending dithering to multiple colors consists in intersecting the relative cumulative amounts of colors Λ1(x) 504=d1, Λ2(x) 506=d1+d2, Λ3(x) 508=d1+d2+d3, and Λ4(x) 510=d1+d2+d3+d4 with a threshold array T(x), as illustrated in FIG. 5. Illustrated in FIG. 5 is a dithering example for the purpose of explaining the subject matter presented herein. In the interval where Λ1(x) 504>T(x) 502, the output location or pixel region will be printed with basic color C1 512 (e.g., black); in the interval where Λ2(x) 506>T(x) 502, the output location or pixel region will display color C2 514 (e.g., yellow); in the interval where Λ3(x) 508>T(x) 502, the output location or pixel region will display color C3 516 (e.g., red); and in the remaining interval where Λ4(x) 510>T(x) 502 and Λ3(x) 508≤T(x) 502, the output location or pixel region will display color C4 518 (e.g., white). As such, multi-color dithering as presented herein converts the relative amounts d1, d2, d3, d4 of colors C1 512, C2 514, C3 516 and C4 518 into relative coverage percentages and ensures by construction that the contributing colors are printed side by side.
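By way of a hedged example, the cumulate-versus-threshold comparison just described could be vectorized as in the Python sketch below. The separation array layout, the normalization of the threshold to [0, 1), and the return of a primary index per pixel are assumptions of this sketch rather than details of the patent.

```python
import numpy as np

def mask_dither_multicolor(separation, threshold_array):
    """separation: (H, W, N) relative amounts d1..dN per pixel, summing to 1.
    Returns, per pixel, the index k of the first cumulate with Lambda_k > T."""
    h, w, _ = separation.shape
    cumulate = np.cumsum(separation, axis=2)          # Lambda_1 .. Lambda_N per pixel
    th, tw = threshold_array.shape
    T = np.tile(threshold_array, (h // th + 1, w // tw + 1))[:h, :w]
    # argmax picks the first True; Lambda_N = 1 > T guarantees a valid index.
    return np.argmax(cumulate > T[..., None], axis=2)
```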


In some embodiments, a multi-color rendering algorithm as illustrated in FIG. 6 may be utilized in accordance with the subject matter disclosed herein. As shown, image data imi,j may first be fed through a sharpening filter 602, which may be optional in some embodiments. This sharpening filter 602 may be useful in cases where a threshold array T(x) or filter produces less sharp output than an error diffusion system. This sharpening filter 602 may be a simple finite impulse response (FIR) filter, for example 3×3, which may be easily computed. Subsequently, color data may be mapped in a color mapping step 604, and a color separation may be generated in a separation generation step 606 by methods commonly available in the art, such as the Barycentric coordinate method. This color data may be used to index a CSC_LUT look-up table, which can have N entries per index and gives the desired separation information in the form that is directly needed by the mask-based dithering step (e.g., step 612). In some embodiments, this CSC_LUT look-up table may be built by combining a desired color enhancement and/or gamut mapping with the chosen separation algorithm, and is configured to include a mapping between the input image's color values and the color separation cumulate. In this fashion, the look-up table (e.g., CSC_LUT) may be designed to provide the desired separation cumulate information quickly and in the form that is directly needed by the mask-based dithering step (e.g., step 612 with the quantizer). Finally, the separation cumulate data 608 is used with a threshold array 610 by a quantizer 612 to generate an output yi,j in multiple colors. In some embodiments, the color mapping 604, separation generation 606 and cumulate 608 steps may be implemented as a single interpolated CSC_LUT look-up table. In this configuration, the separation stage is not performed by finding Barycentric coordinates in a tetrahedralization of the multi-primaries, but may be implemented by a look-up table, which allows more flexibility. In addition, each output computed by the method illustrated herein is computed completely independently of the other outputs. Furthermore, the threshold array T(x) used herein may be a Blue Noise Mask (BNM); various BNM designs are presented in FIGS. 7-10.
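A minimal end-to-end sketch of the FIG. 6 pipeline is given below, under the assumption that the combined CSC_LUT is exposed as a per-color callable returning the separation cumulate. The sharpening kernel, function names and callable interface are illustrative choices for this sketch, not the patented implementation.

```python
import numpy as np

def render_image(im, csc_lut, threshold_array, sharpen=True):
    """Optional 3x3 FIR sharpening, per-pixel cumulate lookup, mask dithering.

    csc_lut(color) -> array of cumulates Lambda_1..Lambda_N for that color."""
    img = np.asarray(im, dtype=float)
    h, w = img.shape[:2]
    if sharpen:
        # Simple 3x3 sharpening kernel (unsharp-mask style); optional step 602.
        k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
        pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        img = sum(k[a, b] * pad[a:a + h, b:b + w]
                  for a in range(3) for b in range(3))
    cumulate = np.apply_along_axis(csc_lut, 2, img)    # (H, W, N) separation cumulates
    th, tw = threshold_array.shape
    T = np.tile(threshold_array, (h // th + 1, w // tw + 1))[:h, :w]
    return np.argmax(cumulate > T[..., None], axis=2)  # chosen primary index per pixel
```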


It will be apparent to those skilled in the art that numerous changes and modifications can be made in the specific embodiments of the invention described above without departing from the scope of the invention. Accordingly, the whole of the foregoing description is to be interpreted in an illustrative and not in a limitative sense.

Claims
  • 1. A method for driving an electro-optic display having a plurality of display pixels, the method comprising: receiving an input image; processing the input image to create color separation cumulate; and dithering the input image by intersecting the color separation cumulate with a dither function.
  • 2. The method of claim 1 wherein the dither function is a threshold array.
  • 3. The method of claim 2 wherein the threshold array is a Blue Noise Mask (BNM).
  • 4. The method of claim 3 wherein the look up table includes a mapping between the input image's color values and the color separation cumulate.
  • 5. The method of claim 4 wherein the sharpening filter is a finite impulse response (FIR) filter.
  • 6. The method of claim 1 wherein the processing the input image step is implemented by a look up table.
  • 7. The method of claim 1 further comprising putting the input image through a sharpening filter before processing the input image.
  • 8. The method of claim 1, wherein the step of processing the input image to create color separation cumulate includes using a Barycentric coordinate method.
  • 9. An electro-optic display configured to carry out the method of claim 1 includes an electrophoretic display.
  • 10. The display according to claim 9 comprising rotating bichromal member, electrochromic or electro-wetting material.
  • 11. The electro-optic display according to claim 9 comprising an electrophoretic material comprising a plurality of electrically charged particles disposed in a fluid and capable of moving through the fluid under the influence of an electric field.
  • 12. The electro-optic display according to claim 11 wherein the electrically charged particles and the fluid are confined within a plurality of capsules or microcells.
  • 13. The electro-optic display according to claim 11 wherein the electrically charged particles and the fluid are present as a plurality of discrete droplets surrounded by a continuous phase comprising a polymeric material.
REFERENCE TO RELATED APPLICATIONS

This application is related to and claims priority to U.S. Provisional Application 63/108,855 filed on Nov. 2, 2020. The entire disclosure of the aforementioned application is herein incorporated by reference.

US Referenced Citations (197)
Number Name Date Kind
5930026 Jacobson Jul 1999 A
6017584 Albert et al. Jan 2000 A
6445489 Jacobson et al. Sep 2002 B1
6504524 Gates et al. Jan 2003 B1
6512354 Jacobson et al. Jan 2003 B2
6531997 Gates et al. Mar 2003 B1
6545797 Chen et al. Apr 2003 B2
6664944 Albert et al. Dec 2003 B1
6753999 Zehner et al. Jun 2004 B2
6788452 Liang et al. Sep 2004 B2
6825970 Goenaga et al. Nov 2004 B2
6900851 Morrison et al. May 2005 B2
6995550 Jacobson et al. Feb 2006 B2
7012600 Zehner et al. Mar 2006 B2
7023420 Comiskey et al. Apr 2006 B2
7034783 Gates et al. Apr 2006 B2
7038656 Liang et al. May 2006 B2
7038670 Liang et al. May 2006 B2
7046228 Liang et al. May 2006 B2
7052571 Wang et al. May 2006 B2
7054038 Ostromoukhov et al. May 2006 B1
7061166 Kuniyasu Jun 2006 B2
7061662 Chung et al. Jun 2006 B2
7075502 Drzaic et al. Jul 2006 B1
7116466 Whitesides et al. Oct 2006 B2
7119772 Amundson et al. Oct 2006 B2
7167155 Albert et al. Jan 2007 B1
7177066 Chung et al. Feb 2007 B2
7193625 Danner et al. Mar 2007 B2
7202847 Gates Apr 2007 B2
7259744 Arango et al. Aug 2007 B2
7327511 Whitesides et al. Feb 2008 B2
7385751 Chen et al. Jun 2008 B2
7408699 Wang et al. Aug 2008 B2
7453445 Amundson Nov 2008 B2
7492339 Amundson Feb 2009 B2
7492505 Liang et al. Feb 2009 B2
7528822 Amundson et al. May 2009 B2
7583251 Arango et al. Sep 2009 B2
7602374 Zehner et al. Oct 2009 B2
7612760 Kawai Nov 2009 B2
7667684 Jacobson et al. Feb 2010 B2
7679599 Kawai Mar 2010 B2
7683606 Kang et al. Mar 2010 B2
7729039 LeCain et al. Jun 2010 B2
7787169 Abramson et al. Aug 2010 B2
7800813 Wu et al. Sep 2010 B2
7839564 Whitesides et al. Nov 2010 B2
7859742 Chiu et al. Dec 2010 B1
7910175 Webber Mar 2011 B2
7952557 Amundson May 2011 B2
7952790 Honeyman et al. May 2011 B2
7982479 Wang et al. Jul 2011 B2
7982941 Lin et al. Jul 2011 B2
7999787 Amundson et al. Aug 2011 B2
8040594 Paolini, Jr. et al. Oct 2011 B2
8054526 Bouchard Nov 2011 B2
8077141 Duthaler et al. Dec 2011 B2
8098418 Paolini, Jr. et al. Jan 2012 B2
8125501 Amundson et al. Feb 2012 B2
8139050 Jacobson et al. Mar 2012 B2
8159636 Sun et al. Apr 2012 B2
8174490 Whitesides et al. May 2012 B2
8243013 Sprague et al. Aug 2012 B1
8274472 Wang et al. Sep 2012 B1
8289250 Zehner et al. Oct 2012 B2
8300006 Zhou et al. Oct 2012 B2
8314784 Ohkami et al. Nov 2012 B2
8363299 Paolini, Jr. et al. Jan 2013 B2
8373649 Low et al. Feb 2013 B2
8384658 Albert et al. Feb 2013 B2
8422116 Sprague et al. Apr 2013 B2
8456414 Lin et al. Jun 2013 B2
8462102 Wong et al. Jun 2013 B2
8503063 Sprague Aug 2013 B2
8514168 Chung et al. Aug 2013 B2
8537105 Chiu et al. Sep 2013 B2
8558783 Wilcox et al. Oct 2013 B2
8558786 Lin Oct 2013 B2
8558855 Sprague et al. Oct 2013 B2
8576164 Sprague et al. Nov 2013 B2
8576259 Lin et al. Nov 2013 B2
8576470 Paolini, Jr. et al. Nov 2013 B2
8576475 Huang et al. Nov 2013 B2
8605032 Liu et al. Dec 2013 B2
8605354 Zhang et al. Dec 2013 B2
8649084 Wang et al. Feb 2014 B2
8665206 Lin et al. Mar 2014 B2
8670174 Sprague et al. Mar 2014 B2
8681191 Yang et al. Mar 2014 B2
8704756 Lin Apr 2014 B2
8717664 Wang et al. May 2014 B2
8750390 Sun et al. Jun 2014 B2
8786935 Sprague Jul 2014 B2
8797634 Paolini, Jr. et al. Aug 2014 B2
8810525 Sprague Aug 2014 B2
8873129 Paolini, Jr. et al. Oct 2014 B2
8902153 Bouchard et al. Dec 2014 B2
8902491 Wang et al. Dec 2014 B2
8917439 Wang et al. Dec 2014 B2
8928562 Gates et al. Jan 2015 B2
8928641 Chiu et al. Jan 2015 B2
8941885 Nishikawa et al. Jan 2015 B2
8964282 Wang et al. Feb 2015 B2
8976444 Zhang et al. Mar 2015 B2
9013394 Lin Apr 2015 B2
9013783 Sprague Apr 2015 B2
9019197 Lin Apr 2015 B2
9019198 Lin et al. Apr 2015 B2
9019318 Sprague et al. Apr 2015 B2
9082352 Cheng et al. Jul 2015 B2
9116412 Lin Aug 2015 B2
9146439 Zhang Sep 2015 B2
9171508 Sprague et al. Oct 2015 B2
9182646 Paolini, Jr. et al. Nov 2015 B2
9195111 Anseth et al. Nov 2015 B2
9199441 Danner Dec 2015 B2
9218773 Sun et al. Dec 2015 B2
9224338 Chan et al. Dec 2015 B2
9224342 Sprague et al. Dec 2015 B2
9224344 Chung et al. Dec 2015 B2
9230492 Harrington et al. Jan 2016 B2
9251736 Lin et al. Feb 2016 B2
9262973 Wu et al. Feb 2016 B2
9285649 Du et al. Mar 2016 B2
9299294 Lin et al. Mar 2016 B2
9341916 Telfer et al. May 2016 B2
9360733 Wang et al. Jun 2016 B2
9361836 Telfer et al. Jun 2016 B1
9390066 Smith et al. Jul 2016 B2
9390661 Chiu et al. Jul 2016 B2
9423666 Wang et al. Aug 2016 B2
9459510 Lin Oct 2016 B2
9460666 Sprague et al. Oct 2016 B2
9495918 Harrington et al. Nov 2016 B2
9501981 Lin et al. Nov 2016 B2
9513527 Chan et al. Dec 2016 B2
9513743 Sjodin et al. Dec 2016 B2
9514667 Lin Dec 2016 B2
9541814 Lin et al. Jan 2017 B2
9612502 Danner et al. Apr 2017 B2
9613587 Halfman et al. Apr 2017 B2
9620048 Sim et al. Apr 2017 B2
9671668 Chan et al. Jun 2017 B2
9672766 Sjodin Jun 2017 B2
9691333 Cheng et al. Jun 2017 B2
9721495 Harrington et al. Aug 2017 B2
9759980 Du et al. Sep 2017 B2
9792861 Chang et al. Oct 2017 B2
9792862 Hung et al. Oct 2017 B2
9812073 Lin et al. Nov 2017 B2
10027963 Su et al. Jul 2018 B2
10162242 Wang et al. Dec 2018 B2
10209556 Rosenfeld et al. Feb 2019 B2
10229641 Yang et al. Mar 2019 B2
10319313 Harris et al. Jun 2019 B2
10339876 Lin et al. Jul 2019 B2
10467984 Buckley et al. Nov 2019 B2
10672350 Amundson et al. Jun 2020 B2
11151951 Lin et al. Oct 2021 B2
20030102858 Jacobson et al. Jun 2003 A1
20040246562 Chung et al. Dec 2004 A1
20050253777 Zehner et al. Nov 2005 A1
20070103427 Zhou et al. May 2007 A1
20070176912 Beames et al. Aug 2007 A1
20080024429 Zehner Jan 2008 A1
20080024482 Gates et al. Jan 2008 A1
20080043318 Whitesides et al. Feb 2008 A1
20080136774 Harris et al. Jun 2008 A1
20080303780 Sprague et al. Dec 2008 A1
20090225398 Duthaler et al. Sep 2009 A1
20100156780 Jacobson et al. Jun 2010 A1
20100194733 Lin et al. Aug 2010 A1
20100194789 Lin et al. Aug 2010 A1
20100220121 Zehner et al. Sep 2010 A1
20100265561 Gates et al. Oct 2010 A1
20110043543 Chen et al. Feb 2011 A1
20110063314 Chiu et al. Mar 2011 A1
20110175875 Lin et al. Jul 2011 A1
20110221740 Yang et al. Sep 2011 A1
20120001957 Liu et al. Jan 2012 A1
20120098740 Chiu et al. Apr 2012 A1
20130046803 Parmar et al. Feb 2013 A1
20130063333 Arango et al. Mar 2013 A1
20130249782 Wu et al. Sep 2013 A1
20140055840 Zang et al. Feb 2014 A1
20140078576 Sprague Mar 2014 A1
20140204012 Wu et al. Jul 2014 A1
20140240210 Wu et al. Aug 2014 A1
20140253425 Zalesky et al. Sep 2014 A1
20140293398 Wang et al. Oct 2014 A1
20140362213 Tseng Dec 2014 A1
20150262255 Khajehnouri et al. Sep 2015 A1
20150268531 Wang et al. Sep 2015 A1
20150301246 Zang et al. Oct 2015 A1
20160180777 Lin et al. Jun 2016 A1
20190080666 Chappalli Mar 2019 A1
Non-Patent Literature Citations (4)
Entry
Korean Intellectual Property Office, “International Search Report and Written Opinion”, PCT/US2021/057648, dated Feb. 16, 2022.
Pappas, Thrasyvoulos N. “Model-based halftoning of color images.” IEEE Transactions on image processing 6.7 (1997): 1014-24.
Ostromoukhov, Victor et al., “Multi-Color and Artistic Dithering”, Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press / Addison-Wesley Publishing Co. (1999).
Ulichney, Robert A., “Void-and-Cluster Method for Dither Array Generation”, Human Vision Visual Processing and Digital Display IV, vol. 1913, International Society for Optics and Photonics, (1993).
Related Publications (1)
Number Date Country
20220139341 A1 May 2022 US
Provisional Applications (1)
Number Date Country
63108855 Nov 2020 US