1. Field of the Invention
The present invention relates to the rendering of digital image data, and in particular, to the binary or multilevel representation of images for printing or display purposes.
2. Background Description
Since images constitute an effective means of communicating information, displaying images should be as convenient as displaying text. However, many display devices, such as laser and ink jet printers, print only in a binary fashion. Furthermore, some image format standards only allow binary images. For example, the WAP 1.1 (Wireless Application Protocol) specification allows only one graphic format, WBMP, a one (1) bit version of the BMP (bitmap) format. Besides allowing only binary images, some image format standards and some displays only allow images of a limited number of pixels. In the WAP 1.1 standard, a WBMP image should not be larger than 150×150 pixels. Some WAP devices have screens that are very limited in terms of the number of pixels; for example, one WAP device has a screen that is 96 pixels wide by 65 pixels high. In order to render a digitized continuous tone input image on a binary output device, the image has to be converted to a binary image.
The process of converting a digitized continuous tone input image to a binary image so that the binary image appears to be a continuous tone image is known as digital halftoning.
In one type of digital halftoning process, ordered dither digital halftoning, the input digitized continuous tone image is compared, on a pixel by pixel basis, to a threshold taken from a threshold array. Many ordered dither digital halftoning methods suffer from low frequency artifacts. Because the human visual system has greater sensitivity at low frequencies (less than 12 cycles/degree), such low frequency artifacts are very noticeable.
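By way of illustration, the following minimal Python sketch tiles a threshold array over a grayscale image and performs the pixel by pixel comparison just described. The particular 4×4 Bayer matrix and the assumed 0-255 pixel range are illustrative choices, not taken from the text.

```python
import numpy as np

# Ordered dither: each pixel is compared to a threshold taken from a threshold
# array tiled over the image.  The 4x4 Bayer matrix below is a common
# illustrative choice, scaled to an assumed 0-255 pixel range.
BAYER_4X4 = (np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) + 0.5) / 16.0 * 255.0

def ordered_dither(image):
    h, w = image.shape
    # Tile the threshold array over the image and compare pixel by pixel.
    thresholds = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return np.where(image > thresholds, 255, 0).astype(np.uint8)

print(ordered_dither(np.full((4, 4), 128.0)))
```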
The visibility of low frequency artifacts in ordered dither digital halftoning methods has led to the development of methods producing binary images with a power spectrum having mostly higher frequency content, the so-called “blue noise” methods.
The most frequently used “blue noise” method is the error diffusion method. In an error diffusion halftoning system, an input digital image In (the digitized continuous tone input image) is introduced into the system on a pixel by pixel basis, where n represents the input image pixel number. Each input pixel has its corresponding error value En−1, where En−1 is the error value of the previous pixel (n−1), added to the input value In at a summing node, resulting in modified image data. The modified image data, the sum of the input value and the error value of the previous pixel (In+En−1), is passed to a threshold comparator. The modified image data is compared to the constant threshold value T0 to determine the appropriate output level On. Once the output level On is determined, it is subtracted from the modified image value to produce the input to an error filter. The error filter allocates its input, In+En−1−On, to subsequent pixels based upon an appropriate weighting scheme. Various weighting techniques may be used to generate the error level En for the subsequent input pixel. The cyclical processing of pixels is continued until the end of the input data is reached. (For a more complete description of error diffusion see, for example, “Digital Halftoning”, by Robert Ulichney, MIT Press, Cambridge, Mass. and London, England, 1990, pp. 239-319).
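The basic loop can be sketched as follows in a minimal one-dimensional form. The constant threshold T0, the 0/255 output levels, and the choice to pass the entire error to the next pixel are simplifying assumptions for the example; practical error filters such as the Floyd-Steinberg filter spread the error over several neighboring pixels.

```python
def error_diffusion_1d(pixels, t0=128, white=255, black=0):
    # Conventional error diffusion in one-dimensional form: each input pixel In
    # has the accumulated error of the previous pixel added, the sum is compared
    # to a constant threshold T0, and the quantization error is passed on
    # (here, entirely to the next pixel).
    output, error = [], 0.0
    for value in pixels:
        modified = value + error              # In + En-1
        on = white if modified > t0 else black
        error = modified - on                 # input to the error "filter"
        output.append(on)
    return output

print(error_diffusion_1d([10, 200, 128, 90, 250]))
```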
Although the error diffusion method presents an improvement over many ordered dither methods, artifacts are still present. There is an inherent edge enhancement in the error diffusion method. Other known artifacts produced by the error diffusion method include artifacts called “worms” and “snowplowing” which degrade image quality.
In U.S. Pat. No. 5,045,952, Eschbach disclosed selectively modifying the threshold level on a pixel by pixel basis in order to increase or decrease the edge enhancement of the output digital image. The improvements disclosed by Eschbach do not allow the control of the edge enhancement by controlling the high frequency portion of the error. Also, the improvements disclosed by Eschbach do not introduce parameters that can be selected to produce the image of the highest perceptual quality at a specific output device.
In U.S. Pat. No. 5,757,976, Shu disclosed utilizing a set of error filters having different sizes for diffusing the input of the error filter among neighboring pixels in predetermined tonal areas of an image and adding “noise” to the threshold in order to achieve a smooth halftone image quality. The improvements disclosed by Shu do not introduce parameters that can be selected to produce the image of the highest perceptual quality at a specific output device.
It is the primary object of this invention to provide a method for generating a halftone image from a digitized continuous tone input image that provides adjustment of the local contrast of the resulting halftone image, minimizes artifacts and is easily implemented.
It is also an object of this invention to provide a method for generating a halftone image with parameters that can be selected to produce the image of highest quality at a specific output device.
To achieve the objects of this invention, one aspect of this invention includes an adaptive halftoning method where the difference between a digital image and a filtered digital image is introduced into the system on a pixel by pixel basis; each input difference pixel having a corresponding error value, generated from the previous pixels, added to the input value at a summing node, resulting in modified image difference data; the modified image difference data being passed to a threshold comparator where the modified image difference data is compared to a threshold value, the threshold value varying according to the properties of the digital image, to determine the appropriate output level; the output level is subtracted from the modified image difference value to produce the input to an error filter; the output of the error filter is multiplied by an adaptation coefficient, where the adaptation coefficient varies according to the properties of the digital image, to generate the error level for the subsequent input pixel; and the cyclical processing of pixels is continued until the end of the input data is reached.
In another aspect of this invention, in the method described above, a histogram modification is performed on the image, and the difference between the histogram modified digital image and the filtered digital image is introduced into the system on a pixel by pixel basis.
In still another aspect of this invention, in the method described above, the histogram modification is performed on the difference between the digital image and the filtered digital image and the histogram modified difference is introduced into the system on a pixel by pixel basis.
In a further aspect of this invention, in the method described above, the selective changing of the adaptation coefficient comprises dividing the difference between the value at the pixel and the filtered value at the pixel by the filtered value at the pixel, multiplying the absolute value of the result of the division by a first parameter, and adding a second parameter to the result of the multiplication, thereby obtaining the coefficient.
In still another aspect of this invention, in the method described above, the threshold calculation comprises multiplying the filtered value at the pixel by a third parameter.
In still another aspect of this invention, in the method described above and including the adaptation coefficient and threshold calculated as in the two preceding paragraphs, where the filter is a filter of finite extent, the extent of the filter and the first, second, and third parameters are selected to produce the image of the highest perceptual quality at a specific output device.
The methods, systems and computer readable code of this invention can be used to generate halftone images in order to obtain images of the highest perceptual quality when rendered on displays and printers. The methods, systems and computer readable code of this invention can also be used for the design of computer generated holograms and for the encoding of the continuous tone input data.
The novel features that are considered characteristic of the invention are set forth with particularity in the appended claims. The invention itself, however, both as to its organization and its method of operation, together with other objects and advantages thereof will be best understood from the following description of the illustrated embodiment when read in connection with the accompanying drawings wherein:
A method and system for generating a halftone image from a digitized continuous tone input image are disclosed that provide adjustment of the local contrast of the resulting halftone image, minimize artifacts, are easily implemented, and contain parameters that can be selected on the basis of device characteristics, such as brightness, dynamic range, and pixel count, to produce the image of highest perceptual quality at a specific output device.
A block diagram of selected components of an embodiment of a system of this invention for generating a halftone image from a digitized continuous tone input image (also referred to as a digital image) is shown in
Avn=h( . . . , Ik, . . . , In, . . . ) (1)
where h is a functional form spanning a number of pixels. It should be apparent that the input digital image 10 can be a two dimensional array of pixel values and that the array can be represented as a linear array by using such approaches as raster or serpentine representations. For a two dimensional array of pixel values, the filter 20 will also be a two dimensional array of filter coefficients and can also be represented as a linear array. The functional forms will be shown in the one dimensional form for ease of interpretation.
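For example, a serpentine representation of a two-dimensional pixel array can be produced as in the following sketch. The Python/NumPy formulation is an illustrative assumption, not part of the original description.

```python
import numpy as np

def serpentine_order(image):
    # Represent a 2-D pixel array as a linear array using a serpentine scan:
    # even rows are traversed left-to-right, odd rows right-to-left.
    rows = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(image)]
    return np.concatenate(rows)

print(serpentine_order(np.arange(12).reshape(3, 4)))
```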
In one embodiment, the output of the filtering block 20 has the form
Avn={Σj=n−N to n+N Ij}/(2N+1) (2)
If the filtering block 20 comprises a linear filter, Avn will be given by a sum of terms, each term comprising the product of an input image pixel value multiplied by a filter coefficient.
It should be apparent that special consideration has to be given to the pixels at the boundaries of the image. For example, the calculations in equation (2) can be started N pixels from the boundary. In that case, the filtered image and the resulting halftone image are smaller than the input image. In another case, the image is continued at the boundaries, the continuation pixels having the same value as the boundary pixel. It should be apparent that other methods of taking into account the effect of the boundaries can be used.
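A sketch of the moving average of Equation (2) using the second boundary treatment (continuing the image with the boundary value) might look like the following; the Python/NumPy formulation is an assumption for illustration.

```python
import numpy as np

def moving_average(pixels, n):
    # Avn of Equation (2): the mean of the 2N+1 pixels centred on pixel n.
    # Boundaries are handled by replicating the boundary value, one of the
    # options mentioned above.
    padded = np.pad(np.asarray(pixels, dtype=float), n, mode="edge")
    kernel = np.ones(2 * n + 1) / (2 * n + 1)
    return np.convolve(padded, kernel, mode="valid")

print(moving_average([0, 0, 255, 255, 255, 0], 2))
```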
The output of the filtering block 20, Avn, is subtracted from the input digital image In at node 25, resulting in a difference value, Dn. In the embodiment in which histogram modification is not included, Dn is the input to a summing node 70. At the summing node 70, a corresponding error value En−1, where En−1 is the error value accumulated from the previous pixels, is added to the input value Dn, resulting in a modified image datum. The modified image datum, Dn+En−1, is compared to the output of the threshold calculation block 30 in the threshold comparison block 40 to produce the halftoning output, On. (In the case of a binary output device, if the modified image datum is above the threshold, the output level is the white level. Otherwise, the output level is the black level.) Once the output level On is determined, it is subtracted from the modified image datum to produce the input to an error filter block 50. The error filter block 50 allocates its input, Dn+En−1−On, to subsequent pixels based upon an appropriate weighting scheme. The weighted contributions of the error filter block 50 input are stored, and all the contributions to the next input pixel are summed to produce the output of the error filter block 50, the error value. The output of the error filter block 50, the error value, is multiplied by the adaptation coefficient in block 60 to generate the error level En for the subsequent input pixel. The cyclical processing of pixels, as further described below, is continued until the end of the input data is reached.
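The loop just described can be summarized in the following one-dimensional Python sketch. The threshold_fn and coeff_fn callables stand in for the threshold calculation block 30 and the coefficient calculation block 80, whose specific forms are given below in Equations (4) and (6); passing the whole weighted error to the next pixel is a simplification of the error filter block 50, and the 0/255 output levels are assumptions for the example.

```python
import numpy as np

def adaptive_error_diffusion_1d(pixels, n, threshold_fn, coeff_fn,
                                white=255, black=0):
    # Filtering block 20: moving average of Eq. (2), boundary pixels replicated.
    x = np.asarray(pixels, dtype=float)
    padded = np.pad(x, n, mode="edge")
    av = np.convolve(padded, np.ones(2 * n + 1) / (2 * n + 1), mode="valid")

    output, error = [], 0.0                     # E-1 = 0
    for i, value in enumerate(x):
        d = value - av[i]                       # node 25: Dn = In - Avn
        modified = d + error                    # node 70: Dn + En-1
        t = threshold_fn(av[i])                 # block 30
        on = white if modified > t else black   # block 40
        # blocks 45/50/60: weight the quantization error and pass it on
        # (here entirely to the next pixel, a simplification of the error filter).
        error = coeff_fn(value, av[i]) * (modified - on)
        output.append(on)
    return output

# Example with simple placeholder functions for the two calculation blocks.
print(adaptive_error_diffusion_1d([10, 200, 128, 90, 250], n=2,
                                  threshold_fn=lambda av: 0.0,
                                  coeff_fn=lambda i_n, av: 1.0))
```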
Referring again to
t( . . . , Ik, . . . , In, . . . ) (3)
where t is a functional form spanning a number of pixels. The form in equation (3) allows the varying of the threshold according to properties of the digital image.
In one embodiment,
t( . . . , Ik, . . . , In, . . . )=C0{Σj=n−N to n+N Ij}/(2N+1) (4)
In another embodiment, the output of the threshold calculation block is a linear combination of terms, each term comprising the product of an input image pixel value multiplied by a coefficient. It should be apparent that this embodiment can also be expressed as a function times a parameter.
The output of the threshold calculation block 30 is the threshold.
The first pixel value to be processed, I0, produces a difference value D0 from summing node 25 and produces a value of D0 out of summing node 70 (since E−1 is equal to 0). D0 is then compared to the threshold, producing an output of O0. At summing node 45, O0 is subtracted from D0 to produce the input to the error filter 50. The error filter 50 allocates its input, D0−O0, to subsequent pixels based upon an appropriate weighting scheme which determines how much the current input contributes to each subsequent pixel. Various weighting techniques may be used (see, for example, “Digital Halftoning” by Robert Ulichney, MIT Press, Cambridge, Mass. and London, England, 1990, pp. 239-319). The output of error filter 50 is multiplied by an adaptation coefficient 60. The adaptation coefficient 60 is the output of the coefficient calculation block 80. In one embodiment, the output of the coefficient calculation block 80 has the form
C1+C2 abs{f( . . . , Ik, . . . , In, . . . )/g( . . . , Ik, . . . , In, . . . )} (5)
where f and g are functional forms spanning a number of pixels. The form of Equation (5) allows the selective changing of the coefficient according to the local properties of the digital image. C1 and C2 and the parameter in the threshold expression can be selected to produce the image of highest perceptual quality at a specific output device.
In another embodiment, the output of the coefficient calculation block 80 has the form
C1+C2{abs((In−{Σj=n−N to n+N Ij}/(2N+1))/({Σj=n−N to n+N Ij}/(2N+1)))} (6)
The input of error filter block 50 is multiplied by weighting coefficients and stored. All the contributions from the stored weighted values to the next pixel are summed to produce the output of the error filter block 50. The output of the error filter block 50 is multiplied by the adaptation coefficient 60. The delay block 65 stores the result of the product of the adaptation coefficient 60 and the output of the error filter block 50. (In one embodiment, which uses the Floyd-Steinberg filter, the input to the error filter is distributed according to the filter weights to the next pixel in the processing line and to neighboring pixels in the following line.) The output of delay block 65 is En−1 and is delayed by one pixel. (When the first pixel is processed, the output of the delay, E0, is added to the subsequent difference, D1.)
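The allocation of the error filter's input to the next pixel in the processing line and to the neighboring pixels in the following line can be sketched as follows. The classic Floyd-Steinberg weights (7/16, 3/16, 5/16, 1/16) are used; the two-dimensional error buffer layout is an assumption for the example.

```python
import numpy as np

def distribute_floyd_steinberg(error_buffer, row, col, err):
    # Allocate the error-filter input to subsequent pixels using the
    # Floyd-Steinberg weights: 7/16 to the next pixel in the current row and
    # 3/16, 5/16, 1/16 to the neighboring pixels in the following row.
    h, w = error_buffer.shape
    for dr, dc, weight in ((0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)):
        r, c = row + dr, col + dc
        if 0 <= r < h and 0 <= c < w:
            error_buffer[r, c] += err * weight
    return error_buffer

print(distribute_floyd_steinberg(np.zeros((3, 4)), 0, 1, 16.0))
```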
It should be apparent that the sequence order of error filter block 50 and the adaptation coefficient block 60 can be interchanged with similar results. In the embodiment in which the adaptation coefficient 60 multiplies the difference between the modified image datum and the output level, shown in
When the next pixel, I1, is introduced into the system from the image input block 10, it produces a difference value D1 from summing node 25 and produces a value of (D1+E0) out of summing node 70.
The above steps repeat for each subsequent pixel in the digital image, thereby producing a halftone image, the sequence O0, O1, . . . , On. The modification of the threshold level and the adaptation coefficient allows control of the amount of edge enhancement and provides the opportunity to reduce artifacts.
In the embodiment in which histogram modification is included after the summing node 25, Dn is the input to the histogram modification block 75 and the output of the histogram modification block 75 is the input to the summing node 70. The above description follows if Dn is replaced by the output of the histogram modification block 75. It should be apparent that histogram modification operates on the entire difference image. (Histogram modification is well known to those skilled in the art. For a discussion of histogram modification, see, for example, Digital Image Processing, by William K. Pratt, John Wiley and Sons, 1978, ISBN 0-471-01888-0, pp. 311-318. For a discussion of histogram equalization, a form of histogram modification, see, for example, Digital Image Processing, by R. C. Gonzalez and P. Wintz, Addison-Wesley Publishing Co., 1977, ISBN 0-201-02596-3, pp. 119-126.)
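As one concrete form of histogram modification, histogram equalization can be sketched as below. The snippet assumes non-negative integer data in a fixed range, so a difference image would first have to be offset into that range; that offset is an assumption not spelled out in the text.

```python
import numpy as np

def histogram_equalize(image, levels=256):
    # Histogram equalization: remap pixel values so that their cumulative
    # distribution becomes roughly uniform.  Non-negative integers in the
    # range 0..levels-1 are assumed.
    image = np.asarray(image, dtype=np.int64)
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / hist.sum()
    mapping = np.round(cdf * (levels - 1)).astype(np.int64)
    return mapping[image]

print(histogram_equalize(np.array([[0, 0, 1], [2, 2, 255]])))
```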
In the embodiment in which histogram modification is included after the image input block 10, Dn is the difference between the output of the histogram modification block 75 (
The method described above improves on the error diffusion method by utilizing the difference between the digital image and the filtered digital image as input into the system instead of the digital image, by multiplying the output of the error filter by the adaptation coefficient, where the adaptation coefficient varies according to the properties of the digital image, and by using a threshold value that varies according to the properties of the digital image to determine the appropriate output level.
Sample Embodiment
In a specific embodiment, shown in
t( . . . , Ik, . . . , In, . . . )=C0Avn (7)
which is the same function as in Equation 4 when the output of the filtering block 20, Avn, is given by Equation (2). The output of the coefficient calculation block 80 depends on the output of the filtering block 20, Avn, and the difference Dn and is given by
C1+C2{abs(Dn/Avn)} (8)
When the output of the filtering block 20, Avn, is given by Equation (2), Equation (8) is the same as Equation (6).
Histogram equalization is included after the summing node 25. The processing of the input image pixels 10 occurs as described in the preceding section.
The value of N in Equation (2) (the extent of the filter) and C0, C1, and C2 (the first, second, and third parameters) can be selected to produce the image of highest perceptual quality at a specific output device. For a WBMP image on a specific monochrome mobile phone display, utilizing a Floyd-Steinberg error filter, the following parameters yield images of high perceptual quality (these values are wired into Equations (7) and (8) in the sketch following this list):
N=7,
C0=−20,
C1=0.05, and
C2=1.
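A small sketch showing these parameter values plugged into Equations (7) and (8) follows. The sample local average and difference used in the final line are arbitrary, and the pixel normalization in which these parameter values apply is not stated in the text.

```python
# Example parameters from the text: N = 7, C0 = -20, C1 = 0.05, C2 = 1.
N, C0, C1, C2 = 7, -20.0, 0.05, 1.0

def threshold(av_n):
    # Equation (7): t = C0 * Avn
    return C0 * av_n

def adaptation_coefficient(d_n, av_n):
    # Equation (8): C1 + C2 * abs(Dn / Avn)
    return C1 + C2 * abs(d_n / av_n) if av_n != 0 else C1

# Evaluate both quantities for an arbitrary local average and difference value.
print(threshold(0.5), adaptation_coefficient(0.1, 0.5))
```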
In another embodiment, shown in
The embodiments described herein can also be expanded to include composite images, such as color images, where each color component might be treated individually by the algorithm. In the case of color input images, the value of N in Equation (2) (the extent of the filter) and C0, C1, and C2 (the first, second, and third parameters) can be selected to control the color difference at a color transition while minimizing any effects on the brightness at that location. Other possible applications of these embodiments include the design of computer generated holograms and the encoding of the continuous tone input data.
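Treating each color component individually might look like the following sketch. The fixed-threshold halftone_channel function is only a placeholder standing in for the adaptive method described above; it is not the method itself, and the RGB layout is an assumption.

```python
import numpy as np

def halftone_channel(channel):
    # Placeholder for halftoning one color component; in practice this would be
    # the adaptive method described above, with parameters chosen per device.
    return np.where(channel > 127, 255, 0).astype(np.uint8)

def halftone_color(image):
    # Apply the single-component algorithm to each color component of a
    # composite (e.g. RGB) image and reassemble the result.
    return np.dstack([halftone_channel(image[..., k])
                      for k in range(image.shape[-1])])

print(halftone_color(np.zeros((2, 2, 3), dtype=np.uint8)).shape)
```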
Although the embodiments described herein are most easily understood for binary output devices, the embodiments described herein can also be expanded to include rendering an output image when the number of gray levels in the image exceeds that obtainable on the rendering device. It should be apparent how to expand the embodiments described herein to M-ary displays or M-ary rendering devices (see, for example, “Digital Halftoning” by Robert Ulichney, MIT Press, Cambridge, Mass., and London, England, 1990, p. 341).
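For an M-ary device, the single threshold comparison can be replaced by a quantizer that selects the nearest of M output levels, as in this sketch; equally spaced levels over an assumed 0-255 range are one common choice, and other spacings are possible.

```python
import numpy as np

def quantize_to_m_levels(value, m=4, vmax=255.0):
    # Snap a modified image datum to the nearest of M equally spaced output
    # levels; the quantization error is then diffused exactly as in the
    # binary case.
    levels = np.linspace(0.0, vmax, m)
    return float(levels[np.argmin(np.abs(levels - value))])

print([quantize_to_m_levels(v) for v in (10, 100, 180, 250)])
```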
It should be appreciated that the various embodiments described above are provided merely for purposes of example and do not constitute limitations of the present invention. Rather, various other embodiments are also within the scope of the claims, such as the following. The filter 20 can be selected to impart the desired functional behavior of the difference. The filter 20 can, for example, be a DC preserving filter. The threshold 40 and the adaptation coefficient 60 can also be selected to impart the desired characteristics of the image.
It should be apparent that Equations (4) and (5) are exemplary forms of functional expressions with parameters that can be adjusted. Functional expressions for the threshold and the adaptation coefficient, where the expressions include parameters that can be adjusted, will satisfy the object of this invention.
In general, the techniques described above may be implemented, for example, in hardware, software, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to data entered using the input device to perform the functions described and to generate output information. The output information may be applied to one or more output devices.
Elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.
Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may be a compiled or interpreted programming language. Each computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output.
The generation of the halftone image can occur at a location remote from the rendering printer or display. The operations performed in software utilize instructions (“code”) that are stored in computer-readable media and store results and intermediate steps in computer-readable media.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Electrical, electromagnetic or optical signals that carry digital data streams representing various types of information are exemplary forms of carrier waves transporting the information.
Other embodiments of the invention, including combinations, additions, variations and other modifications of the disclosed embodiments will be obvious to those skilled in the art and are within the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
3820133 | Adorney et al. | Jun 1974 | A |
3864708 | Allen | Feb 1975 | A |
4070587 | Hanakata | Jan 1978 | A |
4072973 | Mayo | Feb 1978 | A |
4089017 | Buldini | May 1978 | A |
4154523 | Rising et al. | May 1979 | A |
4168120 | Freier et al. | Sep 1979 | A |
4284876 | Ishibashi et al. | Aug 1981 | A |
4309712 | Iwakura | Jan 1982 | A |
4347518 | Williams et al. | Aug 1982 | A |
4364063 | Anno et al. | Dec 1982 | A |
4385302 | Moriguchi et al. | May 1983 | A |
4391535 | Palmer | Jul 1983 | A |
4415908 | Sugiura | Nov 1983 | A |
4443121 | Arai | Apr 1984 | A |
4447818 | Kurata et al. | May 1984 | A |
4464669 | Sekiya et al. | Aug 1984 | A |
4514738 | Nagato et al. | Apr 1985 | A |
4524368 | Inui et al. | Jun 1985 | A |
4540992 | Moteki et al. | Sep 1985 | A |
4563691 | Noguchi et al. | Jan 1986 | A |
4607262 | Moriguchi et al. | Aug 1986 | A |
4638372 | Leng et al. | Jan 1987 | A |
4686549 | Williams et al. | Aug 1987 | A |
4688051 | Kawakami et al. | Aug 1987 | A |
4704620 | Ichihashi et al. | Nov 1987 | A |
4738526 | Larish | Apr 1988 | A |
4739344 | Sullivan et al. | Apr 1988 | A |
4777496 | Maejima et al. | Oct 1988 | A |
4805033 | Nishikawa | Feb 1989 | A |
4809063 | Moriguchi et al. | Feb 1989 | A |
4884080 | Hirahara et al. | Nov 1989 | A |
4907014 | Tzeng et al. | Mar 1990 | A |
4933709 | Manico et al. | Jun 1990 | A |
4962403 | Goodwin et al. | Oct 1990 | A |
5006866 | Someya | Apr 1991 | A |
5045952 | Eschbach | Sep 1991 | A |
5046118 | Ajewole et al. | Sep 1991 | A |
5066961 | Yamashita | Nov 1991 | A |
5086306 | Sasaki | Feb 1992 | A |
5086484 | Katayama et al. | Feb 1992 | A |
5109235 | Sasaki | Apr 1992 | A |
5115252 | Sasaki | May 1992 | A |
5130821 | Ng | Jul 1992 | A |
5132703 | Nakayama | Jul 1992 | A |
5132709 | West | Jul 1992 | A |
5162813 | Kuroiwa et al. | Nov 1992 | A |
5184150 | Sugimoto | Feb 1993 | A |
5208684 | Itoh | May 1993 | A |
5244861 | Campbell et al. | Sep 1993 | A |
5248995 | Izumi | Sep 1993 | A |
5268706 | Sakamoto | Dec 1993 | A |
5285220 | Suzuki et al. | Feb 1994 | A |
5307425 | Otsuka | Apr 1994 | A |
5323245 | Rylander | Jun 1994 | A |
5333246 | Nagasaka | Jul 1994 | A |
5422662 | Fukushima et al. | Jun 1995 | A |
5450099 | Stephenson et al. | Sep 1995 | A |
5455685 | Mori | Oct 1995 | A |
5469203 | Hauschild | Nov 1995 | A |
5479263 | Jacobs et al. | Dec 1995 | A |
5497174 | Stephany et al. | Mar 1996 | A |
5521626 | Tanaka et al. | May 1996 | A |
5539443 | Mushika et al. | Jul 1996 | A |
5569347 | Obata et al. | Oct 1996 | A |
5576745 | Matsubara | Nov 1996 | A |
5602653 | Curry | Feb 1997 | A |
5617223 | Burns et al. | Apr 1997 | A |
5623297 | Austin et al. | Apr 1997 | A |
5623581 | Attenberg | Apr 1997 | A |
5625399 | Wiklof et al. | Apr 1997 | A |
5642148 | Fukushima et al. | Jun 1997 | A |
5644351 | Matsumoto et al. | Jul 1997 | A |
5646672 | Fukushima | Jul 1997 | A |
5664253 | Meyers | Sep 1997 | A |
5668638 | Knox | Sep 1997 | A |
5694484 | Cottrell et al. | Dec 1997 | A |
5703644 | Mori et al. | Dec 1997 | A |
5706044 | Fukushima | Jan 1998 | A |
5707082 | Murphy | Jan 1998 | A |
5711620 | Sasaki et al. | Jan 1998 | A |
5719615 | Hashiguchi et al. | Feb 1998 | A |
5721578 | Nakai et al. | Feb 1998 | A |
5724456 | Boyack et al. | Mar 1998 | A |
5729274 | Sato | Mar 1998 | A |
5757976 | Shu | May 1998 | A |
5777599 | Poduska, Jr. | Jul 1998 | A |
5781315 | Yamaguchi | Jul 1998 | A |
5784092 | Fukuoka | Jul 1998 | A |
5786837 | Kaerts et al. | Jul 1998 | A |
5786900 | Sawano | Jul 1998 | A |
5800075 | Katsuma et al. | Sep 1998 | A |
5808653 | Matsumoto et al. | Sep 1998 | A |
5809164 | Hultgren, III | Sep 1998 | A |
5809177 | Metcalfe et al. | Sep 1998 | A |
5818474 | Takahashi et al. | Oct 1998 | A |
5818975 | Goodwin et al. | Oct 1998 | A |
5835244 | Bestmann | Nov 1998 | A |
5835627 | Higgins et al. | Nov 1998 | A |
5841461 | Katsuma | Nov 1998 | A |
5859711 | Barry et al. | Jan 1999 | A |
5870505 | Wober et al. | Feb 1999 | A |
5880777 | Savoye et al. | Mar 1999 | A |
5889546 | Fukuoka | Mar 1999 | A |
5897254 | Tanaka et al. | Apr 1999 | A |
5913019 | Attenberg | Jun 1999 | A |
5956067 | Isono et al. | Sep 1999 | A |
5956421 | Tanaka et al. | Sep 1999 | A |
5970224 | Salgado et al. | Oct 1999 | A |
5978106 | Hayashi | Nov 1999 | A |
5995654 | Buhr et al. | Nov 1999 | A |
5999204 | Kojima | Dec 1999 | A |
6005596 | Yoshida et al. | Dec 1999 | A |
6028957 | Katori et al. | Feb 2000 | A |
6069982 | Reuman | May 2000 | A |
6104421 | Iga et al. | Aug 2000 | A |
6104468 | Bryniarski et al. | Aug 2000 | A |
6104502 | Shiomi | Aug 2000 | A |
6106173 | Suzuki et al. | Aug 2000 | A |
6108105 | Takeuchi et al. | Aug 2000 | A |
6128099 | Delabastita | Oct 2000 | A |
6128415 | Hultgren, III et al. | Oct 2000 | A |
6133983 | Wheeler | Oct 2000 | A |
6157459 | Shiota et al. | Dec 2000 | A |
6172768 | Yamada et al. | Jan 2001 | B1 |
6186683 | Shibuki | Feb 2001 | B1 |
6204940 | Lin et al. | Mar 2001 | B1 |
6208429 | Anderson | Mar 2001 | B1 |
6226021 | Kobayashi et al. | May 2001 | B1 |
6233360 | Metcalfe et al. | May 2001 | B1 |
6243133 | Spaulding et al. | Jun 2001 | B1 |
6263091 | Jain et al. | Jul 2001 | B1 |
6282317 | Luo et al. | Aug 2001 | B1 |
6293651 | Sawano | Sep 2001 | B1 |
6402283 | Schulte | Jun 2002 | B2 |
6425699 | Doval | Jul 2002 | B1 |
6447186 | Oguchi et al. | Sep 2002 | B1 |
6456388 | Inoue et al. | Sep 2002 | B1 |
6462835 | Loushin et al. | Oct 2002 | B1 |
6501566 | Ishiguro et al. | Dec 2002 | B1 |
6537410 | Arnost et al. | Mar 2003 | B2 |
6563945 | Holm | May 2003 | B2 |
6567111 | Kojima et al. | May 2003 | B2 |
6577751 | Yamamoto | Jun 2003 | B2 |
6583852 | Baum et al. | Jun 2003 | B2 |
6608926 | Suwa | Aug 2003 | B1 |
6614459 | Fujimoto et al. | Sep 2003 | B2 |
6628417 | Naito et al. | Sep 2003 | B1 |
6628823 | Holm | Sep 2003 | B1 |
6628826 | Gilman et al. | Sep 2003 | B1 |
6628899 | Kito | Sep 2003 | B1 |
6650771 | Walker | Nov 2003 | B1 |
6661443 | Bybell et al. | Dec 2003 | B2 |
6671063 | Iida | Dec 2003 | B1 |
6690488 | Reuman | Feb 2004 | B1 |
6694051 | Yamazoe et al. | Feb 2004 | B1 |
6711285 | Noguchi | Mar 2004 | B2 |
6760489 | Kuwata | Jul 2004 | B1 |
6762855 | Goldberg et al. | Jul 2004 | B1 |
6771832 | Naito et al. | Aug 2004 | B1 |
6819347 | Saquib et al. | Nov 2004 | B2 |
6826310 | Trifonov et al. | Nov 2004 | B2 |
6842186 | Bouchard et al. | Jan 2005 | B2 |
6906736 | Bouchard et al. | Jun 2005 | B2 |
6937365 | Gorian et al. | Aug 2005 | B2 |
6956967 | Gindele et al. | Oct 2005 | B2 |
6999202 | Bybell et al. | Feb 2006 | B2 |
7050194 | Someno et al. | May 2006 | B1 |
7092116 | Calaway | Aug 2006 | B2 |
7127108 | Kinjo et al. | Oct 2006 | B2 |
7129980 | Ashida | Oct 2006 | B1 |
7154621 | Rodriguez et al. | Dec 2006 | B2 |
7154630 | Nimura et al. | Dec 2006 | B1 |
7167597 | Matsushima | Jan 2007 | B2 |
7200265 | Imai | Apr 2007 | B2 |
7224476 | Yoshida | May 2007 | B2 |
7260637 | Kato | Aug 2007 | B2 |
7272390 | Adachi et al. | Sep 2007 | B1 |
7283666 | Saquib | Oct 2007 | B2 |
7336775 | Tanaka et al. | Feb 2008 | B2 |
7548260 | Yamaguchi | Jun 2009 | B2 |
7557950 | Hatta et al. | Jul 2009 | B2 |
20030021478 | Yoshida | Jan 2003 | A1 |
20030038963 | Yamaguchi | Feb 2003 | A1 |
20040073783 | Ritchie | Apr 2004 | A1 |
20040179226 | Burkes et al. | Sep 2004 | A1 |
20040207712 | Bouchard et al. | Oct 2004 | A1 |
20050005061 | Robins | Jan 2005 | A1 |
20050219344 | Bouchard | Oct 2005 | A1 |
20070036457 | Saquib | Feb 2007 | A1 |
20080017026 | Dondlinger | Jan 2008 | A1 |
20090128613 | Bouchard et al. | May 2009 | A1 |
Number | Date | Country |
---|---|---|
0 204 094 | Apr 1986 | EP |
0 454 495 | Oct 1991 | EP |
0 454 495 | Oct 1991 | EP |
0 619 188 | Oct 1994 | EP |
0 625 425 | Nov 1994 | EP |
0 626 611 | Nov 1994 | EP |
0 791 472 | Feb 1997 | EP |
0 762 736 | Mar 1997 | EP |
0 773 470 | May 1997 | EP |
0 939 359 | Sep 1999 | EP |
1 004 442 | May 2000 | EP |
1 056 272 | Nov 2000 | EP |
1 078 750 | Feb 2001 | EP |
1 137 247 | Sep 2001 | EP |
1 201 449 | Oct 2001 | EP |
1 392 514 | Sep 2005 | EP |
0 933 679 | Apr 2008 | EP |
1 393 544 | Feb 2010 | EP |
2 356 375 | May 2001 | GB |
58-164368 | Sep 1983 | JP |
59-127781 | Jul 1984 | JP |
63-209370 | Aug 1988 | JP |
01 040371 | Feb 1989 | JP |
02-248264 | Oct 1990 | JP |
02-289368 | Nov 1990 | JP |
03-024972 | Feb 1991 | JP |
03-222588 | Oct 1991 | JP |
04-008063 | Jan 1992 | JP |
4-119338 | Apr 1992 | JP |
05-136998 | Jun 1993 | JP |
06 183033 | Jul 1994 | JP |
06 266514 | Sep 1994 | JP |
06-292005 | Oct 1994 | JP |
6-308632 | Nov 1994 | JP |
06-350888 | Dec 1994 | JP |
08-3076999 | Nov 1996 | JP |
9-138465 | May 1997 | JP |
09 167129 | Jun 1997 | JP |
10-285390 | Oct 1998 | JP |
11-055515 | Feb 1999 | JP |
11 505357 | May 1999 | JP |
11-275359 | Oct 1999 | JP |
2000-050077 | Feb 2000 | JP |
2000-050080 | Feb 2000 | JP |
2000-184270 | Jun 2000 | JP |
2001-160908 | Jun 2001 | JP |
2001-273112 | Oct 2001 | JP |
2002 199221 | Jul 2002 | JP |
2002 247361 | Aug 2002 | JP |
2003-008986 | Jan 2003 | JP |
2001-0037684 | May 2001 | KR |
WO 9734257 | Sep 1997 | WO |
WO 99 53415 | Oct 1999 | WO |
WO 0004492 | Jan 2000 | WO |
WO 0101669 | Jan 2001 | WO |
WO 01031432 | May 2001 | WO |
WO 02078320 | Oct 2002 | WO |
WO 02096651 | Dec 2002 | WO |
WO 02098124 | Dec 2002 | WO |
WO 03071780 | Aug 2003 | WO |
WO 04077816 | Sep 2004 | WO |
WO 05 006200 | Jan 2005 | WO |
Number | Date | Country | |
---|---|---|---|
Parent | 09870537 | May 2001 | US |
Child | 11847894 | US |