The present invention relates to a method and apparatus for scaling digital images and, more particularly, to a method and apparatus for scaling images in two dimensions.
It is often desired to scale an image to an arbitrary size. It is common practice, when scaling, to do so independently in the horizontal and vertical directions. The images produced using such an approach, however, tend to suffer from poor image quality, including jagged edges, commonly known as “stair-stepping”.
Directional interpolation improves image quality by reducing stair-stepping when scaling an image. Algorithms incorporating directional interpolation techniques analyze the local brightness-gradient characteristics of the image and perform interpolation based on those characteristics.
U.S. Pat. No. 5,579,053 to Pandel discloses a method for raster conversion by interpolating in the direction of minimum change in brightness value between a pair of points in different raster lines fixed by a perpendicular interpolation line.
U.S. Pat. No. 6,219,464 B1 to Greggain et al. discloses a method and apparatus to generate a target pixel positioned between two lines of input source data. The Greggain method includes comparing pixels of different lines of the source data in a region surrounding the target pixel, and then generating the target pixel by interpolating first along the direction having the minimum change in brightness value between a pair of pixels and then along the horizontal direction.
The Pandel and Greggain methods have associated disadvantages. Designed for use solely with interlaced video images, both the Pandel and Greggain methods select a minimum-brightness direction for interpolation from a set of directions that does not include the horizontal direction, a basic direction for any image.
The Greggain method produces an output pixel by interpolating in the horizontal direction as a final step. This is not optimal since the horizontal direction is in general not orthogonal to the direction of minimum brightness change. Furthermore, the Greggain method, like that of Pandel, utilizes only linear interpolation techniques for producing the output pixel. Linear interpolation, acceptable for use when interpolating in the minimum-brightness-change direction due to the low spatial frequencies present along that direction, is sub-optimal when interpolating in other directions due to the higher spatial frequencies of brightness change present in those directions. In the case of graphic image sources, the Pandel and Greggain methods tend to cause text overshoot or ringing that adversely impacts image quality.
We describe a method and apparatus for two-dimensional image scaling. The method includes determining a sub-image including an array of input pixels and evaluating a brightness for each input pixel contained within the sub-image. The method further includes determining a number of brightness levels associated with the sub-image, selecting at least one of a plurality of replication techniques responsive to the number of brightness levels, and applying the selected replication technique to the sub-image.
The present invention is implemented in a display system 30 as shown in
The image source 32 is a CPU or other device capable of generating image data directly, and of receiving image data from yet other sources such as I/O device 34. The image source 32 operates to place image data 33 and image-processing commands 35 in input memory 36, which is also accessible by the display processor 38. The display processor produces a scaled output image 39 in output memory 40, which is also accessible by the pixel display 42. The pixel display retrieves the output image 39 present in the output memory 40 and displays it.
The display processor 38 is shown in further detail in
The method of the invention produces as output a scaled version of an input pixel data source, in which the processing performed is determined by the type of the relevant local input data, namely either graphics-generated data or video (natural image) data.
For each target output pixel, the type of the relevant source data is determined by examination of the brightness levels of four pixels of the input data that are geometrically most closely associated with the target output pixel. Video-source data is assumed in the case that the pixels exhibit four distinct brightness levels, while graphics-generated data is generally assumed otherwise. The invention uses a video-optimized scaling procedure in the case of video-source data, or selects one of five graphics-optimized scaling procedures to be applied in the case of graphics-source data.
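As an illustrative sketch in C, this classification may be carried out by counting the distinct brightness levels among the four center pixels; the function name is hypothetical, and exact equality is used here for brevity (the tolerance-based comparison discussed below may be substituted):

    /* Count distinct brightness levels among the four center pixels.
       Four distinct levels indicate video-source data; fewer levels
       indicate graphics-generated data. */
    int count_levels(int p12, int p13, int p22, int p23)
    {
        int v[4] = { p12, p13, p22, p23 };
        int n = 0;
        for (int i = 0; i < 4; i++) {
            int seen = 0;
            for (int j = 0; j < i; j++)
                if (v[j] == v[i])
                    seen = 1;           /* level already counted */
            if (!seen)
                n++;
        }
        return n;
    }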
For video-source data, the invention utilizes a directional interpolation technique, wherein intermediate pixels that are produced in the process are aligned perpendicularly to the direction of minimum brightness level change. The technique additionally makes use of polyphase finite impulse response (FIR) filtering in the production of the output target pixel from the intermediate pixels.
The method of the invention applies to both black and white (gray scale) images, as well as to color images. In the former case, it is the gray level of the pixel that is considered the brightness level. In the latter case, RGB or YUV representations are commonly employed, although other representations come within the scope of the invention. For the RGB representation, the individual color component intensity level (red, blue or green) may be used as the brightness level; for the YUV representation, the Y component value is used for the brightness level.
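A minimal sketch of the brightness-level selection for each representation follows; the structure layouts and function names are assumptions made for illustration:

    /* Hypothetical pixel types; the brightness level is the gray value,
       a chosen RGB component intensity, or the Y component, respectively. */
    struct rgb_pixel { unsigned char r, g, b; };
    struct yuv_pixel { unsigned char y, u, v; };

    int brightness_gray(unsigned char gray)   { return gray; }
    int brightness_rgb(struct rgb_pixel p)    { return p.g; }  /* e.g. the green component */
    int brightness_yuv(struct yuv_pixel p)    { return p.y; }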
In many of the actions performed in the method, comparisons are carried out to determine whether the brightness levels of two particular pixels are equal to each other, or to determine the number of distinct brightness levels exhibited by some number of pixels. Such comparisons may be carried out by first taking the difference of two brightness level values, and then determining whether the result is zero. Since brightness level values are subject to the influence of noise and other non-ideal factors, a range of result values that are close to zero, but not necessarily exactly equal to zero, may optionally be accepted as being functionally equivalent to zero in the procedure just outlined.
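Such a tolerance-based comparison might be sketched as follows; the threshold value is an assumption, to be tuned for the noise characteristics of the source:

    /* Treat two brightness levels as equal when the magnitude of their
       difference falls within a small noise tolerance. */
    #define LEVEL_TOLERANCE 4   /* assumed tuning value */

    int levels_equal(int a, int b)
    {
        int diff = a - b;
        if (diff < 0)
            diff = -diff;                   /* absolute difference */
        return diff <= LEVEL_TOLERANCE;     /* near-zero counts as zero */
    }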
As discussed in more detail below, the existence of certain patterns of brightness levels in the four center pixels is sufficient in and of itself to specify the computational procedure to be employed. In the general case, however, the patterns of brightness levels present in various of the input pixels surrounding the center region must additionally be taken into consideration to determine the computational procedure to be carried out.
The brightness levels of certain of the input pixels in as large an area as a 4×6 sub-image portion of the input pixel data set may need to be examined in order to perform the method of the invention. Such a sub-image has the target output pixel Pyx and the center four input pixels at its center, as shown in
The input pixel source data, of which the sub-image shown in
The input pixels of the sub-image region are identified as shown in
As an example of the latter case, assuming target output pixel Pyx was located such that input pixel P12 had absolute coordinates (0,0), then the first row of pixels in
P00=P01=P02=P10=P11=P12;
P20=P21=P22;
P30=P31=P32;
P03=P13;
P04=P14;
P05=P15
That is, the brightness level of pixel P12 of the input data would be the value utilized for the brightness levels of created pixels P00, P01, P02, P10 and P11; the brightness level of pixel P22 of the input data would be the value utilized for the brightness levels of created pixels P20 and P21; and so forth.
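One way to realize this edge replication is to clamp out-of-range coordinates when fetching input pixels; the following sketch assumes a row-major layout and a hypothetical accessor name. Clamping the row and column indices reproduces the replication equations above.

    /* Fetch an input pixel, replicating edge values for coordinates
       that fall outside the input image. */
    int get_pixel(const int *image, int width, int height, int row, int col)
    {
        if (row < 0)        row = 0;
        if (row >= height)  row = height - 1;
        if (col < 0)        col = 0;
        if (col >= width)   col = width - 1;
        return image[row * width + col];
    }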
With the above understood,
Also as discussed previously, instrumental to the invention is the determination of the number of brightness levels exhibited by the four input pixels P12, P13, P22 and P23 of the center region surrounding the target output pixel Pyx. In blocks 120, 140 and 150 of
In the latter case, a selection is made of one of: a Default Pixel Replication algorithm, a Horizontal Pixel Replication algorithm, a Vertical Pixel Replication algorithm, a Diagonal Pixel Replication algorithm, or an Anti-diagonal Pixel Replication algorithm. The actions carried out in the case of each of these algorithms are discussed below.
In the case of a single brightness level being present among all four center pixels (Y branch from 120; refer to
The case of two brightness levels being present among the four center pixels (Y branch from 140) can occur in two ways. The first (Y branch from 300) covers the cases where two of the center pixels exhibit a first brightness level, and the two other center pixels exhibit a second brightness level (a “2:2 combination”). The second (N branch from 300) covers the cases where just one of the center pixels exhibits a first brightness level, and the three other center pixels exhibit a second brightness level (a “1:3 combination”).
The cases of two brightness levels among the center pixels being in 2:2 combinations are illustrated in
The cases of two brightness levels among the center pixels being in 1:3 combinations are illustrated in
In the cases of the pixel patterns illustrated by
In the case of the pixel pattern illustrated by
In the case of the pixel pattern illustrated by
In the case of the pixel pattern illustrated by
In the case of all other 1:3 combination situations not covered by
The case of three brightness levels being present among the four center pixels (Y branch from 150) implies that two of the pixels have a first brightness level, the third pixel has a second brightness level, and the fourth pixel has a third brightness level. There are four basic ways in which the possible pixel patterns can occur: the two same-level pixels can be aligned either horizontally, vertically, diagonally or anti-diagonally. Refer to
For all other cases of three brightness levels being present among the four center pixels that are not addressed by the cases covered by
The case of four brightness levels being present among the four center pixels (N branch from 150) implies that each of the pixels has a distinct brightness level. In this case, the Directional Interpolation algorithm (160) is employed to produce the target output pixel, as already discussed.
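The overall selection logic may therefore be sketched as follows; the procedure names are hypothetical stand-ins for the algorithms specified below, and the pattern testing that chooses among the five replication algorithms is assumed to occur inside pick_replication:

    /* Hypothetical signatures for the two processing paths. */
    int directional_interpolation(const int sub[4][6], double x, double y);
    int pick_replication(const int sub[4][6], int n_levels, double x, double y);
    int count_levels(int a, int b, int c, int d);   /* as sketched earlier */

    /* Dispatch on the number of distinct brightness levels among the
       four center pixels P12, P13, P22 and P23 of the 4x6 sub-image. */
    int scale_pixel(const int sub[4][6], double x, double y)
    {
        int n = count_levels(sub[1][2], sub[1][3], sub[2][2], sub[2][3]);
        if (n == 4)
            return directional_interpolation(sub, x, y);  /* video source */
        return pick_replication(sub, n, x, y);            /* graphics source */
    }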
Example embodiments of each of the algorithms referred to above are now specified in more detail.
The Directional Interpolation algorithm consists of seven actions (blocks 160, 170, 180, 190, 200, 210 and 220 of
In block 160, a weighted sum of the absolute values of the differences of the brightness levels of various pairs of pixels in the sub-image is computed to determine the brightness gradient for each of several directions. In an embodiment of the invention, brightness gradients are computed for four directions: 0 degrees, 45 degrees, 90 degrees and 135 degrees relative to the horizontal (X) axis. In another embodiment of the invention, brightness gradients are computed for four additional directions: 26.6 degrees, 63.4 degrees, 116.6 degrees and 153.4 degrees.
In block 170, a determination is made of which of the brightness gradients has the minimum relative value.
In block 180, four line segments are determined, oriented in the XX direction and passing through the four center pixels as well as other pixels of the sub-image, depending upon which direction has the minimum gradient. In
In block 190, the direction perpendicular to XX is identified; in
In block 200, four intermediate pixels are determined, each located at the intersection of the XX-directed lines “0”, “1”, “2” and “3”, and the YY-directed line passing through Pyx. In
In block 210, the brightness level of each of the intermediate pixels is computed using linear interpolation, based on the brightness levels of the two nearest pixels of the sub-image through which the associated XX-directed line segment passes. In
In block 220, the brightness level of the target output pixel Pyx is determined from the brightness levels of the four intermediate pixels through use of polyphase FIR filtering. A person of reasonable skill in the art should recognize that other filtering techniques might be employed in alternative embodiments of the invention.
In more detail regarding block 160, the horizontal (0 degree) gradient weighted sum is computed as follows (refer to
Sh=w0|P11−P12|+w1|P12−P13|+w2|P13−P14|+w3|P21−P22|+w4|P22−P23|+w5|P23−P24|
The vertical (90 degree) gradient weighted sum is computed as follows:
Sv=w0|P02−P12|+w1|P12−P22|+w2|P22−P32|+w3|P03−P13|+w4|P13−P23|+w5|P23−P33|
The diagonal (135 degree) gradient weighted sum is computed as follows:
Sd=u0|P01−P12|+u1|P12−P23|+u2|P23−P34|+u3|P11−P22|+u4|P22−P33|+u5|P02−P13|+u6|P13−P24|
The anti-diagonal (45 degree) gradient weighted sum is computed as follows:
Sa=u0|P04−P13|+u1|P13−P22|+u2|P22−P31|+u3|P03−P12|+u4|P12−P21|+u5|P14−P23|+u6|P23−P32|
In the above, the weights wi and ui are a function of the position of Pyx, the target output pixel.
The fifth (153.4 degree, “diagonal-1”) gradient weighted sum is computed as follows:
S1d=t0|P02−P14|+t1|P01−P13|+t2|P13−P25|+t3|P12−P24|+t4|P11−P23|+t5|P10−P22|+t6|P22−P34|+t7|P21−P33|
The sixth (26.6 degree, “anti-diagonal-1”) gradient weighted sum is computed as follows:
S1a=t0|P03−P11|+t1|P04−P12|+t2|P12−P20|+t3|P13−P21|+t4|P14−P22|+t5|P15−P23|+t6|P23−P31|+t7|P24−P33|
The seventh (116.6 degree, “diagonal-2”) gradient weighted sum is computed as follows:
S2d=v0|P00−P21|+v1|P11−P32|+v2|P01−P22|+v3|P12−P33|+v4|P02−P23|+v5|P13−P34|+v6|P04−P25|+v7|P14−P35|
The eighth (63.4 degree, “anti-diagonal-2”) gradient weighted sum is computed as follows:
S2a=v0|P05−P24|+v1|P14−P33|+v2|P04−P23|+v3|P13−P32|+v4|P03−P22|+v5|P12−P31|+v6|P02−P21|+v7|P11−P30|
In the above, the weights vi and ti are a function of the position of Pyx, the target output pixel.
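As a sketch, the horizontal and vertical sums above may be computed as follows; unit weights are assumed here for brevity, whereas the weights are position-dependent as just noted, and the remaining six directions follow the same pattern:

    #include <stdlib.h>

    static int absdiff(int a, int b) { return abs(a - b); }

    /* Horizontal (0 degree) and vertical (90 degree) gradient weighted
       sums over the 4x6 sub-image p[row][col], with all weights taken
       as 1. */
    void gradient_sums(const int p[4][6], int *sh, int *sv)
    {
        *sh = absdiff(p[1][1], p[1][2]) + absdiff(p[1][2], p[1][3])
            + absdiff(p[1][3], p[1][4]) + absdiff(p[2][1], p[2][2])
            + absdiff(p[2][2], p[2][3]) + absdiff(p[2][3], p[2][4]);
        *sv = absdiff(p[0][2], p[1][2]) + absdiff(p[1][2], p[2][2])
            + absdiff(p[2][2], p[3][2]) + absdiff(p[0][3], p[1][3])
            + absdiff(p[1][3], p[2][3]) + absdiff(p[2][3], p[3][3]);
    }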
In more detail regarding block 220, a FIR polyphase filter having the number of phases (denoted by N) equal to 64 is used in a preferred embodiment. In utilizing such a filter, the phase of the variable of interest must be determined. By definition, the phase of y is given as the integer part of (N*y); that is,
phase(y)=int(N*y)
where the variable of interest (y) is constrained to have a value greater than or equal to 0, and less than 1. The phase of y is therefore an integer in the range from 0 to (N−1).
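In C, the phase computation is a single truncation, sketched below for N=64:

    #define N_PHASES 64

    /* Phase of a coordinate y constrained to [0, 1): the integer part
       of N*y, an integer in the range 0 to N-1. */
    int phase(double y)
    {
        return (int)(N_PHASES * y);
    }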
Filtering the brightness levels of four intermediate pixels to produce the brightness level for target output pixel Pyx implies the use of four sets of coefficients for the polyphase filter, the values of each set being a function of the phase of the variable of interest. Table 1 lists the coefficients used in a preferred embodiment of the polyphase filter.
In more detail regarding the other actions performed in an embodiment of the Directional Interpolation algorithm, once a determination has been made regarding which of the four (or possibly eight) directions has the minimum brightness gradient, an appropriate computational procedure is carried out embodying blocks 180 through 220, as explained in detail below.
In the following, note that coef0, coef1, coef2, and coef3 are the coefficients used in the implementation of the polyphase filter and are functions of the phase of the identified variable of interest (refer to Table 1). Additionally, reference is made to the mathematical function fract, which represents taking the fractional portion of its argument.
When the minimum brightness gradient is in the horizontal (0 degree) direction:
x0=(P02+(x*(P03−P02)));
x1=(P12+(x*(P13−P12)));
x2=(P22+(x*(P23−P22)));
x3=(P32+(x*(P33−P32)));
Pyx=((coef0(phase(y))*x0)+(coef1(phase(y))*x1)+(coef2(phase(y))*x2)+(coef3(phase(y))*x3))
When the minimum brightness gradient is in the vertical (90 degree) direction:
x0=(P11+(y*(P21−P11)));
x1=(P12+(y*(P22−P12)));
x2=(P13+(y*(P23−P13)));
x3=(P14+(y*(P24−P14)));
Pyx=((coef0(phase(x))*x0)+(coef1(phase(x))*x1)+(coef2(phase(x))*x2)+(coef3(phase(x))*x3))
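The two cases above translate nearly directly into C; the following sketch covers the horizontal (0 degree) case, with coef[][] standing in for the Table 1 coefficients, which are not reproduced here:

    extern const double coef[64][4];   /* assumed polyphase coefficient table */

    /* Directional interpolation for the horizontal minimum-gradient
       case: linear interpolation between columns 2 and 3 on each of
       the four rows at fractional offset x, followed by 4-tap
       polyphase filtering across the rows using the phase of y. */
    double interp_horizontal(const int p[4][6], double x, double y)
    {
        double t[4];
        for (int r = 0; r < 4; r++)
            t[r] = p[r][2] + x * (p[r][3] - p[r][2]);   /* x0..x3 */

        int ph = (int)(64 * y);                         /* phase(y) */
        return coef[ph][0] * t[0] + coef[ph][1] * t[1]
             + coef[ph][2] * t[2] + coef[ph][3] * t[3];
    }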
When the minimum brightness gradient is in the diagonal (135 degree) direction:
x1=(P12+((x+y)*((P23−P12)/2)));
if x≦y,
x3=(P03+((x+y)*(P14−P03)/2));
if (x+y)<1,
x0=(P22+((1−x−y)*(P11−P22)/2));
x2=(P31+((1−x−y)*(P20−P31)/2));
else
x0=(P22+((x+y−1)*(P33−P22)/2));
x2=(P31+((x+y−1)*(P42−P31)/2));
Pyx=((coef0(phase(x−y))*x0)+(coef1(phase(x−y))*x1)+(coef2(phase(x−y))*x2)+(coef3(phase(x−y))*x3));
else
x3=(P21+((x+y)*(P32−P21)/2));
if (x+y)<1,
x0=(P13+((1−x−y)*(P02−P13)/2));
x2=(P22+((1−x−y)*(P11−P22)/2));
else
x0=(P13+((x+y−1)*(P24−P13)/2));
x2=(P22+((x+y−1)*(P33−P22)/2));
Pyx=((coef0(phase(y−x))*x0)+(coef1(phase(y−x))*x1)+(coef2(phase(y−x))*x2)+(coef3(phase(y−x))*x3))
When the minimum brightness gradient is in the anti-diagonal (45 degree) direction:
if (x+y)<1,
x0=(P02+((1−x+y)*(P11−P02)/2));
x2=(P13+((1−x+y)*(P22−P13)/2));
if(x>y),
x1=(P12+((x−y)*(P03−P12)/2));
x3=(P03+((x−y)*(P14−P03)/2));
else
x1=(P12+((y−x)*(P21−P12)/2));
x3=(P03+((y−x)*(P14−P03)/2));
Pyx=((coef0(phase(x+y))*x0)+(coef1(phase(x+y))*x1)+(coef2(phase(x+y))*x2)+(coef3(phase(x+y))*x3));
else
x1=(P13+((1−x+y)*(P22−P13)/2));
x3=(P24+((1−x+y)*(P33−P24)/2));
if(x>y),
x0=(P12+((x−y)*(P03−P12)/2));
x2=(P03+((x−y)*(P14−P03)/2));
else
x0=(P12+((y−x)*(P21−P12)/2));
x2=(P03+((y−x)*(P14−P03)/2));
Pyx=((coef0(phase(x+y−1))*x0)+(coef1(phase(x+y−1))*x1)+(coef2(phase(x+y−1))*x2)+(coef3(phase(x+y−1))*x3))
When the minimum brightness gradient is in the diagonal-1 (153.4 degree) direction:
a=fract(1−x+(2*y));
x1=(P12+((y+(2*x))*(P24−P12)/5));
x2=(P23+((3−y−(2*x))*(P11−P23)/5));
if((2*y)−x)<1,
if ((2*x)+y)<2,
x0=(P13+((2−(2*x)−y)*(P01−P13)/5));
else
x0=(P13+((2−(2*x)−y)*(P25−P13)/5));
if x>(2*y),
x3=(P14+((4−(2*x)−y)*(P02−P14)/5));
Pyx=((coef0(phase(a))*x3)+(coef1(phase(a))*x0)+(coef2(phase(a))*x1)+(coef3(phase(a))*x2));
else
if((2*x)+y)≧1,
x3=(P22+(((2*x)+y−1)*(P34−P22)/5));
else
x3=(P22+((1−(2*x)−y)*(P10−P22)/5));
Pyx=((coef0(phase(a))*x0)+(coef1(phase(a))*x1)+(coef2(phase(a))*x2)+(coef3(phase(a))*x3));
else
x0=(P21+((1+(2*x)+y)*(P33−P21)/5));
if (y+(2*x))>1,
x3=(P22+((y+(2*x)−1)*(P34−P22)/5));
else
x3=(P22+((1−(2*x)−y)*(P10−P22)/5));
Pyx=((coef0(phase(a))*x1)+(coef1(phase(a))*x2)+(coef2(phase(a))*x3)+(coef3(phase(a))*x0))
When the minimum brightness gradient is in the anti-diagonal-1 (26.6 degree) direction:
a=fract(x+(2*y));
x1=(P13+((2+y−(2*x))*(P21−P13)/5));
x2=(P22+((1−y+(2*x))*(P14−P22)/5));
if((2*y)+x)<2,
if y<(2*x),
x0=(P12+(((2*x)−y)*(P04−P12)/5));
else
x0=(P12+(((2*x)−y)*(P20−P12)/5));
if(x+(2*y))<1,
x3=(P11+((2+(2*x)−y)*(P03−P11)/5));
Pyx=((coef0(phase(a))*x3)+(coef1(phase(a))*x0)+(coef2(phase(a))*x1)+(coef3(phase(a))*x2));
else
if(y+1)<(2*x),
x3=(P23+(((2*x)−y)*(P31−P23)/5));
else
x3=(P23+(((2*x)−y)*(P15−P23)/5));
Pyx=((coef0(phase(a))*x0)+(coef1(phase(a))*x1)+(coef2(phase(a))*x2)+(coef3(phase(a))*x3))
else
x0=(P24+((3−(2*x)+y)*(P32−P24)/5));
if (y+1)<(2*x),
x3=(P23+(((2*x)−y)*(P31−P23)/5));
else
x3=(P23+(((2*x)−y)*(P15−P23)/5));
Pyx=((coef0(phase(a))*x1)+(coef1(phase(a))*x2)+(coef2(phase(a))*x3)+(coef3(phase(a))*x0))
When the minimum brightness gradient is in the diagonal-2 (116.6 degree) direction:
a=fract((1−y)+(2*x));
x1=(P12+((x+(2*y))*(P33−P12)/5));
x2=(P23+((3−x−(2*y))*(P02−P23)/5));
if((2*x)−y)<1,
x0=(P22+((2−(2*y)−x)*(P01−P22)/5));
if y>(2*x),
x3=(P11+((4−x−(2*y))*(P32−P11)/5));
Pyx=((coef0(phase(a))*x3)+(coef1(phase(a))*x0)+(coef2(phase(a))*x1)+(coef3(phase(a))*x2));
else
x3=(P13+(((2*y)+x−1)*(P34−P13)/5));
Pyx=((coef0(phase(a))*x0)+(coef1(phase(a))*x1)+(coef2(phase(a))*x2)+(coef3(phase(a))*x3));
else
x0=(P03+((1+x+(2*y))*(P24−P03)/5));
x3=(P13+(((2*y)+x−1)*(P34−P13)/5));
Pyx=((coef0(phase(a))*x1)+(coef1(phase(a))*x2)+(coef2(phase(a))*x3)+(coef3(phase(a))*x0))
When the minimum brightness gradient is in the anti-diagonal-2 (63.4 degree) direction:
a=fract(y+(2*x));
x1=(P22+((2+x−(2*y))*(P03−P22)/5));
x2=(P13+((1−x+(2*y))*(P32−P13)/5));
if(y+(2*x))<2,
x0=(P12+(((2*y)−x)*(P31−P12)/5));
if ((2*x)+y)<1,
x3=(P02+((2+(2*y)−x)*(P21−P02)/5));
Pyx=((coef0(phase(a))*x3)+(coef1(phase(a))*x0)+(coef2(phase(a))*x1)+(coef3(phase(a))*x2));
else
x3=(P23+((1−(2*y)+x)*(P04−P23)/5));
Pyx=((coef0(phase(a))*x0)+(coef1(phase(a))*x1)+(coef2(phase(a))*x2)+(coef3(phase(a))*x3));
else
x0=(P33+((3+x−(2*y))*(P14−P33)/5));
x3=(P23+((1+x−(2*y))*(P04−P23)/5));
Pyx=((coef0(phase(a))*x1)+(coef1(phase(a))*x2)+(coef2(phase(a))*x3)+(coef3(phase(a))*x0))
In the following description of an embodiment of the Default Pixel Replication algorithm, as well as in the later descriptions of the other Pixel Replication algorithms, inHeight and outHeight represent the vertical extent of the input and output images respectively; inWidth and outWidth represent the horizontal extent of the input and output images respectively.
hLambda=(inWidth/outWidth);
vLambda=(inHeight/outHeight);
If y<=((1−vLambda)/2),
If x<((1−hLambda)/2),Pyx=P12;
Else if x>((1+hLambda)/2),Pyx=P13;
Else Pyx=P12+((P13−P12)*((x−((1−hLambda)/2))/hLambda));
Else if y>=((1+vLambda)/2),
If x<((1−hLambda)/2),Pyx=P22;
Else if x>((1+hLambda)/2),Pyx=P23;
Else Pyx=P22+((P23−P22)*((x−((1−hLambda)/2))/hLambda));
Else
If x<((1−hLambda)/2),
Pyx=P12+((P22−P12)*((y−((1−vLambda)/2))/vLambda));
Else if x>((1+hLambda)/2),
Pyx=P13+((P23−P13)*((y−((1−vLambda)/2))/vLambda));
Else
Px1=P12+((P13−P12)*((x−((1−hLambda)/2))/hLambda));
Px2=P22+((P23−P22)*((x−((1−hLambda)/2))/hLambda));
Pyx=Px1+((Px2−Px1)*((y−((1−vLambda)/2))/vLambda))
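A C transcription of the Default Pixel Replication pseudocode above is sketched below; the function signature and the use of double-precision arithmetic are assumptions:

    /* Default Pixel Replication: replicate the nearest center pixel in
       the flat regions and interpolate linearly across the transition
       bands of width hLambda and vLambda. p[row][col] is the 4x6
       sub-image. */
    double default_replication(const int p[4][6], double x, double y,
                               double hLambda, double vLambda)
    {
        double xlo = (1 - hLambda) / 2, xhi = (1 + hLambda) / 2;
        double ylo = (1 - vLambda) / 2, yhi = (1 + vLambda) / 2;

        if (y <= ylo) {                              /* top band */
            if (x < xlo) return p[1][2];
            if (x > xhi) return p[1][3];
            return p[1][2] + (p[1][3] - p[1][2]) * (x - xlo) / hLambda;
        }
        if (y >= yhi) {                              /* bottom band */
            if (x < xlo) return p[2][2];
            if (x > xhi) return p[2][3];
            return p[2][2] + (p[2][3] - p[2][2]) * (x - xlo) / hLambda;
        }
        if (x < xlo)                                 /* left band */
            return p[1][2] + (p[2][2] - p[1][2]) * (y - ylo) / vLambda;
        if (x > xhi)                                 /* right band */
            return p[1][3] + (p[2][3] - p[1][3]) * (y - ylo) / vLambda;

        /* interior: blend of the four center pixels */
        double px1 = p[1][2] + (p[1][3] - p[1][2]) * (x - xlo) / hLambda;
        double px2 = p[2][2] + (p[2][3] - p[2][2]) * (x - xlo) / hLambda;
        return px1 + (px2 - px1) * (y - ylo) / vLambda;
    }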
In an embodiment of the Horizontal Pixel Replication algorithm:
If y≦((1−vLambda)/2),
If x<0.5,Pyx=P12;
Else Pyx=P13;
Else if y≧((1+vLambda)/2),
If x<0.5,Pyx=P22;
Else Pyx=P23;
Else
If x<0.5,
Pyx=P12+((P22−P12)*((y−((1−vLambda)/2))/vLambda));
Else Pyx=P13+((P23−P13)*((y−((1−vLambda)/2))/vLambda))
In an embodiment of the Vertical Pixel Replication algorithm:
If x≦((1−hLambda)/2),
If y<0.5,Pyx=P12;
Else Pyx=P22;
Else if x≧((1+hLambda)/2),
If y<0.5,Pyx=P13;
Else Pyx=P23;
Else
If y<0.5,
Pyx=P12+((P13−P12)*((x−((1−hLambda)/2))/hLambda));
Else Pyx=P22+((P23−P22)*((x−((1−hLambda)/2))/hLambda))
In an embodiment of the Diagonal Pixel Replication algorithm:
If(1−x+y)≦((1−Lambda)/2),Pyx=P13;
Else if(1−x+y)<((1+Lambda)/2),
If(x+y)<1,
Pyx=P13+((P12−P13)*((1−x+y−((1−Lambda)/2))/Lambda));
Else
Pyx=P13+((P23−P13)*((1−x+y−((1−Lambda)/2))/Lambda));
Else if(1−x+y)≦((3−Lambda)/2),
If(x+y)<1,Pyx=P12;
Else Pyx=P23;
Else if(1−x+y)<((3+Lambda)/2),
If(x+y)<1,
Pyx=P12+((P22−P12)*((1−x+y−((3−Lambda)/2))/Lambda));
Else
Pyx=P23+((P22−P23)*((1−x+y−((3−Lambda)/2))/Lambda));
Else Pyx=P22
In an embodiment of the Anti-diagonal Pixel Replication algorithm:
If(x+y)≦((1−Lambda)/2),Pyx=P12;
Else if(x+y)<((1+Lambda)/2),
If(x>y),
Pyx=P12+((P13−P12)*((x+y−((1−Lambda)/2))/Lambda));
Else
Pyx=P12+((P22−P12)*((x+y−((1−Lambda)/2))/Lambda));
Else if(x+y)<((3−Lambda)/2),
If x>y,Pyx=P13;
Else Pyx=P22;
Else if (x+y)<((3+Lambda)/2),
If x>y,
Pyx=P22+((P23−P22)*((x+y−((3−Lambda)/2))/Lambda));
Else Pyx=P13+((P23−P13)*((x+y−((3−Lambda)/2))/Lambda));
Else Pyx=P23
While embodiments of the invention have been shown and described, it will be apparent to those skilled in the art that many other changes and modifications may be made without departing from the invention in its broader aspects. It is therefore intended that the appended claims cover all such changes and modifications as come within the true scope and spirit of the invention.
Number | Name | Date | Kind |
---|---|---|---|
4680720 | Yoshii et al. | Jul 1987 | A |
5019903 | Dougall et al. | May 1991 | A |
5054100 | Tai | Oct 1991 | A |
5257326 | Ozawa et al. | Oct 1993 | A |
5296941 | Izawa et al. | Mar 1994 | A |
5511137 | Okada | Apr 1996 | A |
5513281 | Yamashita et al. | Apr 1996 | A |
5526020 | Campanelli et al. | Jun 1996 | A |
5579053 | Pandel | Nov 1996 | A |
5703968 | Kuwahara et al. | Dec 1997 | A |
5832143 | Suga et al. | Nov 1998 | A |
5852470 | Kondo et al. | Dec 1998 | A |
5917963 | Miyake | Jun 1999 | A |
5946044 | Kondo et al. | Aug 1999 | A |
5966183 | Kondo et al. | Oct 1999 | A |
5991463 | Greggain et al. | Nov 1999 | A |
6005989 | Frederic | Dec 1999 | A |
6009213 | Miyake | Dec 1999 | A |
6157749 | Miyake | Dec 2000 | A |
6219464 | Greggain et al. | Apr 2001 | B1 |
6266454 | Kondo | Jul 2001 | B1 |
6324309 | Tokuyama et al. | Nov 2001 | B1 |
6408109 | Silver et al. | Jun 2002 | B1 |
6463178 | Kondo et al. | Oct 2002 | B1 |
6690842 | Silver et al. | Feb 2004 | B1 |
20010035969 | Kishimoto | Nov 2001 | A1 |
20020076121 | Shimizu et al. | Jun 2002 | A1 |
20030007702 | Aoyama et al. | Jan 2003 | A1 |