In the figures, which illustrate embodiments of the present invention by way of example only,
Remaining pixels 24 of
In contrast,
Another conventional method called bilinear interpolation is illustrated with reference to
In general, input pixels nearest to (x*,y*) are typically chosen for interpolation to yield the best results, although any four pixels at positions (x1,y1), (x2,y1), (x1,y2) and (x2,y2) which satisfy the inequalities x1≦x*≦x2 and y1≦y*≦y2 may be used.
First the pixel values P(x*,y2) and P(x*,y1) are interpolated. P(x*,y2) is horizontally interpolated from the known values P(x1,y2) and P(x2,y2), and P(x*,y1) from P(x1,y1) and P(x2,y1). Thus, defining α=(x2−x*)/(x2−x1)
(α=½ in
P(x*,y2)=αP(x1,y2)+(1−α)P(x2,y2) (1a)
P(x*,y1)=αP(x1,y1)+(1−α)P(x2,y1) (1b)
Similarly, defining β=(y2−y*)/(y2−y1),
P(x*,y*) is then calculated as:
P(x*,y*)=βP(x*,y1)+(1−β)P(x*,y2) (1c)
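By way of illustration only, equations (1a)-(1c) may be expressed in software as follows. This is a minimal Python sketch; the function name and the mapping P from coordinates to pixel values are illustrative assumptions only and form no part of any described embodiment.

    def bilinear(P, x1, x2, y1, y2, x_star, y_star):
        # Interpolation weights; alpha = beta = 1/2 when (x_star, y_star)
        # is the midpoint of the four surrounding pixels.
        alpha = (x2 - x_star) / (x2 - x1)
        beta = (y2 - y_star) / (y2 - y1)
        # Horizontal interpolation along each row, per (1a) and (1b).
        P_y2 = alpha * P[(x1, y2)] + (1 - alpha) * P[(x2, y2)]
        P_y1 = alpha * P[(x1, y1)] + (1 - alpha) * P[(x2, y1)]
        # Vertical interpolation between the two rows, per (1c).
        return beta * P_y1 + (1 - beta) * P_y2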
As mentioned, for the case of doubling an image as shown in
Although bilinear interpolation may lead to better results than nearest neighbor methods, jagged edges can still be observed in diagonal lines in linearly interpolated up-scaled images.
Up-scaling and interpolation methods exemplary of embodiments of the present invention reduce jagged edges and produce more visually appealing, smooth diagonal edges. As such, an exemplary method involves at least two interpolation stages and is generally summarized with the aid of blocks S100 shown in
Steps involved in the formation of intermediate image I in block S102 of
As mentioned, if the number of columns in the input image is Mx, the number of columns MI in the corresponding intermediate image is chosen as 2Mx−1. Similarly, if the number of rows in the input image is Nx, the number of rows NI in the corresponding intermediate image is chosen as 2Nx−1. In block S204, pixels from input image X are mapped to their respective coordinates in the intermediate image I to form image 44. Intermediate pixel I(2i,2j) is set to the value of input pixel X(i,j) where 0≦i<Nx and 0≦j<Mx. The value of pixel 50 of
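The mapping of block S204 may be sketched in software as follows; a minimal Python sketch, assuming the input image X is held as a NumPy array (the function name is illustrative only).

    import numpy as np

    def map_input_to_intermediate(X):
        # X has Nx rows and Mx columns; intermediate image I has
        # 2*Nx - 1 rows and 2*Mx - 1 columns.
        Nx, Mx = X.shape
        I = np.full((2 * Nx - 1, 2 * Mx - 1), np.nan)  # empty buffer locations
        I[0::2, 0::2] = X                              # I(2i, 2j) = X(i, j)
        return I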
As will be appreciated, different intermediate image sizes (other than images that are horizontally and vertically scaled by a factor of about 2) may be used. Similarly, other mappings of pixels in the input image to the intermediate image will be apparent to one of ordinary skill.
In one exemplary embodiment, the interpolation in intermediate image buffer 44 may proceed by first selecting an empty buffer location to fill, in block S206. As will become apparent, a sub-image of the intermediate image, containing a two-dimensional array of input pixels of a predetermined size, may be formed around a pixel that is to be formed using edge-directed interpolation. The window may thus be considered a sub-image of the input image. A pixel in the intermediate image that has enough mapped input pixels (i.e., pixels mapped from the input image in block S204) around it to form a window or sub-image of the predetermined size may be classified as an interior pixel. As can be appreciated, pixels at or near the top row, leftmost column, rightmost column and bottom row may not allow a window to be formed around them. Depending on the window size, pixels proximate the boundary of the image, such as those on the second leftmost or second rightmost column, may also not allow a window of mapped input pixels to be formed around them. Pixel coordinates that are near the boundary and do not allow a window of mapped input pixels of a predetermined size to be formed around them may be classified as boundary pixels.
Specifically, a window size of 8×12 may be used in an exemplary embodiment. As depicted in
For such a window size, pixels at I(x,y) that satisfy all four conditions x−5≧0, x+6<NI, y−4≧0 and y+3<MI are interior pixels. Conversely, pixels for which these four conditions are not all satisfied may be classified as boundary pixels. Thus, in block S208, an empty buffer location I(x,y) (representing a pixel to be formed by interpolation) is classified as an interior pixel or a boundary pixel as noted just above.
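In software, the classification of block S208 may be expressed as a simple predicate; a minimal Python sketch of the four conditions above (the function name is illustrative only).

    def is_interior(x, y, N_I, M_I):
        # True if a full window of mapped input pixels can be formed
        # around intermediate buffer location I(x, y).
        return x - 5 >= 0 and x + 6 < N_I and y - 4 >= 0 and y + 3 < M_I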
In block S210, if the selected buffer location is for an interior pixel, then in block S214 the pixel value is formed from mapped input pixels 50′ using a local edge-directed interpolation. On the other hand, if the selected buffer location is for a boundary pixel, its value may be determined using conventional interpolation methods, such as bilinear interpolation, either horizontally or vertically in block S212. Of course, the majority of pixels in typical applications will be interior pixels. If more empty image buffer locations that need to be determined exist (S216), then the process may be repeated for the next empty buffer location at block S206. Blocks S200 may be implemented in software. Software containing processor-executable instructions, executing on a computing machine with a processor and memory, may be used to carry out blocks S200. Alternately, hardware embodiments may be used. As will become apparent, intermediate image I need not be fully constructed as described. Accordingly, in hardware embodiments, only a few intermediate pixels near an output pixel location may be interpolated.
Local edge-directed interpolation in block S214 is further illustrated using blocks S300 of
In the depicted embodiment, each window corresponding to a pixel to be formed is unique. In addition, windows corresponding to adjacent pixel positions to be formed are of fixed size and thus overlap. However, in alternate embodiments, the intermediate image may be subdivided into disjoint sub-images or windows, and an interpolated pixel may be formed using the particular sub-image to which it belongs. Thus the sub-image used to interpolate a pixel may not be unique to that pixel. In other embodiments, the sizes of different sub-images may also be different. Many other variations on how sub-images are formed for interpolation will be apparent to those of ordinary skill.
A 4×6 kernel K corresponding to a window (e.g. window 72) may be written as the kernel matrix K=(kij), where 1≦i≦4 and 1≦j≦6.
Now, each intermediate pixel to be formed (or empty image buffer location) in
Pixel 78 (
Pixel 74 (
Pixel 70 (
Once the kernel is determined in block S302, its local edge-orientation is determined in block S304. The local edge-orientation for pixel 70 may be one of the lines shown in
The local edge-orientation is determined using kernel K in blocks S400 depicted in
The method starts with a pixel to be formed 70 (or its corresponding empty image buffer location) and its associated kernel K of suitable size comprising input pixels.
In block S402 the gradients of the kernel are computed. To determine the gradients, a pair of filters Hx, Hy is used to spatially filter (convolve with) kernel K. Exemplary filters Hx and Hy are 2×2 matrices; consistent with the element-wise sums set out below, Hx has rows [1 1] and [−1 −1], and Hy may be taken as its transpose, with rows [1 −1] and [1 −1].
The convolution yields two matrices Ix and Iy corresponding to Hx and Hy respectively. In other words,
Ix=K*Hx
Iy=K*Hy
where * denotes the convolution operator.
During the convolution of K and Hx, the elements kij of kernel K are multiplied with the elements of filter Hx and summed to produce the elements Ixij of Ix. Digital convolutions of this type are well known in the art.
Specifically, the top left element Ix11 of Ix is computed using elements of Hx and K as
Ix11=Hx11k11+Hx21k21+Hx12k12+Hx22k22=k11−k21+k12−k22. The second element Ix12 on the top row is similarly computed after first shifting the filter Hx to the right by one column, so that the elements of Hx are superimposed with k12, k13, k22 and k23. The calculation of Ix12 thus proceeds as Ix12=Hx11k12+Hx21k22+Hx12k13+Hx22k23=k12−k22+k13−k23, and so on. After the first row of Ix is computed, Hx is shifted down and back to the left, so that its elements align with k21, k22, k31 and k32, and the process continues for the second row.
The computation of Iy proceeds in an identical manner but using Hy instead. The convolution is carried out without zero padding K at the edges. Therefore the resulting sizes of Ix and Iy are 3×5 for a kernel size of 4×6.
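The convolutions of block S402 may be sketched as follows, using SciPy's correlate2d so that the filters are superimposed on the kernel exactly as in the element-wise sums above (i.e., without the flipping of a strict convolution). The values of Hx follow from those sums; taking Hy as the transpose of Hx is an assumption, consistent with the identical computation of Iy.

    import numpy as np
    from scipy.signal import correlate2d

    Hx = np.array([[1, 1],
                   [-1, -1]])
    Hy = Hx.T  # assumed: [[1, -1], [1, -1]]

    def gradients(K):
        # 'valid' mode performs no zero padding, so a 4x6 kernel K
        # yields 3x5 gradient matrices Ix and Iy.
        Ix = correlate2d(K, Hx, mode='valid')
        Iy = correlate2d(K, Hy, mode='valid')
        return Ix, Iy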
In block S404, matrix products of the gradient matrices are computed. Three matrix products Ixx, Iyy and Ixy are computed. Given matrix Ix with elements Ixij (where 1≦i≦3 and 1≦j≦5), elements Ixxij (1≦i≦3 and 1≦j≦5) of Ixx are computed as Ixxij=(Ixij)2. Similarly given matrix Iy with elements Iyij (1≦i≦3 and 1≦j≦5), the elements Iyyij (1≦i≦3 and 1≦j≦5) of matrix Iyy are computed as Iyyij=(Iyij)2. Finally matrix Ixy with elements Ixyij (1≦i≦3, 1≦j≦5) is computed by multiplying corresponding elements Ix and Iy so that the elements of the resulting matrix Ixy are computed as Ixyij=(Ixij)(Iyij). Each of the calculated matrices Ixx, Iyy, Ixy is a 3×5 matrix.
Matrices Ixx, Iyy and Ixy, which are all 3×5, are then low-pass filtered in block S406 using a low-pass filter HLP (a 3×5 matrix with all of its elements set to 1).
An arbitrary smoothing filter may be used as HLP; any matrix with all of its elements positive-valued can serve as a suitable low-pass filter. Using the depicted low-pass filter HLP, filtering simply sums together all elements of the input matrix (i.e., Ixx, Iyy or Ixy).
The filtering operations yield three scalar values Gxx, Gyy and Gxy, which represent the sums (or averages) of the squared gradients Ixxij, Iyyij, Ixyij in matrices Ixx, Iyy and Ixy respectively. One skilled in the art would appreciate that the values Gxx, Gyy and Gxy are elements of the averaged gradient square tensor G, a 2×2 symmetric matrix with Gxx and Gyy on its diagonal and Gxy in each off-diagonal position.
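Blocks S404 and S406 may be sketched as follows (Python, continuing the sketch above); with the depicted all-ones low-pass filter HLP, the filtering reduces to a summation over each matrix.

    def tensor_elements(Ix, Iy):
        # Element-wise products of the gradient matrices (block S404).
        Ixx = Ix * Ix
        Iyy = Iy * Iy
        Ixy = Ix * Iy
        # Low-pass filtering with the all-ones 3x5 filter HLP (block S406)
        # simply sums all elements of each matrix.
        return Ixx.sum(), Iyy.sum(), Ixy.sum()   # Gxx, Gyy, Gxy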
The gradient square tensor (also known as the approximate Hessian matrix or the real part of the boundary square matrix) uses squared (quadratic) gradient terms which do not cancel when averaged. It is thus possible to average or sum gradients of opposite direction which have the same orientation, so that the gradients reinforce each other rather than canceling out. Gradient square tensors are discussed in, for example, Lucas J. van Vliet and Piet W. Verbeek, "Estimators for Orientation and Anisotropy in Digitized Images", Proceedings of the First Annual Conference of the Advanced School for Computing and Imaging, Heijen, Netherlands, May 16-18, 1995, pp. 442-450, and also in Bakker, P., van Vliet, L. J., Verbeek, P. W., "Edge preserving orientation adaptive filtering", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, p. 540, 1999, the contents of which are both hereby incorporated by reference.
In block S408, the angle of orientation φ is determined. Specifically, the eigenvalues λ1 and λ2 of the smoothed gradient square tensor G are computed as
λ1=½(P+R) (2a)
λ2=½(P−R) (2b)
where P, Q and R are related as
P=Gxx+Gyy (2c)
Q=Gxx−Gyy (2d)
R=√(Q2+4(Gxy)2) (2e).
The orientation angle φ is then computed from the largest eigenvalue λ1, as the angle of its associated eigenvector (for example, as tan φ=(λ1−Gxx)/Gxy).
Orientation angle φ evaluates to a value between −90° and 90°. The direction (or line) closest to the calculated value of φ may be taken as the orientation. For example, in
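Equations (2a)-(2e) and the orientation computation may be sketched as follows (Python). The closed form used for φ, namely half the four-quadrant arctangent of 2Gxy and Gxx−Gyy, is an assumed but standard equivalent of taking the angle of the eigenvector associated with λ1, and evaluates to the stated −90° to 90° range.

    import math

    def orientation(Gxx, Gyy, Gxy):
        P = Gxx + Gyy                            # (2c)
        Q = Gxx - Gyy                            # (2d)
        R = math.sqrt(Q * Q + 4.0 * Gxy * Gxy)   # (2e)
        lam1 = 0.5 * (P + R)                     # (2a), largest eigenvalue
        lam2 = 0.5 * (P - R)                     # (2b)
        # Angle of the eigenvector associated with lam1, in degrees.
        phi = 0.5 * math.degrees(math.atan2(2.0 * Gxy, Q))
        return lam1, lam2, phi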
In block S410, orientation angle φ is further qualified by an anisotropy calculation. The anisotropy A0 is preferably defined as
A0=λ1/λ2 (3).
The anisotropy A0 is intended to give a measure of the energy in the orientation relative to the energy perpendicular to the orientation, but need not necessarily be defined as set out in equation (3). For example, the absolute value of a Sobel operator may be used.
For computed angles that are at, or close to, integral multiples of 90° (0°, 90°, 180°, 270°), the anisotropy tends to be larger than for other orientations, because the anisotropy varies nonlinearly with the angle. Different threshold values corresponding to different ranges of angles may thus be used to qualify interpolation orientations. For example, a threshold of about 6 may be used for angles in the range of about 30° to 60°, a threshold of about 24 may be used for angles near 14°, and a threshold value of about 96 may be used for angles that are less than about 11°.
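A sketch of the qualification of block S410 follows (Python), under the assumption that anisotropy A0 is computed as the eigenvalue ratio λ1/λ2 per equation (3), using the angle-dependent thresholds noted above; the exact ranges are illustrative.

    def qualify_orientation(lam1, lam2, phi):
        # Assumed anisotropy: energy along the orientation relative to
        # the energy perpendicular to it.
        A0 = lam1 / lam2 if lam2 > 0 else float('inf')
        a = abs(phi)
        if a < 11.0:
            threshold = 96.0          # angles near horizontal
        elif 30.0 <= a <= 60.0:
            threshold = 6.0
        else:
            threshold = 24.0          # e.g., angles near 14 degrees
        return A0 >= threshold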
In the exemplary embodiment, a kernel of size 4×6 is used. The size is somewhat arbitrary and may of course be increased. It should be appreciated, however, that increasing the kernel size (by increasing the associated window size in the intermediate image) also increases computational complexity. On the other hand, if the size of the kernel is too small, then the number of lines available for interpolation may be too small. That is, the computed angle φ may be wider (or narrower) than the widest (or narrowest) angle available inside the window. In this case, pixels outside of the window may be extrapolated and used. For example, Φ26 (
An example of such an extrapolation (block S416) is shown in
PR≈k26+2(k26−k25)=3k26−2k25 (4a)
Similarly, the extrapolation function for PL is given as
PL≈k31+2(k31−k32)=3k31−2k32 (4b)
As can be appreciated by a person of ordinary skill in the art, extrapolation achieves a better approximation when the extrapolated points PR, PL are closer to the kernel.
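Equations (4a) and (4b) may be sketched as follows (Python; the kernel k is indexed from zero, so that k[1][5] corresponds to element k26):

    def extrapolate_PR(k):
        # PR ~ k26 + 2*(k26 - k25) = 3*k26 - 2*k25   (4a)
        return 3 * k[1][5] - 2 * k[1][4]

    def extrapolate_PL(k):
        # PL ~ k31 + 2*(k31 - k32) = 3*k31 - 2*k32   (4b)
        return 3 * k[2][0] - 2 * k[2][1]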
A further qualification criterion may be imposed for the candidate orientation, in block S418. The orientation may be qualified using pixel correlation. To calculate pixel correlation, the absolute difference between two pixels that lie along lines that include the new intermediate pixel is computed (
At the conclusion of blocks S400, the orientation to use in forming a new intermediate pixel 70 (
If a candidate orientation is not suitable (S412, S418), then in block S422 a default orientation may be selected, which may be horizontal for a ‘type 1’ pixel, vertical for a ‘type 2’ pixel and a predetermined diagonal for a ‘type 3’ pixel.
Returning to
As shown in
Referring back to
Blocks S500 of
In block S504, an empty output image buffer location Y(x,y), corresponding to an output pixel, is selected for interpolation.
A pixel Q at Y(x,y) in the output image buffer may be written as a point in a two-dimensional output coordinate system at coordinate (xY, yY). As depicted in
Similarly, any pixel B at I(x,y) in the intermediate image may be represented as a point at an integer coordinate (xB, yB) of an intermediate coordinate system. Hereinafter, the coordinate of a point such as B in the intermediate coordinate system is denoted BI, at (xB, yB)I or (xBI, yBI), to indicate that the coordinate is in intermediate coordinate system 600. Thus B is located at (1,1)I.
Thus, the intermediate pixel in the intermediate image buffer location I(0,0), is shown at coordinate (0,0)I in intermediate coordinate system 600. Similarly intermediate pixel I(0, 2) in the intermediate image buffer, is shown at coordinate (0, 2)I in intermediate coordinate system 600.
In order to determine the value of an arbitrary pixel Qy in the output image, from the intermediate image, a corresponding coordinate of a pixel QI in image I is determined. Once the corresponding coordinate is known, the value of pixel QI may be determined from values of known intermediate pixels, proximate QI. The determined value of pixel QI may then be used as the value of QY.
In other words, for each output pixel Q denoted by a point Qy at (xQ, yQ)Y in the output coordinate system 602, coordinates of an associated point QI at intermediate pixel coordinates (xQ,yQ)I in the intermediate coordinate system 600 may be determined. After determining the associated intermediate pixel coordinates in the intermediate coordinate system 600, intermediate pixels in image I proximate or close to QI can be identified and used for interpolation.
For each pixel of the output image having a pixel value at its associated intermediate pixel coordinate in the intermediate image, the pixel value from the intermediate image may be mapped directly to the output pixel Q of the output image. For other pixels of the output image, at least two selected pixels of the intermediate image may be combined to form the output pixel Q of the output image.
A number of associations 604, 606, 608, 610, 612 from output coordinate system 602 to intermediate coordinate system 600 are depicted in
Output pixel Y(0,5) is associated with coordinate (0,2)I in intermediate coordinate system 600 by association 606. Similarly, output pixel Y(1,3) is mapped to coordinate (0.4, 1.2)I by association 604. This mapping of an output pixel QY at (xQ, yQ)Y or (xQY, yQY) in the output coordinate system into its corresponding pixel in the intermediate coordinate system, QI at (xQI, yQI), is accomplished in block S506. Typically, however, output pixels do not map to integer coordinates of the intermediate coordinate system.
Notably, the associated intermediate coordinates (xQI, yQI) in intermediate coordinate system 600 of an output pixel Q may have non-integer coordinate values. That is, xQI or yQI may not be integers. For example, the coordinate of output pixel QY, which is (xQY, yQY)=(1,3)Y, is mapped to coordinate (xQI, yQI)=(0.4, 1.2)I by mapping 604. However, the intermediate image is specified by pixels at integer coordinates of the intermediate coordinate system. Thus intermediate pixels at integer coordinates that can be used to interpolate output pixel Q are identified in block S508.
To determine the associated coordinate (xQ, yQ)I in intermediate coordinate system 600 of output pixel Q (at output coordinate (xQY, yQY)=(1,3)Y), the values xQY and yQY may be multiplied by horizontal and vertical coordinate transformation factors fx and fy respectively to compute xQI and yQI. fx and fy reflect the horizontal and vertical scaling of the input image to form the scaled output image Y. Since the output pixel at output coordinates (5,0)Y maps to intermediate coordinates (2,0)I, a horizontal coordinate transformation factor may easily be determined as fx=⅖. Similarly, since output coordinate (0,5)Y maps to intermediate coordinate (0,2)I, a vertical coordinate transformation factor of fy=⅖ should be used.
In general, a coordinate transformation factor fx in the horizontal dimension may be determined from the number of intermediate columns MI and the number of output columns My in that dimension, as
fx=(MI−1)/(My−1).
Similarly, a coordinate transformation factor fy in the vertical dimension may be determined from the number of intermediate rows NI and the number of output rows Ny in that dimension, as
fy=(NI−1)/(Ny−1).
Thus pixel Q at output image coordinate Y(1,3) maps to (⅖, 6/5)I=(0.4, 1.2)I after multiplying each coordinate by the appropriate coordinate transformation factor of ⅖. That is, in association 604, intermediate coordinates (xQ,yQ)I for pixel Q are calculated from the coordinate of QY, which is (1,3)Y, as
xQI=(fx)(xQY)=⅖×1.0=0.4
yQI=(fy)(yQY)=⅖×3.0=1.2
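The mapping of block S506 may be sketched as follows (Python; argument names are illustrative, and the dimensions in the usage note are those implied by the example above):

    def output_to_intermediate(x_out, y_out, M_I, N_I, M_Y, N_Y):
        # Coordinate transformation factors relating the two grids.
        fx = (M_I - 1) / (M_Y - 1)
        fy = (N_I - 1) / (N_Y - 1)
        return fx * x_out, fy * y_out

    # For the example above, with M_I = N_I = 3 and M_Y = N_Y = 6,
    # fx = fy = 2/5, and output coordinate (1, 3) maps to (0.4, 1.2).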
Now, from the calculated position (xQI, yQI)=(0.4, 1.2)I, software exemplary of an embodiment of the present invention can easily find the square of known intermediate pixels AIBICIDI (
The interpolation may now proceed with triangle AIBICI. AI, BI and CI are located at intermediate coordinates (0,1)I, (1,1)I and (0,2)I respectively. Thus, in block S508, pixels AI at (0,1)I, BI at (1,1)I and CI at (0,2)I, from the intermediate image buffer locations I(0,1), I(1,1) and I(0,2), are identified as the pixels to use for interpolation.
In block S510, an intermediate pixel value QI is formed from the intermediate pixel values identified in block S508 (AI, BI and CI, corresponding to intermediate image buffer locations I(0,1), I(1,1) and I(0,2)). Once the intermediate pixel value QI is formed, the corresponding output buffer location (that is, Y(1,3)) is populated with the newly determined output pixel value.
If empty output image buffer locations are left (S510), the process repeats at block S504 as shown. At the end of blocks S500, a completely constructed output image should result.
In the depicted embodiment, triangular interpolation is used to interpolate pixel QI at non-integer coordinates. Of course, other suitable interpolation techniques could be used. Triangular interpolation, used in an exemplary embodiment of the present invention, is illustrated below. Triangle AIBICI of
(xQ,yQ)I=r(xA,yA)I+s(xB,yB)I+t(xC,yC)I where r+s+t=1 (5a)
Such a coordinate system is called a barycentric coordinate system. Since r+s+t=1, in equation (5a) r can be defined as 1−s−t and thus only s and t need to be calculated. Of course, a person skilled in the art may appreciate that there are many ways to determine the barycentric coordinates of a point in a triangle.
One way of determining barycentric coordinates is illustrated with a simple example shown in
Q⃗=A⃗+s(B⃗−A⃗)+t(C⃗−A⃗) (5b)
where t is the perpendicular distance of Q from line
Equation (5b) can then be simplified to yield
Q⃗=(1−s−t)A⃗+sB⃗+tC⃗ (5c).
The coefficients (r, s, t) where r=1−s−t are known as the barycentric coordinates of Q over triangle ABC.
Given some known quantities A, B, and C at coordinates (xA,yA)I, (xB,yB)I, (xC,yC)I respectively, an unknown quantity Q corresponding to coordinate (xQ,yQ)I may be formed by interpolation from the barycentric coordinate values s and t using the formula
Q=(1−s−t)A+sB+tC (5d).
The unknown value Q may represent intensity, color, texture or some other quantity that is to be determined. In other words, given coordinate (xQ,yQ), defining a line in three dimensions, the value Q is determined from the intersection of the line with the plane defined by the three, three-dimensional points (xA, yA, A), (xB, yB, B) and (xC,yC,C). In the present example the quantity of interest is pixel value which may be gray scale or alternately a red, green or blue color component.
As another example,
The values of pixels A′, B′ and C′ are known. Hence, as δx′=¾ and δy′=¼, it follows that s=δx′−δy′=½, t=δy′=¼ and r=1−s−t=¼. Therefore the interpolated value is P′=¼A′+½B′+¼C′, in accordance with equation (5d).
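The interpolation of equation (5d) may be sketched as follows (Python; the pixel values passed in the usage note are illustrative only):

    def triangular_interpolate(A, B, C, s, t):
        # Q = (1 - s - t)*A + s*B + t*C, per equation (5d).
        r = 1.0 - s - t
        return r * A + s * B + t * C

    # Worked example above: s = 1/2, t = 1/4, so the result is
    # (1/4)*A' + (1/2)*B' + (1/4)*C'.
    P_prime = triangular_interpolate(A=40.0, B=80.0, C=120.0, s=0.5, t=0.25)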
Exemplary embodiments may be realized in hardware or software. A schematic block diagram of a hardware device 700, exemplary of an embodiment of the present invention, is depicted in
Data interfaces 706, 710 interconnect enhancement block 704 with edge interpolator 708 and triangular interpolator 714 respectively. Interfaces 724, 726 interconnect color-space-conversion (CSC) block 722 with diagonal analysis block 728 and Hessian calculator 730 respectively. Edge-orientation selector 736 receives input from diagonal analysis block 728 by way of interface 734, from Hessian calculator 730 by way of interface 732 and from sampling-grid generator 738 by way of interface 740, and forwards its output to edge interpolator 708 by way of interface 748.
Triangulation block 744 receives its input from sampling-grid generator 738 and outputs vertex data via interface 746 to be transferred to interpolation block 714. Triangulation block 744 also receives the output of edge-interpolation block 708 through interface 712.
Pixels in RGB or YUV format are fed into the data path 702 and control path 720. CSC block 722 converts RGB values to YUV format and transmits the luma Y to diagonal analysis block 728 and Hessian calculator 730. CSC block 722 may be bypassed if the input is already in YUV format. The luma component (Y) is typically better suited for determination of edges and other line features. In addition, computational complexity may be reduced by limiting calculations to just one component, instead of three.
To interpolate intermediate pixels, Hessian calculation block 730 and diagonal analysis block 728 analyze a kernel made up of input pixels at their respective inputs 726, 724, and compute the orientation angle (blocks S402 to S410), anisotropy (block S410) and other qualification parameters. Hessian calculation block 730 and diagonal analysis block 728 thus perform blocks S400 depicted in
Sampling-grid generator 738 generates the coordinates of an intermediate pixel to be formed in accordance with S300 of
In operation, the intermediate pixel values may be calculated as needed, on a per-output-pixel basis. Unlike in some software embodiments described above, no intermediate buffer may be required in the exemplary hardware embodiment depicted in
For example, in one specific hardware embodiment, a coordinate of an output pixel in the output coordinate system may be used by sampling-grid generator 738 to generate its corresponding coordinate in the intermediate coordinate system. The output coordinate may be divided by the horizontal and vertical scaling factors as noted above to determine the corresponding intermediate coordinate. The intermediate coordinate may then be used by sampling-grid generator 738 to easily identify a square, defined by four mapped input pixels in the intermediate image, that encloses the output image coordinate.
The coordinates of three interpolated pixels PL, PC, PR (
The values of the three intermediate pixels may then be formed by interpolating within the square using edge-orientation selector 736 and edge-interpolation block 708. Edge-orientation selector 736 through its output interface 748 may provide the orientation information to edge-interpolation block 708. Edge-interpolation block 708 may then determine each of interpolated pixels PL, PC, PR, by interpolating along the selected orientation.
The coordinates of the interpolated pixels in the intermediate coordinate system may also be used to partition the square into six triangles by triangulation block 744. A triangle enclosing the output pixel coordinate in the intermediate coordinate system is subsequently used by interpolation block 714 to interpolate the output pixel value. Using vertex pixel values of the selected triangle fed from edge-interpolation block 708, and the barycentric coordinates fed from triangulation block 744, interpolation block 714 interpolates the output pixel, effectively carrying out block S510 in
Depending on their relative proximity to coordinates of output pixels in the intermediate coordinate system, some intermediate pixels may never need to be formed, as they may not be needed to compute the output pixels.
Embodiments of the present invention are advantageous as they employ a two-stage process to interpolate pixels at arbitrary resolutions. In the first interpolation stage, intermediate pixels between input image pixels are formed by interpolating pixels of the input image, to form an intermediate image. In a second stage, triangular interpolation is used to determine output pixel values at arbitrary coordinates in the intermediate image. The new intermediate pixels formed by the first interpolation stage may be relatively sparse, which permits the use of high-quality nonlinear interpolation methods without too much computational cost. The preferred first-stage interpolation is an edge-directed interpolation method that reduces the appearance of jagged edges.
The output pixels are then computed during the second interpolation stage using triangular interpolation which is well known and computationally efficient.
Advantageously, embodiments of the present invention may be used for common input image sizes such as 720×486, 720×576, 720×240, 720×288, 704×480, 704×240, 1280×720, 1920×1080, 1920×1088, 1920×540, 1920×544, 1440×1080, 1440×540 and the like. Similarly, as many different scaling ratios including non-integer ratios may be accommodated, common output sizes such as 1920×1080, 1280×1024, 1024×768, 1280×768, 1440×1050, 1920×1200, 1680×1050, 2048×1200, 1280×720 and the like can be achieved using embodiments of the present invention.
In alternate embodiments, instead of using the edge-directed interpolation described with reference to
In other alternate embodiments, bilinear interpolation may be used instead of triangular interpolation. For example, the square AIBICIDI (
Other interpolation methods such as bi-cubic interpolation or nonlinear interpolation methods may also be used to improve the quality of interpolation usually at some computational cost.
In other alternate embodiments, multiple stages of the edge-directed interpolation step may be utilized. As may be appreciated, the intermediate image size may approximately double in each dimension with each stage of edge-directed interpolation. Accordingly, after n stages of edge-directed interpolation, each dimension of the intermediate image may be about 2n times the corresponding dimension of the input image. Embodiments with multiple stages of the edge-directed interpolation step may be particularly suited for applications in which a large up-scaling ratio is required.
As will be appreciated, image scaling hardware device 700 may form part of electronic devices such as television sets, computers, wireless handhelds and displays including computer monitors, projectors, printers, video game terminals and the like.
Of course, the above described embodiments are intended to be illustrative only and in no way limiting. The described embodiments of carrying out the invention are susceptible to many modifications of form, arrangement of parts, details and order of operation. The invention, rather, is intended to encompass all such modifications within its scope, as defined by the claims.