1. Field of the Invention
This invention relates generally to the field of computer graphics and, more particularly, to a graphics system configured to compute blending functions for overlapped video projectors.
2. Description of the Related Art
For large format visualization applications, especially for group presentation, or for immersive virtual reality display, it is often desirable to use several video projectors whose projected images tile a display surface. The surface may be curved or otherwise non-flat.
A major requirement for many of these visualization applications is that there be no seam or visual discontinuity between the regions displayed by the respective projectors. This is quite difficult to achieve, because of unavoidable differences in the brightness, color temperature, and other characteristics of the projectors, and because of the human visual system's sensitivity to visual artifacts having high spatial frequency, such as abrupt boundaries.
To overcome these problems, the projected areas are often overlapped to some extent (e.g., 10 to 25% per linear dimension) and a gradual transition is accomplished between the content displayed by one projector and its neighboring projector(s). Overlapping the projected areas requires further processing of the video content displayed by each projector, so that the overlap areas are not brighter than the non-overlapped areas. (The brightness perceived in an overlap area is an optical sum of the brightness profile of each projector hitting the overlap area.)
Typically, within an overlap area, the video intensity of each projector will be gradually diminished as it approaches the boundary of its region. The contribution of one projector will ramp down as the contribution from its neighboring projector ramps up. Since tiling can be done both horizontally and vertically, overlap areas may occur on the sides, top and bottom of an image projected by a single projector. The processing of video to accomplish this ramping is commonly known as “edge blending.”
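The ramping described above can be illustrated with a minimal sketch (not taken from any particular embodiment): a linear edge-blend ramp for the left projector of a two-projector horizontal tile, where the overlap width and ramp shape are illustrative assumptions.

```python
# Illustrative sketch: a linear edge-blend ramp across a horizontal overlap
# of `overlap` pixels between two side-by-side projectors. The ramp shape
# (linear) and parameter values are assumptions for illustration only.
def blend_weights(width, overlap):
    """Return per-column weights for the left projector of a two-projector tile.

    Columns left of the overlap get weight 1.0; inside the overlap the weight
    ramps down linearly to 0.0, so that the neighboring projector's
    complementary ramp sums with it to a constant 1.0 (avoiding the
    double-brightness that would otherwise occur in the overlap area).
    """
    weights = []
    for i in range(width):
        if i < width - overlap:
            weights.append(1.0)
        else:
            # position within the overlap: 0.0 at its first column, 1.0 at its last
            t = (i - (width - overlap) + 1) / overlap
            weights.append(1.0 - t)
    return weights

w = blend_weights(width=8, overlap=4)
```

In practice a smoother profile (e.g., a raised cosine) is often preferred over a linear ramp, since the human visual system is sensitive to slope discontinuities.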
An installation with three projectors arranged to project onto a flat surface is shown in
Edge blending overlaps usually consume a significant percentage of the screen area so that the transitions occur with a low spatial frequency. Human vision is comparatively insensitive to low spatial-frequency artifacts.
Edge blending may be accomplished by weighting digital video RGB values provided to each of the projectors. The weights may be chosen to achieve (or approximate) the condition of uniform brightness over the whole visual field (i.e., the union of all regions) as suggested in
Nonuniformities in perceived intensity may be induced by means other than region overlap. For example, consider the installation shown in
Therefore, there exists a need for systems and methods capable of compensating for the nonuniformity of perceived brightness in display systems composed of multiple overlapping projector images.
A system for correcting the intensities of pixels supplied to a projector. An image generated by the projector has a number of regions formed by the overlapping of the image with one or more other images generated by one or more other projectors. The system includes: a first unit configured to generate a horizontal scaling value; a second unit configured to generate a vertical scaling value; a first multiplier configured to multiply the horizontal scaling value and the vertical scaling value to obtain a scaling coefficient; and a set of one or more additional multipliers configured to multiply components of an input pixel by the scaling coefficient to determine components for an output pixel. The first unit and second unit compute their respective scaling values in a way that allows for regions whose boundaries are not aligned in the vertical direction.
The first unit is configured to compute an address U in the address space of a horizontal weight table. The address U as a function of horizontal pixel index I is piecewise linear. The first unit is configured to access stored values from the horizontal weight table using the address U and compute the horizontal scaling value based on the accessed values.
Similarly, the second unit is configured to compute an address V in the address space of a vertical weight table. The address V as a function of vertical pixel index J is piecewise linear. The second unit is configured to access stored values from the vertical weight table using the address V and compute the vertical scaling value based on the accessed values.
Furthermore, the first unit is configured to vary the addresses U0, U1 and U2 that bound the linear sections of the piecewise linear function as a function of the vertical pixel position. For example, the first unit may vary the addresses U0, U1 and U2 by adding corresponding horizontal step values to the addresses U0, U1 and U2 once per horizontal line.
A better understanding of the present invention can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. Note that the headings are for organizational purposes only and are not meant to be used to limit or interpret the description or claims. Furthermore, note that the word “may” is used throughout this application in a permissive sense (i.e., having the potential to, being able to), not a mandatory sense (i.e., must). The term “include”, and derivations thereof, mean “including, but not limited to”. The term “connected” means “directly or indirectly connected”, and the term “coupled” means “directly or indirectly connected”.
In one set of embodiments, a compensation unit may be configured to receive a video stream and scale the RGB values of pixels in the video stream. The video stream conforms to a video raster having horizontal resolution RX and vertical resolution RY. The scaled pixels form an output video stream which is used to drive a projector. The projector generates a time-varying image on a display surface in response to the output stream (or an analog video signal derived from the output stream). Under somewhat idealized conditions, the generated image may resemble image 200 as suggested in
Regions 1-9 are indicated within image 200. Regions 1-4 and 6-9 are regions of overlap with neighboring images generated by other projectors. (For example, visualize four other projectors that generate four neighboring images: one above, one below, one to the right, and one to the left of image 200.) Region 5 is that portion of image 200 which is not shared with any other projector. Observe that the boundaries of the regions 1-9 are aligned with the video raster, i.e., aligned with horizontal and vertical lines of pixels in the video raster. Thus, the boundaries between the regions may be characterized by horizontal pixel positions I1 and I2 and vertical pixel positions J1 and J2.
The compensation unit applies a weight function f(I,J) to each pixel Q(I,J) of the received video stream, where I is a horizontal pixel index and J is a vertical pixel index of the video raster. The weight function f(I,J) may be separable, i.e., may be modeled as a product of horizontal and vertical functions: f(I,J)=fX(I)*fY(J). Thus, the compensation unit may include (or couple to) a horizontal weight table having NX entries and a vertical weight table having NY entries. The horizontal weight table stores values WX(K), K=0, 1, 2, . . . , NX−1. The vertical weight table stores values WY(L), L=0, 1, 2, . . . , NY−1. The values WX(K) and the values WY(L) are programmable by host software. The table sizes NX and NY are integers greater than or equal to four. In one set of embodiments, NX=NY=2T, where T is any integer greater than or equal to two. In one particular embodiment, NX=NY=64.
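The separability of f(I,J) can be sketched as follows; the table sizes and contents below are made-up toy values (the specification's tables hold NX and NY host-programmable entries, with NX, NY ≥ 4), and the direct index-to-entry mapping is a simplification of the piecewise-linear addressing described later.

```python
# Sketch of the separable weight function f(I,J) = fX(I) * fY(J).
# Table contents are illustrative assumptions, not values from the text.
WX = [0.00, 0.50, 1.00, 1.00]   # horizontal weight table (toy NX = 4)
WY = [0.25, 1.00, 1.00, 0.25]   # vertical weight table   (toy NY = 4)

def f(i, j):
    # This toy maps pixel index directly to a table entry; the specification
    # instead maps I -> U and J -> V piecewise linearly and interpolates
    # between adjacent table entries.
    return WX[i] * WY[j]
```

The practical benefit of separability is storage: two one-dimensional tables of NX + NY entries stand in for a full two-dimensional table of NX * NY entries.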
The bit length LX of the entries in the horizontal weight table and the bit length LY of the entries in the vertical weight table may be integers greater than or equal to two. In one set of embodiments, the lengths LX and LY may take values in the range from 8 to 16. In some embodiments, LX and LY may take values in the set {10, 11, 12, 13}.
As the complexity of the horizontal compensation function fX(I) may be different in the intervals [0,I1], [I1,I2] and [I2,RX−1] of the horizontal pixel index I, a user may not necessarily desire to allocate equal amounts of space in the horizontal weight table to each interval. For example, the horizontal compensation function fX(I) may be a raised cosine function on the first interval [0,I1] and a constant (or some slowly moving function) on the second interval [I1,I2]. In this case, a system operator (or an automated configuration agent) may allocate a larger number of entries in the horizontal weight table to represent the function fX(I) on the first interval [0,I1] than on the second interval [I1,I2]. In general, varying amounts of space in the horizontal weight table may be allocated to the respective intervals as follows: table entries in the address range [0,U1] may be used to represent the horizontal compensation function fX(I) on the interval [0,I1]; table entries in the address range [U1,U2] may be used to represent the horizontal compensation function fX(I) on the interval [I1,I2]; and table entries in the address range [U2,NX−1] may be used to represent the horizontal compensation function fX(I) on the interval [I2,RX−1], where 0≦U1, U1≦U2, and U2≦NX−1. Thus, if position in the address space of the horizontal weight table is represented by a continuous variable U, the general mapping between horizontal pixel index I and table position U may be characterized as being continuous and piecewise linear as suggested by
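The continuous, piecewise-linear mapping from pixel index I to table position U can be sketched directly from the interval-to-range correspondence above; the breakpoint values in the usage below are illustrative assumptions.

```python
# Sketch of the piecewise-linear mapping I -> U: the index intervals
# [0,I1], [I1,I2], [I2,RX-1] map linearly onto the address ranges
# [0,U1], [U1,U2], [U2,NX-1] respectively, and the mapping is continuous
# at the breakpoints I1 and I2.
def pixel_to_table_u(i, i1, i2, rx, u1, u2, nx):
    if i <= i1:
        # first interval: [0, I1] -> [0, U1]
        return u1 * i / i1
    elif i <= i2:
        # second interval: [I1, I2] -> [U1, U2]
        return u1 + (u2 - u1) * (i - i1) / (i2 - i1)
    else:
        # third interval: [I2, RX-1] -> [U2, NX-1]
        return u2 + (nx - 1 - u2) * (i - i2) / (rx - 1 - i2)

# Example with assumed values: a 1024-pixel raster, boundaries at I1=100 and
# I2=900, a 64-entry table, and breakpoint addresses U1=30, U2=40.
u = pixel_to_table_u(50, i1=100, i2=900, rx=1024, u1=30, u2=40, nx=64)
```

Note that the slope differs per interval, which is exactly why the hardware described below uses a different step size Sx0, Sx1, Sx2 in each region.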
In a similar fashion, the complexity of the vertical compensation function fY(J) may be different in the intervals [0,J1], [J1,J2] and [J2,RY−1] of the vertical pixel index J. Thus, varying amounts of space in the vertical weight table may be allocated to the respective intervals as follows: table entries in the address range [0,V1] may be used to represent the vertical compensation function fY(J) on the interval [0,J1]; table entries in the address range [V1,V2] may be used to represent the vertical compensation function fY(J) on the interval [J1,J2]; and table entries in the address range [V2,NY−1] may be used to represent the vertical compensation function fY(J) on the interval [J2,RY−1], where 0≦V1, V1≦V2, and V2≦NY−1. If position in the address space of the vertical weight table is represented by a continuous variable V, the general mapping between vertical pixel index J and table position V may be characterized as being continuous and piecewise linear as suggested by
For each pixel Q(I,J) of the received video stream, the compensation unit reads table values from the horizontal and vertical weight tables, computes the weight function value f(I,J) from the table values, and scales the color components (R,G,B) of the pixel Q(I,J) according to the relations: R′=R*f(I,J), G′=G*f(I,J), B′=B*f(I,J). The scaled pixel (R′,G′,B′) may be incorporated into the output video stream. The output video stream is used to drive the projector which generates image 200.
The compensation unit receives a pixel clock signal and an Hsync signal (i.e., a horizontal synchronization signal) that corresponds to the input video stream. The compensation unit updates the horizontal position variable U in response to transitions of the pixel clock and updates the vertical position variable V in response to transitions of the Hsync signal. The horizontal position variable U runs through the interval [0,NX−1] in RX steps as the video raster moves across each video line. The vertical position variable V runs through the range [0,NY−1] in RY steps that correspond to the successive lines in each video frame.
The integer [U], i.e., the integer part of U, is used to access the horizontal weight table for values WX([U]) and WX([U]+1). The compensation unit uses the fractional part of U to interpolate between WX([U]) and WX([U]+1) according to the relations
α=U−[U]
fX(I)=(1−α)*WX([U])+α*WX([U]+1).
Similarly, the integer [V], i.e., the integer part of V, is used to access the vertical weight table for values WY([V]) and WY([V]+1). The compensation unit uses the fractional part of V to interpolate between WY([V]) and WY([V]+1) according to the relations
β=V−[V]
fY(J)=(1−β)*WY([V])+β*WY([V]+1).
The weight function value f(I,J) is determined by multiplying fX(I) and fY(J), i.e., f(I,J)=fX(I)*fY(J).
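The interpolation relations above can be modeled in software as follows; the end-of-table clamp is an assumption (the specification does not state how the last entry is handled).

```python
# Software model of the linear interpolation described above:
#   fX(I) = (1 - a) * WX([U]) + a * WX([U] + 1),  a = U - [U],
# and likewise for fY(J) with the vertical table and position V.
def interp_weight(table, u):
    k = int(u)            # integer part [U] (u is non-negative here)
    a = u - k             # fractional part of U
    if k + 1 >= len(table):
        return float(table[-1])   # clamp at the last entry (assumed behavior)
    return (1 - a) * table[k] + a * table[k + 1]

def pixel_weight(wx, wy, u, v):
    # f(I,J) = fX(I) * fY(J)
    return interp_weight(wx, u) * interp_weight(wy, v)
```

Interpolating between adjacent entries lets a short table (e.g., 64 entries) represent a smooth weighting curve over a full video line without visible quantization steps.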
The compensation unit may include a first adder-accumulator for incrementing the horizontal position variable U and a second adder-accumulator for incrementing the vertical position variable V. The step size SX used by the first adder-accumulator may vary from one region to the next. Similarly, the step size SY used by the second adder-accumulator may vary from one region to the next.
In one set of embodiments, the compensation unit may include the circuitry 300 illustrated in
Control logic (to be described more fully later) determines when the video is entering a region. When the video enters a region by virtue of crossing one of the vertical boundaries (e.g., I=I1), the control logic induces multiplexor 314 to select a corresponding one of the initial positions U0, U1 and U2 and multiplexor 310 to select a corresponding one of the step-sizes Sx0, Sx1 and Sx2. Inside the region, the control logic induces multiplexor 314 to select the input from the register U. Thus, a first adder-accumulator, including adder 312, the register labeled U and the feedback path 313, serves to repeatedly increment the horizontal position variable U in response to transitions of the pixel clock. The initial positions U0, U1 and U2 and step-sizes Sx0, Sx1 and Sx2 are programmable.
Similarly, when the video enters a region by virtue of crossing one of the horizontal boundaries (e.g., J=J1), the control logic induces multiplexor 334 to select a corresponding one of the initial positions V0, V1 and V2 and multiplexor 330 to select a corresponding one of the step-sizes Sy0, Sy1 and Sy2. Inside the region, the control logic induces multiplexor 334 to select the input from register V. Thus, a second adder-accumulator including adder 332, the register V and the feedback path 333 serves to repeatedly increment the vertical position variable V in response to transitions of the Hsync signal. The initial positions V0, V1 and V2 and step-sizes Sy0, Sy1 and Sy2 are programmable.
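The adder-accumulator behavior can be modeled in software as a sketch (an assumption-laden model, not the hardware itself): at each region boundary the position variable is reloaded with an initial value, and inside a region it is incremented by that region's step size once per pixel clock.

```python
# Software model of the first adder-accumulator: on entering region k the
# multiplexor loads the initial position Uk; on every other pixel clock the
# feedback path applies U += Sxk for the current region's step size.
def scan_line_u(rx, region_starts, initial_u, steps):
    """region_starts: pixel indices where each region begins, e.g. [0, I1, I2].
    initial_u: the programmable initial positions U0, U1, U2.
    steps: the programmable step sizes Sx0, Sx1, Sx2.
    Returns the table position U at each of the rx pixels of one line."""
    u = 0.0
    region = -1
    positions = []
    for i in range(rx):
        if region + 1 < len(region_starts) and i == region_starts[region + 1]:
            region += 1
            u = initial_u[region]   # boundary crossed: reload Uk
        else:
            u += steps[region]      # inside region: accumulate Sxk
        positions.append(u)
    return positions
```

The analogous model for V differs only in that it is clocked once per Hsync (per line) rather than once per pixel.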
In each pixel clock cycle, the integer parts of U and U+1 may be used as addresses to access the horizontal weight table 319 for the values WX([U]) and WX([U+1]) respectively. The integer part of U is realized by a high-order subset of the output lines supplied by multiplexor 314. The integer part of U+1 is realized by a high-order subset of the output lines supplied by adder 316. The value WX([U]) is multiplied by the value (1−α) supplied by the subtraction circuit 318 to determine a first product. The value WX([U+1]) is multiplied by the value α to determine a second product. These multiplications may be performed in parallel by multipliers 320 and 322 respectively. The value α is realized by a lower-order subset (e.g., the fractional part) of the output lines supplied by multiplexor 314. Adder 324 adds the first product and second product to determine the horizontal weight value fX(I).
In parallel with the operations described above, in each pixel clock cycle, the integer parts of V and V+1 may be used as addresses to access the vertical weight table 339 for the values WY([V]) and WY([V+1]) respectively. The integer part of V is realized by a high-order subset of the output lines supplied by multiplexor 334. The integer part of V+1 is realized by a high-order subset of the output lines supplied by adder 336. The value WY([V]) is multiplied by the value (1−β) supplied by the subtraction circuit 338 to determine a third product. The value WY([V+1]) is multiplied by the value β to determine a fourth product. These multiplications may be performed in parallel by multipliers 340 and 342 respectively. The value β is realized by a lower-order subset (e.g., a fractional part) of the output lines supplied by multiplexor 334. Adder 344 adds the third product and fourth product to determine the vertical weight value fY(J).
Multiplier 350 multiplies the values fX(I) and fY(J) to determine the pixel weight value f(I,J). Multipliers 352, 354 and 356 multiply the respective color components R, G and B of the pixel Q(I,J) by the pixel weight value f(I,J) in parallel to determine scaled color values R′, G′ and B′.
As described above, the domain of the compensation function f is the entire video raster, and thus, compensation is applied to the whole video raster. However, in some embodiments, the compensation function may have as its domain some subset (or a union of subsets) of the video raster. For example, to concentrate exclusively on correction of a central hot spot in the image 200, the domain of the compensation function may be restricted to central region 5. Thus, the NX entries of the horizontal weight table may map onto the interval [I1,I2] in the horizontal pixel index I, and the NY entries of the vertical weight table may map onto the interval [J1,J2] in the vertical pixel index J.
Note that the embodiment illustrated in
The situation illustrated in
As described above, the control logic asserts selection signals when the video enters each region. Because the boundaries between regions are described by the equations I=I1(J), I=I2(J), J=J1 and J=J2, the control logic is configured to detect if and when these conditions become true as the two-dimensional raster index (I,J) scans through the video raster. The boundary functions I=I1(J) and I=I2(J) may be continuous functions, e.g., affine functions.
Note that the horizontal step size (i.e., the step in horizontal table position U with respect to increment in horizontal index I) within a region changes as a function of the vertical pixel index J. The adjustment of the horizontal step sizes may be accomplished by repeatedly adding and accumulating a corresponding secondary delta to the horizontal step size. See the description of
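The secondary-delta mechanism can be sketched as follows; the initial step size and delta values are illustrative assumptions.

```python
# Sketch of the secondary-delta idea: a region's horizontal step size Sx is
# itself advanced by a signed delta dSx once per line, so the I -> U mapping
# inside that region changes as the vertical pixel index J increases, tracking
# region boundaries that slant with J. Values below are assumptions.
sx = 0.10            # assumed initial step size Sxk for some region
d_sx = 0.002         # assumed per-line secondary delta dSxk
steps_by_line = []
for j in range(4):   # four successive video lines
    steps_by_line.append(sx)
    sx += d_sx       # accumulate the secondary delta once per line
```

Since the deltas are signed, the step size (and hence the effective slope of the I-to-U mapping) may grow or shrink down the frame.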
While the boundary functions I=I1(J) and I=I2(J) are continuous, the corresponding boundaries between regions are jagged edges as suggested in
However, the weight values stored into the horizontal and vertical weight tables may be chosen to approximate weighting functions that are continuous and differentiable at the boundaries. Thus, abrupt changes in intensity at the boundaries may be avoided. Therefore, observers should not perceive any jaggedness along the boundaries.
Rules for selecting U and U0, U1, U2
The video frame starts with the multiplexer selecting the register U0. The register U0 is also selected at the beginning of each horizontal line. In any horizontal line, the register U1 may be selected when the horizontal pixel index I traverses the boundary function I1(J). In any horizontal line, the register U2 may be selected when the horizontal pixel index I traverses the boundary function I2(J). In any horizontal line, for each pixel clock after having selected register UK and prior to selecting UK+1, the U register is selected, thereby allowing a Bresenham-style DDA (digital differential accumulator) to produce the address for the horizontal weight table.
The timing of the Uk select signal for the first horizontal line is shown in
The registers U0, U1 and U2 may be loaded once when video is initialized.
Rules for selecting Sx0, Sx1, Sx2
Prior to the start of the current video frame, the multiplexer 410 selects Sx2 (the selection left over from the end of the previous video frame). When the horizontal pixel index I equals zero, the multiplexer 410 selects Sx0. When the horizontal pixel index I traverses the boundary function I1(J), the multiplexer 410 selects Sx1. When the horizontal pixel index I traverses the boundary function I2(J), the multiplexer selects Sx2. This selection scheme ensures that the multiplexer 410 has been stable for exactly one clock period before the output of the multiplexer 410 is used to compute U. Note that the registers Sx0, Sx1 and Sx2 have been re-initialized to their original values during vertical retrace time.
The register Sxk, k=0, 1, 2, may be loaded with the output of adder 408 on the pixel clock after the register Sxk+1 is selected by multiplexer 410, thereby loading the value SxK+dSxK into the SxK register. (If k=2, SxK+1 is taken to be Sx0.) This gives the multiplexer 410 the maximum possible settling time before its output will be used to compute U.
Rules for selecting dSx0, dSx1, dSx2.
The video frame starts out selecting dSx0. On the pixel clock after loading register SxK from the output of adder 400, the multiplexer 406 selects register dSxk+1. (If k=2, dSxK+1 is taken to be dSx0.) This selection scheme ensures that multiplexer 406 has been stable for a while (more than one clock period) before the output of multiplexer 406 is used to modify any of the registers Sx0, Sx1 or Sx2.
Counting Through the Regions
Decrement unit 739 receives the selected current region width XCntk′ from multiplexer 738 and stores the selected current region width XCntk′ in an internal register denoted XCnt. Decrement unit 739 decrements the value of the XCnt register by one in response to each rising-edge (or alternatively, falling edge) in the pixel clock signal. The state machine 710 may be configured to detect when the value of the XCnt register reaches zero as this event indicates that the horizontal pixel index I has traversed a region boundary. When the value of the XCnt register reaches zero, the state machine 710 may assert the Sel_Uk signal so that the multiplexer 414 (of
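The down-counter scheme can be modeled with a toy sketch: XCnt is loaded with the current region's pixel width and decremented once per pixel clock, and reaching zero signals that the horizontal pixel index has crossed into the next region.

```python
# Toy model of decrement unit 739 and the zero-detect in state machine 710:
# load each region's width into a down-counter, decrement once per pixel
# clock, and record the pixel index at which the counter hits zero (i.e.,
# the boundary crossing where the next Uk / Sxk selection would occur).
def region_crossings(region_widths):
    crossings = []
    i = 0                    # horizontal pixel index
    for width in region_widths:
        xcnt = width         # XCnt loaded from the selected XCntk'
        while xcnt > 0:
            xcnt -= 1        # one decrement per pixel clock
            i += 1
        crossings.append(i)  # XCnt == 0: region boundary traversed at pixel i
    return crossings
```

The widths themselves come from the XCntk′ registers, which, as described below, are adjusted from line to line.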
The pixel widths may vary from one line to the next. For example, in
The state machine 710 also drives the Sel_dXCnt signal which controls the multiplexer 732 so that adder 734 can perform the increment computation given by the expression XCntk′→XCntk′+dXCntk at times governed by the Init_XCnt signal.
The state machine 710 may be further configured to assert the control signals Ld_Syk, Sel_Syk and Sel_Vk and the control signals Sel_dSxk, Ld_Sxk, Sel_Sxk and Sel_Uk, k=0, 1, 2. The signal Ld_Syk determines which of the registers SYK gets loaded. The signal Sel_Syk determines which register SYK gets selected by multiplexer 430. The signal Sel_Vk determines which input gets selected by multiplexer 434. The signal Sel_dSxk determines which input gets selected by multiplexer 406. The signal Ld_Sxk determines which of the registers SxK gets loaded. The signal Sel_Sxk determines which input of multiplexer 410 gets selected. The signal Sel_Uk determines which input of multiplexer 414 gets selected. The timing of these control signals has been discussed above in connection with
Decrement unit 716 receives the selected region height YCntK from multiplexer 715 and stores this value in an internal register YCnt. Decrement unit 716 decrements the register value YCnt by one in response to each transition of a horizontal synchronization signal. The horizontal synchronization signal is also supplied to the state machine 710. When the register value YCnt reaches zero, the state machine 710 may assert the Sel_Vk signal to a value that induces multiplexer 434 to select the initial address VK+1 corresponding to the region just entered. When the register value YCnt is not equal to zero, the state machine 710 may assert the Sel_Vk signal so that multiplexer 434 selects the V input.
In the pixel clock after YCnt reaches zero, state machine 710 may drive signal Sel_YCnt′ so that multiplexer 715 selects the value YCntK+1 corresponding to the region just entered.
The state machine 710 may also receive a vertical blanking signal so that it knows when to reset the registers XCntk′, k=0, 1, 2, to their initial values.
The controller 700 may use a DDA (digital differential accumulator) structure to modify the pixel widths of the regions. All additions are signed, and the delta values are allowed to take on positive or negative values, so that the counts may increase or decrease over time.
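The signed-DDA width update can be sketched as follows; the width and delta values are illustrative assumptions.

```python
# Sketch of the signed-DDA width update: each region width XCntk' is adjusted
# by a signed delta dXCntk once per line, so region boundaries may drift left
# or right down the frame. Values below are illustrative assumptions.
widths = [100, 200, 100]   # XCnt0', XCnt1', XCnt2' at the first line
deltas = [2, -3, 1]        # dXCnt0..dXCnt2; signed, so widths may grow or shrink
for line in range(3):      # advance three video lines
    widths = [w + d for w, d in zip(widths, deltas)]
```

Because the additions are signed, a region can narrow on one side of the image while widening on the other, which is what allows the non-vertically-aligned (slanted) boundaries described earlier.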
Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.