The present invention relates to projection systems in which multiple projectors are utilized to create respective complementary portions of a projected image, which may be a video or still image. The present invention also relates to methods of calibrating the intensity response function of projectors. According to one embodiment of the present invention, a method of projecting an image utilizing a plurality of projectors is provided. According to the method, at least two of the projectors project overlapping portions of the image in a global display space. Overlapping pixels in the global display space are identified. Attenuation maps are generated for the projectors such that the attenuation values for the overlapping pixels are at least partially a function of the number of projectors contributing to the overlap. Pixel intensity values are established for the projectors by applying one or more intensity transfer functions to the attenuation maps generated for the projectors. The intensity transfer functions may be applied to the attenuation maps in conjunction with input intensity values. The intensity transfer functions are configured to at least partially account for the non-linear response of the output intensity of the projectors as a function of an input intensity control signal applied to the projectors.
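By way of illustration only, the overlap-counting and attenuation steps summarized above can be sketched as follows. The 1/N attenuation rule, the array and function names, and the simple gamma-style transfer function are assumptions made for the sketch, not the claimed formulation.

```python
import numpy as np

def overlap_count(masks):
    # masks: one boolean coverage array per projector, all registered in the
    # global display space; returns the per-pixel count of contributing projectors
    return np.sum(np.stack(masks).astype(int), axis=0)

def attenuation_map(mask, counts):
    # Attenuation for one projector: 1/N wherever N projectors contribute to the
    # pixel, so the summed output intensity across projectors remains uniform
    atten = np.zeros(mask.shape, dtype=float)
    covered = mask & (counts > 0)
    atten[covered] = 1.0 / counts[covered]
    return atten

def drive_signal(input_intensity, atten, gamma=2.2):
    # Apply the attenuation map to the input intensity, then invert an assumed
    # gamma-style transfer function so the projected output is linear in the input
    return np.clip(input_intensity * atten, 0.0, 1.0) ** (1.0 / gamma)

# Two projectors overlapping in the middle two columns of a 1x6 global display space
m1 = np.array([[True, True, True, True, False, False]])
m2 = np.array([[False, False, True, True, True, True]])
counts = overlap_count([m1, m2])
a1 = attenuation_map(m1, counts)  # 1.0 where only projector 1 covers, 0.5 in the overlap
```

In this sketch each projector simply contributes an equal share in the overlap; the distance-to-edge and perturbation refinements described below replace this uniform split with a spatially varying one.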
In accordance with another embodiment of the present invention, pixel intensity values for the projectors are established by utilizing the attenuation maps such that the pixel intensity values are a function of the relative magnitudes of the distance-to-edge values for the edges of the projected image portions that overlap the selected pixel.
In accordance with yet another embodiment of the present invention, pixel intensity values are established for the projectors by perturbing the attenuation maps to reduce image artifacts in the image, such that the pixel intensity values are a function of an attenuation map perturbation routine.
In accordance with yet another embodiment of the present invention, a method of calibrating the intensity response function of an image projector, or of operating the image projector according to a calibrated intensity response function, is provided. According to the method, a calibration image comprising an intensity-adjusted portion and a dithered portion is generated. Pixels of the intensity-adjusted portion are driven at a fraction of maximum input intensity and the ratio of on/off pixels in the dithered portion is selected to correspond to the fractional input intensity of the intensity-adjusted portion. Pixel intensity in either the intensity-adjusted portion or the dithered portion is adjusted to match the apparent intensity of the remaining portion. This pixel intensity adjustment is repeated for one or more additional fractional input intensities and the intensity adjustments are used to establish a calibrated intensity response function for the image projector. The resulting intensity transfer function is configured to at least partially account for the non-linear response of the output intensity of the projector as a function of an input intensity control signal applied to the projector.
The following detailed description of specific embodiments of the present invention can be best understood when read in conjunction with the following drawings, where like structure is indicated with like reference numerals and in which:
The flow chart of
According to the embodiments of the present invention illustrated in
The present invention also contemplates the use of intensity transfer functions in establishing pixel intensity values. As is illustrated in
Pixel intensity values may also be subject to an artifact reduction operation, if desired (see step 108). More specifically, acknowledging that some applications of the methodology of the present invention may yield visual artifacts in the projected image, an artifact reduction operation may be applied to reduce any problematic artifact structure in the attenuation maps. For example, the process of
Generally, various embodiments of the present invention contemplate a routine for generating attenuation maps in which the maps are compared to a model of the human visual system in order to determine which characteristics of the attenuation map are likely to lead to artifacts that can be readily detected by the human visual system. Once such characteristics are identified, the attenuation map can be modified to account for the corresponding artifacts and minimize the error given by the model.
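One simple stand-in for such a perturbation routine, offered purely as a sketch, treats abrupt spatial changes in attenuation as the artifact most readily detected by the human visual system and relaxes the map toward its local average wherever the local contrast exceeds a visibility threshold. The threshold value and the 4-neighborhood averaging kernel are assumptions, not a particular visual-system model from the disclosure.

```python
import numpy as np

def perturb_attenuation(atten, visibility_threshold=0.02, iterations=50):
    # Sketch of an artifact-reduction perturbation: wherever a pixel's attenuation
    # differs from the mean of its 4-neighborhood by more than the visibility
    # threshold, pull it to that mean; repeat until the map settles.
    a = atten.astype(float).copy()
    for _ in range(iterations):
        neighborhood_mean = (np.roll(a, 1, axis=0) + np.roll(a, -1, axis=0) +
                             np.roll(a, 1, axis=1) + np.roll(a, -1, axis=1)) / 4.0
        visible = np.abs(a - neighborhood_mean) > visibility_threshold
        if not visible.any():
            break
        a[visible] = neighborhood_mean[visible]
    return a

# A hard step in attenuation (a likely visible seam) is softened into a ramp
step = np.zeros((10, 10))
step[:, :5] = 1.0
smoothed = perturb_attenuation(step)
```

Note that `np.roll` wraps at the array borders; a practical implementation would treat image edges explicitly rather than cyclically.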
Turning initially to the process for establishing transfer functions illustrated in
It is contemplated that the intensity transfer functions established according to the process illustrated in
Similarly, the dithered portion of the calibration image is driven such that the ratio of on/off pixels in the dithered portion corresponds to the fractional input intensity of the intensity-adjusted portion. For example, where the pixels of the intensity-adjusted portion are driven at 50% intensity, the dithered portion will be driven such that 50% of the pixels in the dithered portion are in the “on” state and 50% of the pixels in the dithered portion are in the “off” state. The on/off pixels should be arranged in a checkerboard pattern or some other pattern selected to create an apparently uniform intensity distribution in the dithered portion of the calibration image.
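A calibration image of this kind might be generated as below. The 2x2 ordered-dither thresholds are an assumption chosen so that a fraction of 0.5 yields exactly the checkerboard described above, and the left/right split, image size, and 8-bit drive levels are illustrative.

```python
import numpy as np

def calibration_image(height, width, fraction, levels=255):
    # Left half: the intensity-adjusted portion, every pixel driven at the
    # fractional input intensity.  Right half: the dithered portion, full-on /
    # full-off pixels whose on-ratio equals the same fraction.
    img = np.zeros((height, width), dtype=np.uint8)
    half = width // 2
    img[:, :half] = int(round(fraction * levels))
    # 2x2 ordered-dither thresholds (1/8, 3/8, 5/8, 7/8); for fraction = 0.5
    # this turns on exactly the checkerboard pattern described in the text
    thresholds = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0
    tile = np.tile(thresholds, (height // 2 + 1, (width - half) // 2 + 1))
    img[:, half:] = np.where(tile[:height, :width - half] < fraction, levels, 0)
    return img

img = calibration_image(4, 8, 0.5)  # left half at half drive, right half checkerboard
```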
Upon display of the calibration image, the pixel intensity in the dithered portion is adjusted until the apparent intensity of the dithered portion matches that of the intensity-adjusted portion (see step 312). Alternatively, the pixel intensity in the intensity-adjusted portion can be adjusted until its apparent intensity matches that of the dithered portion. In either case, the pixel intensity adjustments are carried out for one or more additional fractional input intensities (see steps 305 and 316) and the adjustments are used to establish the intensity transfer function for the selected color channel of the selected projector (see steps 314, 318, and 320). This intensity matching may be executed through the use of one or more image analysis cameras or by a human operator. As is noted above with respect to the process of
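The matching loop can be simulated numerically. In the sketch below a hypothetical gamma-law projector stands in for the measured device and bisection stands in for the camera- or operator-driven adjustment, so the `measure()` model and its constants are assumptions rather than the behavior of any particular projector.

```python
def measure(drive, gamma=2.2):
    # Hypothetical projector response: normalized output luminance for a
    # normalized input drive level (a gamma law stands in for the real device)
    return drive ** gamma

def match_point(fraction, gamma=2.2, tol=1e-6):
    # The dithered portion averages to `fraction` of maximum output, because its
    # pixels sit at the two extremes of the response.  Adjust (here, bisect) the
    # drive level of the intensity-adjusted portion until its output matches.
    target = fraction * measure(1.0, gamma)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if measure(mid, gamma) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Each matched pair samples the inverse response: driving at match_point(f)
# produces an output that is the fraction f of maximum output
samples = {f: match_point(f) for f in (0.25, 0.5, 0.75)}
```

Interpolating between such sample points yields the calibrated intensity transfer function for the selected color channel.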
The process illustrated in
For example, and not by way of limitation, a pixel's distance from the projector generally increases with its proximity to the edge of the projected image portion. As such, at uniform input pixel intensity, the intensity of pixels near the edge of the projected image will be less than the intensity of pixels near the center of the projected image because the pixels near the edge of the image are farther away from the projector. Where a single pixel is overlapped by two projected images, in most cases it will lie closer to the center of one projector's image than the other's and, as such, that projector can be said to dominate pixel intensity for the selected pixel. Accordingly, the distance-to-edge values can be used to define a degree to which each projector dominates pixel intensity for the pixels in the overlapping portions of the image. Pixel intensity can then be established such that a projector is subject to less attenuation at those pixels for which the distance-to-edge values define it as the more dominant projector.
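One common way to turn distance-to-edge values into dominance-weighted attenuation, shown here only as a sketch, normalizes the per-projector distances (optionally raised to a power) so that the projector whose image edge is farthest from the pixel, i.e. the projector in whose image the pixel sits most centrally, receives the largest weight and hence the least attenuation. The exponent `p` is an assumption.

```python
def blend_weights(distances, p=2.0):
    # distances: one distance-to-edge value per projector overlapping the pixel.
    # A larger distance means the pixel lies nearer that projector's image
    # center, so that projector dominates and is attenuated least.
    powered = [d ** p for d in distances]
    total = sum(powered)
    if total == 0.0:
        # pixel lies on every projector's edge; split the contribution evenly
        return [1.0 / len(distances)] * len(distances)
    return [d / total for d in powered]

# A pixel 3 units inside projector A's image but only 1 unit inside projector B's
weights = blend_weights([3.0, 1.0], p=1.0)  # A dominates: [0.75, 0.25]
```

Because the weights sum to one at every overlapped pixel, the combined output of the overlapping projectors remains consistent with the singly covered regions.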
It is also contemplated that the distance-to-edge data for each pixel can merely be selected to represent the respective distances of a selected pixel from the closest edges of any projected image portion in which the pixel resides, as opposed to the edges of each image portion in which the pixel resides. Given this data, an average or weighted average of the distance-to-edge values for the selected pixel can be used in establishing the pixel intensity values for the selected pixel. For example, the average distance of the pixel to the three closest image portion edges can be determined and used to establish pixel intensity.
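The averaged variant can be sketched as below, using the pixel's distances to the three closest image-portion edges; the choice of k = 3 follows the example in the text, while the optional weighting scheme is an assumption.

```python
def mean_closest_edge_distance(edge_distances, k=3, weights=None):
    # edge_distances: distances from the selected pixel to every edge of every
    # projected image portion in which it resides; only the k closest edges count
    closest = sorted(edge_distances)[:k]
    if weights is None:
        return sum(closest) / len(closest)
    w = list(weights)[:len(closest)]
    return sum(d * wi for d, wi in zip(closest, w)) / sum(w)

avg = mean_closest_edge_distance([10.0, 2.0, 5.0, 7.0])  # averages 2, 5 and 7
```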
The distance-to-edge data can be utilized to alter pixel intensity independent of the intensity adjustment effectuated by the aforementioned intensity transfer functions. For example, the relative magnitudes of the distance-to-edge values for the selected pixel can be used in establishing the pixel intensity values for selected pixels in the attenuation map, prior to application of the intensity transfer functions, as is illustrated in
It is noted that recitations herein of a component of the present invention being “configured” to embody a particular property, or function in a particular manner, are structural recitations, as opposed to recitations of intended use. More specifically, the references herein to the manner in which a component is “configured” denote an existing physical condition of the component and, as such, are to be taken as definite recitations of the structural characteristics of the component.
It is noted that terms like “preferably,” “commonly,” and “typically” are not utilized herein to limit the scope of the claimed invention or to imply that certain features are critical, essential, or even important to the structure or function of the claimed invention. Rather, these terms are merely intended to highlight alternative or additional features that may or may not be utilized in a particular embodiment of the present invention.
Having described the invention in detail and by reference to specific embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims. More specifically, although some aspects of the present invention are identified herein as preferred or particularly advantageous, it is contemplated that the present invention is not necessarily limited to these preferred aspects of the invention.
This application claims the benefit of U.S. Provisional Application Ser. No. 60/773,419, filed Feb. 15, 2006. This application is related to commonly assigned, copending U.S. patent application Ser. Nos. 11/735,258, filed Apr. 13, 2007, 11/737,817, filed Apr. 20, 2007, 11/737,821, filed Apr. 20, 2007, 11/737,823, filed Apr. 20, 2007, and 11/675,236, filed Feb. 15, 2007.
Number | Name | Date | Kind |
---|---|---|---|
4974073 | Inova | Nov 1990 | A |
5136390 | Inova et al. | Aug 1992 | A |
5734446 | Tokoro et al. | Mar 1998 | A |
6115022 | Mayer et al. | Sep 2000 | A |
6222593 | Higurashi et al. | Apr 2001 | B1 |
6434265 | Xiong et al. | Aug 2002 | B1 |
6456339 | Surati et al. | Sep 2002 | B1 |
6480175 | Schneider | Nov 2002 | B1 |
6545685 | Dorbie | Apr 2003 | B1 |
6570623 | Li et al. | May 2003 | B1 |
6590621 | Creek et al. | Jul 2003 | B1 |
6633276 | Jaynes | Oct 2003 | B1 |
6695451 | Yamasaki et al. | Feb 2004 | B1 |
6733138 | Raskar | May 2004 | B2 |
6753923 | Gyoten | Jun 2004 | B2 |
6814448 | Ioka | Nov 2004 | B2 |
6819318 | Geng | Nov 2004 | B1 |
7097311 | Jaynes et al. | Aug 2006 | B2 |
7119833 | Jaynes et al. | Oct 2006 | B2 |
7133083 | Jaynes et al. | Nov 2006 | B2 |
7266240 | Matsuda | Sep 2007 | B2 |
20020024640 | Ioka | Feb 2002 | A1 |
20020041364 | Ioka | Apr 2002 | A1 |
20040085477 | Majumder et al. | May 2004 | A1 |
20040169827 | Kubo et al. | Sep 2004 | A1 |
20050287449 | Matthys et al. | Dec 2005 | A1 |
20070195285 | Jaynes et al. | Aug 2007 | A1 |
20070242240 | Webb et al. | Oct 2007 | A1 |
20070268306 | Webb et al. | Nov 2007 | A1 |
20070273795 | Jaynes et al. | Nov 2007 | A1 |
20080024683 | Damera-Venkata et al. | Jan 2008 | A1 |
20080129967 | Webb et al. | Jun 2008 | A1 |
20080180467 | Jaynes et al. | Jul 2008 | A1 |
20090262260 | Jaynes et al. | Oct 2009 | A1 |
20090284555 | Webb et al. | Nov 2009 | A1 |
Number | Date | Country
---|---|---
20070188719 A1 | Aug 2007 | US
Number | Date | Country
---|---|---
60773419 | Feb 2006 | US