The present disclosure relates generally to projector-based display systems. More particularly, the present disclosure relates to viewpoint compensation for curved display surfaces in projector-based display systems.
One challenge in creating projector-based display systems is compensating for geometric distortions such as keystoning caused by misalignment of the projector and the display surface, radial dispersion of light, and the like. For clarity, these types of distortions are referred to herein as “projection distortions.” An additional challenge arises when the display surface is curved. The curvature of the display surface can cause highly viewpoint-dependent geometric distortions. That is, the geometric distortions appear to vary significantly based on viewpoint. For clarity, these types of distortions are referred to herein as “viewpoint distortions.”
In each of
This is not the case when the same image 102 is projected in the same way upon a curved display surface. Referring to
In general, in one aspect, an embodiment features tangible computer-readable media embodying instructions executable by a computer to perform a method comprising: generating a second image based on a first image and a viewpoint transform, wherein the viewpoint transform represents a mapping between pixel locations of the first image and coordinates of a model of a curved display surface; and generating a third image based on the second image and a projection transform, wherein the projection transform represents a mapping between the coordinates of the model of the curved display surface and pixel locations of a projector; wherein the third image is projected upon the curved display surface by the projector.
Embodiments of the tangible computer-readable media can include one or more of the following features. In some embodiments, the method further comprises: generating the viewpoint transform. In some embodiments, generating the viewpoint transform comprises: generating the model of the curved display surface. In some embodiments, the method further comprises: generating the projection transform. In some embodiments, generating the second image comprises: rendering the second image based on the first image. In some embodiments, rendering the second image comprises: modifying vertices of the first image according to the viewpoint transform. In some embodiments, generating the third image comprises: rendering the third image based on the second image. In some embodiments, rendering the third image comprises: modifying fragments of the second image according to the projection transform.
In general, in one aspect, an embodiment features an apparatus comprising: a viewpoint module adapted to generate a second image based on a first image and a viewpoint transform, wherein the viewpoint transform represents a mapping between pixel locations of the first image and coordinates of a model of a curved display surface; and a projection module adapted to generate a third image based on the second image and a projection transform, wherein the projection transform represents a mapping between the coordinates of the model of the curved display surface and pixel locations of a projector; wherein the third image is projected upon the curved display surface by the projector.
Embodiments of the apparatus can include one or more of the following features. Some embodiments comprise a viewpoint transform module adapted to generate the viewpoint transform. In some embodiments, the viewpoint transform module comprises: a model module adapted to generate the model of the curved display surface. Some embodiments comprise a projection transform module adapted to generate the projection transform. In some embodiments, the viewpoint module comprises: a render module adapted to render the second image based on the first image. In some embodiments, the render module comprises: a vertex shader adapted to modify vertices of the first image according to the viewpoint transform. In some embodiments, the projection module comprises: a render module adapted to render the third image based on the second image. In some embodiments, the render module comprises: a fragment shader adapted to modify fragments of the second image according to the projection transform.
In general, in one aspect, an embodiment features a method comprising: generating a second image based on a first image and a viewpoint transform, wherein the viewpoint transform represents a mapping between pixel locations of the first image and coordinates of a model of a curved display surface; and generating a third image based on the second image and a projection transform, wherein the projection transform represents a mapping between the coordinates of the model of the curved display surface and pixel locations of a projector; wherein the third image is projected upon the curved display surface by the projector.
Embodiments of the method can include one or more of the following features. Some embodiments comprise generating the viewpoint transform. In some embodiments, generating the viewpoint transform comprises: generating the model of the curved display surface. In some embodiments, generating the second image comprises: rendering the second image based on the first image. In some embodiments, rendering the second image comprises: modifying vertices of the first image according to the viewpoint transform. In some embodiments, generating the third image comprises: rendering the third image based on the second image. In some embodiments, rendering the third image comprises: modifying fragments of the second image according to the projection transform.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
The leading digit(s) of each reference numeral used in this specification indicates the number of the drawing in which the reference numeral first appears.
Embodiments provide viewpoint compensation for curved display surfaces in projector-based display systems.
Embodiments of the present disclosure provide a two-step approach that is conceptually illustrated in
Embodiments provide two transforms: a viewpoint transform 410 and a projection transform 408. Viewpoint transform 410 represents a mapping between pixel locations of image template 406 and coordinates of model 404. Projection transform 408 represents a mapping between coordinates of model 404 and pixel locations of projector template 402. Viewpoint transform 410 is used to compensate for viewpoint distortion, while projection transform 408 is used to compensate for projection distortion.
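The two-stage mapping described above can be sketched as a composition of two lookups. The following is a minimal illustrative sketch only; the representation of each transform as a dictionary keyed by integer pixel locations or model coordinates is an assumption made for illustration, not the disclosed format.

```python
# Sketch of the two-transform pipeline: viewpoint transform 410 maps
# image-template pixels to model coordinates; projection transform 408
# maps model coordinates to projector pixels.

def apply_viewpoint_transform(image, viewpoint_transform):
    """Map pixel locations of the input image to model coordinates."""
    return {viewpoint_transform[p]: color for p, color in image.items()}

def apply_projection_transform(surface_image, projection_transform):
    """Map model coordinates to projector pixel locations."""
    return {projection_transform[c]: color for c, color in surface_image.items()}

# A tiny stand-in "image" keyed by pixel location, with stand-in transforms.
image1 = {(0, 0): "red", (1, 0): "blue"}
viewpoint_transform = {(0, 0): (0.0, 0.5), (1, 0): (0.25, 0.5)}       # pixel -> model coord
projection_transform = {(0.0, 0.5): (10, 20), (0.25, 0.5): (14, 20)}  # model coord -> projector pixel

image2 = apply_viewpoint_transform(image1, viewpoint_transform)
image3 = apply_projection_transform(image2, projection_transform)
```

Because the two stages compose, either lookup can be regenerated independently of the other, which is the modularity advantage discussed next.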
One advantage of using two separate transforms 408, 410 is that a viewpoint transform 410 for a particular curved display surface can be modified or exchanged for another viewpoint transform 410 for that particular curved display surface without changing the projection transform 408 for that curved display surface. Therefore, the viewpoint compensation can be modified without the need for re-generating projection transform 408. In embodiments of the present disclosure, generation of the projection transform 408 is independent of generation of the viewpoint transform 410.
Referring to
Referring to
Projection transform module 510 generates a projection transform 408 (step 604). In particular, calibration module 528 generates a calibration mapping 530. For example, projector 512 can project a digital calibration image upon curved display surface 514. A digital representation of the projection of the digital calibration image can be captured, for example by a digital camera. Calibration module 528 then generates calibration mapping 530 between pixels of the digital representation of the projection and pixels of the digital calibration image. Projection transform 408 represents calibration mapping 530. Conventional techniques can be used to generate projection transform 408.
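The calibration step above pairs pixels of the digital calibration image with pixels of its captured projection. A minimal sketch follows; the use of a structured feature pattern (e.g., a dot grid whose features can be listed in a known order) is an assumption for illustration, not the disclosed calibration technique.

```python
# Illustrative sketch of calibration mapping 530: pair each
# calibration-image feature pixel with the corresponding pixel in the
# camera-captured projection.

def build_calibration_mapping(calibration_pixels, captured_pixels):
    """Pair calibration-image pixels with their captured counterparts.

    Both inputs list feature locations (e.g., dot centers) in the same
    order, as a structured calibration pattern would permit.
    """
    return dict(zip(calibration_pixels, captured_pixels))

calibration_pixels = [(0, 0), (100, 0), (0, 100), (100, 100)]
captured_pixels = [(5, 7), (98, 9), (3, 104), (101, 99)]  # as seen by the camera

mapping = build_calibration_mapping(calibration_pixels, captured_pixels)
```

In practice a dense mapping would be interpolated from such sparse correspondences; that refinement is omitted here.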
Viewpoint module 504 receives an input image I1 (step 606). In some cases, image I1 is generated by image module 502 (step 608). In some cases, render module 516 of image module 502 renders image I1 based on a scene S. For example, scene S can be an OpenGL scene. Render module 516 renders the OpenGL scene to a texture. The texture is image I1. In other cases, image module 502 can generate image I1 in other ways. In some cases, image I1 is simply provided as a bitmap image or the like. Image I1 conforms to image template 406.
Viewpoint module 504 generates a second image I2 based on image I1 and viewpoint transform 410 (step 610). Viewpoint transform 410 represents a mapping between pixel locations of image I1 and coordinates of model 404 of curved display surface 514. For example, render module 518 of viewpoint module 504 renders image I2 based on image I1 and viewpoint transform 410. In some embodiments, viewpoint transform 410 is implemented by vertex shading during rendering. In these embodiments, render module 518 includes a vertex shader 520 that modifies the vertices of image I1 during rendering according to viewpoint transform 410. For example, OpenGL or the like can be used for the rendering. The rendering can be performed by a graphics processing unit of a video card or the like.
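The vertex-shading step can be emulated in plain Python as follows. This is a sketch of the idea only; the quad mesh and the particular warp function are illustrative assumptions, not the disclosed shader.

```python
# Emulation of what vertex shader 520 does on the GPU: move each mesh
# vertex of image I1 to its model-surface coordinate per viewpoint
# transform 410.

def shade_vertices(vertices, viewpoint_transform):
    """Apply the viewpoint transform to each mesh vertex."""
    return [viewpoint_transform(v) for v in vertices]

# Vertices of a quad covering image I1 in normalized [0, 1] coordinates.
quad = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# Stand-in viewpoint transform: a horizontal contraction of the kind
# that might counter apparent stretching on a curved surface.
def viewpoint_transform(v):
    x, y = v
    return (x * 0.8 + 0.1, y)

warped = shade_vertices(quad, viewpoint_transform)
```

In an OpenGL implementation the same per-vertex function would run in GLSL, with the warped quad then rasterized to produce image I2.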
Projection module 506 generates a third image I3 based on image I2 and projection transform 408 (step 612). Projection transform 408 represents a mapping between the coordinates of model 404 of curved display surface 514 and pixel locations of projector 512. For example, render module 522 of projection module 506 renders image I3 based on image I2 and projection transform 408. In some embodiments, projection transform 408 is implemented by fragment shading during rendering. In these embodiments, render module 522 includes a fragment shader 524 that modifies the fragments of image I2 during rendering according to projection transform 408. For example, OpenGL or the like can be used for the rendering. The rendering can be performed by a graphics processing unit of a video card or the like.
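The fragment-shading step can be sketched as a per-projector-pixel sampling of image I2. The inverse-mapping dictionary below is an illustrative stand-in for the lookup texture a GPU implementation might use; it is not the disclosed data structure.

```python
# Emulation of what fragment shader 524 does: for each projector pixel,
# sample image I2 at the model-surface coordinate that the pixel lands
# on, per projection transform 408.

def shade_fragments(projector_pixels, projection_lookup, image2):
    """Produce image I3 by sampling I2 once per projector pixel."""
    image3 = {}
    for pixel in projector_pixels:
        surface_coord = projection_lookup[pixel]      # where this pixel lands
        image3[pixel] = image2.get(surface_coord, "black")  # black outside I2
    return image3

image2 = {(0.25, 0.5): "green", (0.75, 0.5): "white"}
projection_lookup = {(10, 20): (0.25, 0.5), (11, 20): (0.75, 0.5)}

image3 = shade_fragments([(10, 20), (11, 20)], projection_lookup, image2)
```

Iterating over output pixels (rather than input pixels) mirrors how fragment shading works: every projector pixel receives exactly one color, with no holes.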
Projector 512 projects image I3 upon curved display surface 514 (step 614). Image I3 conforms to projector template 402.
In some embodiments, projector 512 is implemented as multiple projectors, for example in order to produce a tiled display where each projector projects a different portion of an image, to produce a super-bright display where the projections overlap in order to obtain a very bright projection, and the like.
Referring to
Each viewpoint module 704 receives the same input image I1 (step 806). In some cases, image I1 is generated by image module 702 (step 808), for example as described above. Each of viewpoint modules 704A-704N generates a respective second image I2A-I2N based on image I1 and viewpoint transform 718 (step 810), for example as described above. For increased efficiency, each viewpoint module 704 can operate upon only that portion of image I2 to be projected by the corresponding projector 712.
Each of projection modules 706A-706N generates a respective third image I3A-I3N based on the respective second image I2A-I2N and the respective projection transform 720A-720N (step 812), for example as described above. Each of projectors 712A-712N projects the respective image I3A-I3N upon curved display surface 714 to form a single composite image (step 814).
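The per-projector partitioning mentioned above can be sketched as follows. The even column-wise split is an assumed tiling chosen for illustration; the disclosed systems could partition the image in other ways (including overlapping regions for blending).

```python
# Illustrative sketch of dividing image I1 among N projectors so that
# each viewpoint module operates on only its projector's portion.

def split_for_projectors(image1, num_projectors):
    """Partition the pixel columns of I1 among the projectors."""
    width = max(x for x, _ in image1) + 1
    cols_per = width // num_projectors
    portions = [dict() for _ in range(num_projectors)]
    for (x, y), color in image1.items():
        index = min(x // cols_per, num_projectors - 1)  # clamp last tile
        portions[index][(x, y)] = color
    return portions

# A 4-pixel-wide, 1-pixel-tall stand-in image split across 2 projectors.
image1 = {(x, 0): f"c{x}" for x in range(4)}
portions = split_for_projectors(image1, 2)
```

Each portion would then flow through its own viewpoint module 704 and projection module 706 before being projected to form the composite image.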
Various embodiments can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Apparatus can be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions by operating on input data and generating output. Embodiments can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program can be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language can be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Generally, a computer will include one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks. Any of the foregoing can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of this disclosure. Accordingly, other implementations are within the scope of the following claims.