This application is related to U.S. patent application Ser. No. 11/455,148, filed on the same date as this disclosure, and entitled SYSTEM AND METHOD FOR DISPLAYING IMAGES; U.S. patent application Ser. No. 11/455,303, filed on the same date as this disclosure, and entitled SYSTEM AND METHOD FOR GENERATING SCALE MAPS; U.S. patent application Ser. No. 11/455,149, filed on the same date as this disclosure, and entitled SYSTEM AND METHOD FOR PROJECTING MULTIPLE IMAGE STREAMS; and U.S. patent application Ser. No. 11/455,306, filed on the same date as this disclosure, and entitled MESH FOR RENDERING AN IMAGE FRAME.
Many cameras that capture images have planar image planes to produce planar images. Planar images captured by such cameras may be reproduced onto planar surfaces. When a viewer views a planar image that has been reproduced onto a planar surface, the viewer generally perceives the image as being undistorted, assuming no keystone distortion, even when the viewer views the image at oblique angles to the planar surface of the image. If a planar image is reproduced onto a non-planar surface (e.g., a curved surface) without any image correction, the viewer generally perceives the image as being distorted.
Display systems that reproduce images in tiled positions may provide immersive visual experiences for viewers. While tiled displays may be constructed from multiple, abutting display devices, these tiled displays generally produce undesirable seams between the display devices that may detract from the experience. In addition, because these display systems generally display planar images, the tiled images may appear distorted and unaligned if displayed on a non-planar surface without correction. In addition, the display of the images with multiple display devices may be inconsistent because of the display differences between the devices.
One form of the present invention provides a method performed by a processing system and including determining at least first and second distances between a first pixel location having a first pixel value in a first image frame and first and second edges of the first image frame, respectively, and determining a first factor that is proportional to a first product of the first and the second distances and configured to attenuate the first pixel value in response to the first pixel value being displayed by a first projector on a display screen such that the first pixel value overlaps with a second pixel value displayed by a second projector.
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., may be used with reference to the orientation of the Figure(s) being described. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
I. Generation and Display of Partially Overlapping Frames onto a Surface
Processing system 101 receives streams of image frames 102(1) through 102(M) where M is greater than or equal to one (referred to collectively as image data 102) using any suitable wired or wireless connections including any suitable network connection or connections. The streams of image frames 102(1) through 102(M) may be captured and transmitted by attached or remote image capture devices (not shown) such as cameras, provided by an attached or remote storage medium such as a hard-drive, a DVD or a CD-ROM, or otherwise accessed from one or more storage devices by processing system 101.
In one embodiment, a first image capture device captures and transmits image frames 102(1), a second image capture device captures and transmits image frames 102(2), and an Mth image capture device captures and transmits image frames 102(M), etc. The image capture devices may be arranged in one or more remote locations and may transmit the streams of image frames 102(1) through 102(M) across one or more networks (not shown) using one or more network connections.
In one embodiment, the number M of streams of image frames 102 is equal to the number N of projectors 112. In other embodiments, the number M of streams of image frames 102 is greater than or less than the number N of projectors 112.
Processing system 101 processes the streams of image frames 102(1) through 102(M) and generates projected images 114(1) through 114(N) (referred to collectively as projected images 114). Image frames 102 may be in any suitable video or still image format such as MPEG-2 (Moving Picture Experts Group), MPEG-4, JPEG (Joint Photographic Experts Group), JPEG 2000, TIFF (Tagged Image File Format), BMP (bit mapped format), RAW, PNG (Portable Network Graphics), GIF (Graphic Interchange Format), XPM (X PixMap), SVG (Scalable Vector Graphics), and PPM (Portable Pixel Map). Image display system 100 displays images 114 in at least partially overlapping positions (i.e., in a tiled format) on a display surface 116.
Image frame buffer 104 receives and buffers image frames 102. Frame generator 108 processes buffered image frames 102 to form image frames 110. In one embodiment, frame generator 108 processes a single stream of image frames 102 to form one or more image frames 110. In other embodiments, frame generator 108 processes multiple streams of image frames 102 to form one or more image frames 110.
Frame generator 108 processes image frames 102 to define image frames 110(1) through 110(N) (collectively referred to as frames 110) using respective geometric meshes 126(1) through 126(N) (collectively referred to as geometric meshes 126) and respective photometric correction information 128(1) through 128(N) (collectively referred to as photometric correction information 128). Frame generator 108 provides frames 110(1) through 110(N) to projectors 112(1) through 112(N), respectively.
Projectors 112(1) through 112(N) store frames 110(1) through 110(N) in image frame buffers 113(1) through 113(N) (collectively referred to as image frame buffers 113), respectively. Projectors 112(1) through 112(N) project frames 110(1) through 110(N), respectively, onto display surface 116 to produce projected images 114(1) through 114(N) for viewing by one or more users. Projectors 112 project frames 110 such that each displayed image 114 at least partially overlaps with another displayed image 114.
Projected images 114 are defined to include any combination of pictorial, graphical, or textural characters, symbols, illustrations, or other representations of information. Projected images 114 may be still images, video images, or any combination of still and video images.
Display surface 116 includes any suitable surface configured to display images 114. In one or more embodiments described herein, display surface 116 forms a developable surface. As used herein, the term developable surface is defined as a surface that is formed by folding, bending, cutting, and otherwise manipulating a planar sheet of material without stretching the sheet. A developable surface may be planar, piecewise planar, or non-planar. A developable surface may form a shape such as a cylindrical section or a parabolic section. As described in additional detail below, image display system 100 is configured to display projected images 114 onto a developable surface without geometric distortion.
By displaying images 114 onto a developable surface, images 114 are projected to appear as if they have been “wallpapered” to the developable surface, with no pixels of images 114 being stretched. The wallpaper-like appearance of images 114 on a developable surface appears to a viewer to be undistorted.
A developable surface can be described by the motion of a straight line segment through three-dimensional (3D) space.
When planar surface 130 is curved into a non-planar developable surface 140 without stretching as indicated by an arrow 136, the straight endpoint curves 132 and 134 become curved endpoint curves 142 and 144 in the example of
Image display system 100 may be configured to construct a two-dimensional (2D) coordinate system corresponding to planar surface 130 from which non-planar surface 140 was created using a predetermined arrangement of identifiable points in fiducial marks 118 on display surface 116. The geometry of the predetermined arrangement of identifiable points may be described according to distance measurements between the identifiable points. The distances between a predetermined arrangement of points may all be scaled by a single scale factor without affecting the relative geometry of the points, and hence the scale of the distances between the points on display surface 116 does not need to be measured. In the embodiment shown in
Non-planar developable display surfaces may allow a viewer to feel immersed in the projected scene. In addition, such surfaces may fill most or all of a viewer's field of view which allows scenes to be viewed as if they are at the same scale as they would be seen in the real world.
Image display system 100 attempts to display images 114 on display surface 116 with a minimum amount of distortion, smooth brightness levels, and a smooth color gamut. To do so, frame generator 108 applies geometric and photometric correction to image frames 102 using geometric meshes 126 and photometric correction information 128, respectively, in the process of rendering frames 110. Geometric correction is described in additional detail in Section II below, and photometric correction is described in additional detail in Section III below.
Frame generator 108 may perform any suitable image decompression, color processing, and conversion on image frames 102. For example, frame generator 108 may convert image frames 102 from the YUV-4:2:0 format of an MPEG2 video stream to an RGB format. In addition, frame generator 108 may transform image frames 102 using a matrix multiply to translate, rotate, or scale image frames 102 prior to rendering. Frame generator 108 may perform any image decompression, color processing, color conversion, or image transforms prior to rendering image frames 102 with geometric meshes 126 and photometric correction information 128.
Calibration unit 124 generates geometric meshes 126 and photometric correction information 128 using images 123 captured by at least one camera 122 during a calibration process. Camera 122 may be any suitable image capture device configured to capture images 123 of display surface 116. Camera 122 captures images 123 such that the images include fiducial marks 118 (shown as fiducial marker strips 118A and 118B in
In one embodiment, camera 122 includes a single camera configured to capture images 123 that include the entirety of display surface 116. In other embodiments, camera 122 includes multiple cameras each configured to capture images 123 that include a portion of display surface 116, where the combined images 123 of the multiple cameras include the entirety of display surface 116.
Without photometric correction, regions of overlap between images 114 may appear brighter than non-overlapping regions. In addition, variations between projectors 112 may result in variations in brightness and color gamut between projected images 114(1) through 114(6).
In addition, frame generator 108 may smooth any variations in brightness and color gamut between projected images 114(1) through 114(6) by applying photometric correction as described in Section III below. For example, frame generator 108 may smooth variations in brightness in overlapping regions such as an overlapping region 150 between images 114(1) and 114(2), an overlapping region 152 between images 114(2), 114(3), and 114(4), and an overlapping region 154 between images 114(3), 114(4), 114(5), and 114(6). Frame generator 108 may smooth variations in brightness between images 114 displayed with different projectors 112.
Processing system 101 includes hardware, software, firmware, or a combination of these. In one embodiment, one or more components of processing system 101 are included in a computer, computer server, or other microprocessor-based system capable of performing a sequence of logic operations. In addition, processing can be distributed throughout the system with individual portions being implemented in separate system components, such as in a networked or multiple computing unit environment.
Image frame buffer 104 includes memory for storing one or more image frames of the streams of image frames 102 for one or more image frames 110. Thus, image frame buffer 104 constitutes a database of one or more image frames 102. Image frame buffers 113 also include memory for storing frames 110. Although shown as separate frame buffers 113 in projectors 112 in the embodiment of
It will be understood by a person of ordinary skill in the art that functions performed by processing system 101, including frame generator 108 and calibration unit 124, may be implemented in hardware, software, firmware, or any combination thereof. The implementation may be via one or more microprocessors, graphics processing units (GPUs), programmable logic devices, or state machines. In addition, functions of frame generator 108 and calibration unit 124 may be performed by separate processing systems in other embodiments. In such embodiments, geometric meshes 126 and photometric correction information 128 may be provided from calibration unit 124 to frame generator 108 using any suitable wired or wireless connection or any suitable intermediate storage device. Components of the present invention may reside in software on one or more computer-readable mediums. The term computer-readable medium as used herein is defined to include any kind of memory, volatile or non-volatile, such as floppy disks, hard disks, CD-ROMs, flash memory, read-only memory, and random access memory.
II. Geometric Calibration and Correction of Displayed Images
Image display system 100 applies geometric correction to image frames 102 as part of the process of rendering image frames 110. As a result of the geometric correction, image display system 100 displays images 114 on display surface 116 using image frames 110 such that viewers may view images as being undistorted for all viewpoints of display surface 116.
Image display system 100 generates geometric meshes 126 as part of a geometric calibration process. Image display system 100 determines geometric meshes 126 using predetermined arrangements between points of fiducial marks 118. Image display system 100 determines geometric meshes 126 without knowing the shape or any dimensions of display surface 116 other than the predetermined arrangements of points of fiducial marks 118.
Frame generator 108 renders image frames 110 using respective geometric meshes 126 to unwarp, spatially align, and crop frames 102 into shapes that are suitable for display on display surface 116. Frame generator 108 renders image frames 110 to create precise pixel alignment between overlapping images 114 in the overlap regions (e.g., regions 150, 152, and 154 in
In the following description of generating and using geometric meshes 126, four types of 2D coordinate systems will be discussed. First, a projector domain coordinate system, $P_i$, represents coordinates in frame buffer 113 of the ith projector 112. Second, a camera domain coordinate system, $C_j$, represents coordinates in images 123 captured by the jth camera 122. Third, a screen domain coordinate system, $S$, represents coordinates in the plane formed by flattening display surface 116. Fourth, an image frame domain coordinate system, $I$, represents coordinates within image frames 102 to be rendered by frame generator 108.
Image display system 100 performs geometric correction on image frames 102 to conform images 114 from image frames 102 to display surface 116 without distortion. Accordingly, in the case of a single input image stream, the image frame domain coordinate system, $I$, of image frames 102 may be considered equivalent to the screen domain coordinate system, $S$, up to a scale in each of the two dimensions. By normalizing both coordinate systems to the range [0, 1], the image frame domain coordinate system, $I$, becomes identical to the screen domain coordinate system, $S$. Therefore, if mappings between the screen domain coordinate system, $S$, and each projector domain coordinate system, $P_i$, are determined, then the mappings from each projector domain coordinate system, $P_i$, to the image frame domain coordinate system, $I$, may be determined.
Let $P_i(\vec{s})$ be a continuous-valued function that maps 2D screen coordinates $\vec{s} = (s_x, s_y)$ in $S$ to coordinates $\vec{p}_i = (p_{x,i}, p_{y,i})$ of the ith projector 112's frame buffer 113. $P_i$ is constructed as a composition of two coordinate mappings as shown in Equation 1:

$\vec{p}_i = P_i(\vec{s}) = C_{i,j}(S_j(\vec{s}))$  (1)

where $S_j(\vec{s})$ is a 2D mapping from display surface 116 to the image pixel locations of the jth observing camera 122, and $C_{i,j}(\vec{c}_j)$ is a 2D mapping from image pixel locations $\vec{c}_j = (c_{x,j}, c_{y,j})$ of the jth observing camera 122 to the ith projector 112's frame buffer 113. If all $S_j$ and $C_{i,j}$ are invertible mappings, the mappings from projector frame buffers to the flattened screen are constructed similarly from the inverses of the $S_j$ and $C_{i,j}$ mappings, as shown in Equation 2:

$\vec{s} = P_i^{-1}(\vec{p}_i) = S_j^{-1}(C_{i,j}^{-1}(\vec{p}_i))$  (2)

Hence, all coordinate transforms required by the geometric correction can be derived from the $S_j$ and $C_{i,j}$ mappings.
To handle a broad set of screen shapes, image display system 100 constructs generalized, non-parametric forms of these coordinate mappings. Specifically, for each mapping, image display system 100 uses a mesh-based coordinate transform derived from a set of point correspondences between the coordinate systems of interest.
Given a set of point correspondences between two 2D domains A and B, image display system 100 maps a point location $\vec{a}$ in A to a coordinate $\vec{b}$ in B as follows. Image display system 100 applies Delaunay triangulation to the points in A to create a first triangle mesh and then constructs the corresponding triangle mesh (according to the set of point correspondences) in B. To determine a point $\vec{b}$ that corresponds to a point $\vec{a}$, image display system 100 finds the triangle in the triangle mesh in domain A that contains $\vec{a}$, or whose centroid is closest to it, and computes the barycentric coordinates of $\vec{a}$ with respect to that triangle. Image display system 100 then selects the corresponding triangle from the triangle mesh in domain B and computes $\vec{b}$ as the point having these same barycentric coordinates with respect to the triangle in B. Image display system 100 determines a point $\vec{a}$ that corresponds to a point $\vec{b}$ similarly.
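A minimal sketch of this mesh-based mapping using scipy's Delaunay triangulation follows; the point sets, the correspondence between them, and the query point are illustrative placeholders rather than measured calibration data.

```python
import numpy as np
from scipy.spatial import Delaunay

def map_point(a, points_A, points_B, tri_A):
    """Map point `a` from domain A to domain B using corresponding triangle
    meshes. `points_A` and `points_B` are (N, 2) arrays of corresponding
    points; `tri_A` is the Delaunay triangulation of `points_A`."""
    simplex = tri_A.find_simplex(a)
    if simplex == -1:
        # Point falls outside the mesh: use the triangle whose centroid is closest.
        centroids = points_A[tri_A.simplices].mean(axis=1)
        simplex = int(np.argmin(np.linalg.norm(centroids - a, axis=1)))
    # Barycentric coordinates of `a` with respect to the chosen triangle.
    T = tri_A.transform[simplex]
    bary2 = T[:2].dot(a - T[2])
    bary = np.append(bary2, 1.0 - bary2.sum())
    # Apply the same barycentric weights to the corresponding triangle in B.
    return bary.dot(points_B[tri_A.simplices[simplex]])

# Illustrative correspondences between two 2D domains.
points_A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
points_B = points_A * 200.0 + 10.0   # e.g., camera pixels for screen coordinates
tri_A = Delaunay(points_A)
print(map_point(np.array([0.25, 0.25]), points_A, points_B, tri_A))  # ~[60, 60]
```

scipy's Delaunay object exposes the barycentric transform directly, which keeps the per-point lookup short.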
The geometric meshes used to perform coordinate mappings have the advantage of allowing construction of coordinate mappings from point correspondences where the points in either domain may be in any arrangement other than collinear. This in turn allows greater flexibility in the calibration methods used for measuring the locations of the points involved in the point correspondences. For example, the points on display surface 116 may be located entirely outside the area used to display projected images 114, so that these points do not interfere with displayed imagery, and may be left in place while the display is in use. Other non-parametric representations of coordinate mappings, such as 2D lookup tables, are generally constructed from 2D arrays of point correspondences. In many instances it is not convenient to use 2D arrays of points. For example, a 2D array of points on display surface 116 may interfere with displayed imagery 114, so that these points may need to be removed after calibration and prior to use of the display. Also, meshes may more easily allow for spatial variation in the fineness of the coordinate mappings, so that more point correspondences and triangles may be used in display surface areas that require finer calibration. Finer mesh detail may be localized independently to specific 2D regions within meshes by using more point correspondences in these regions, whereas increased fineness in the rows or columns of a 2D lookup table generally affects a coordinate mapping across the entire width or height extent of the mapping. In many instances, a mesh-based representation of a coordinate mapping may also be more compact, and hence require less storage and less computation during the mapping process, than a similarly accurate coordinate mapping stored in another non-parametric form such as a lookup table.
To determine the correct projector frame buffer contents needed to render the input image like wallpaper on the screen, image display system 100 applies Equation 2 to determine the screen location $\vec{s}$ that each projector pixel $\vec{p}$ lights up. If $\vec{s}$ is normalized to [0, 1] in both dimensions, then this is also the coordinate for the input image pixel whose color should be placed in $\vec{p}$, since wallpapering the screen effectively equates the 2D flattened screen coordinate system $S$ with the image coordinate system $I$. For each projector 112, image display system 100 uses Equation 2 to compute the image coordinates corresponding to each location on a sparsely sampled rectangular grid (e.g., a 20×20 grid) in the screen coordinate space. Graphics hardware fills the projector frame buffer via texture mapping image interpolation. Hence, the final output of the geometric calibration is one triangle mesh 126 per projector 112, computed on the rectangular grid.
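The grid-sampling step can be sketched compactly with scipy's LinearNDInterpolator, which internally performs the same Delaunay-plus-barycentric interpolation. The correspondence arrays below are placeholders standing in for the measured screen-to-camera and camera-to-projector point pairs, and the forward sampling direction (a screen grid mapped through Equation 1) is one possible realization; either direction yields corresponding screen/projector vertex pairs for the render mesh.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Placeholder point correspondences (would come from the calibration captures).
screen_pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
camera_pts = screen_pts * 800.0 + 50.0          # S_j: screen -> camera pixels
projector_pts = camera_pts * 1.2 - 30.0         # C_ij: camera -> projector pixels

S_j = LinearNDInterpolator(screen_pts, camera_pts)       # mesh-based S_j
C_ij = LinearNDInterpolator(camera_pts, projector_pts)   # mesh-based C_ij

# Sample a sparse rectangular grid (e.g., 20x20) in normalized screen space and
# map each vertex through P_i(s) = C_ij(S_j(s)); the resulting pairs of screen
# and projector vertices form the per-projector render mesh.
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid_s = np.column_stack([gx.ravel(), gy.ravel()])
grid_p = C_ij(S_j(grid_s))
print(grid_p.shape)   # (400, 2) projector coordinates for the 20x20 screen grid
```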
Because the method just described includes a dense mapping to the physical screen coordinate system, it corrects for image distortion caused not only by screen curvature, but also by the projector lenses. Furthermore, the lens distortion of the observing camera(s) 122, which is introduced by interposing their coordinate systems between those of the projectors and the screen, does not need to be calibrated and corrected. In fact, the method allows use of cameras 122 with extremely wide angle lenses, without any need for camera image undistortion. Because of this, image display system 100 may be calibrated with a single, wide-angle camera 122. This approach can even be used to calibrate full 360 degree displays, by placing a conical mirror in front of the camera lens to obtain a panoramic field-of-view.
Methods of performing geometric correction will now be described in additional detail with reference to the embodiments of
The methods of
In the embodiments described below, geometric meshes 126 will be described as triangle meshes where each triangle mesh forms a set of triangles where each triangle is described with a set of three coordinate locations (i.e., vertices). Each triangle in a triangle mesh corresponds to another triangle (i.e., a set of three coordinate locations or vertices) in another triangle mesh from another domain. Accordingly, corresponding triangles in two domains may be represented by six coordinate locations—three coordinate locations in the first domain and three coordinate locations in the second domain.
In other embodiments, geometric meshes 126 may be polygonal meshes with polygons with z sides, where z is greater than or equal to four. In these embodiments, corresponding polygons in two domains may be represented by 2z ordered coordinate locations—z ordered coordinate locations in the first domain and z ordered coordinate locations in the second domain.
In
Calibration unit 124 also generates camera-to-projector triangle meshes for each projector 112 as indicated in a block 204. In particular, calibration unit 124 generates a second triangle mesh in the camera domain and a corresponding triangle mesh in the projector domain for each projector 112. Calibration unit 124 generates these triangle meshes from known pattern sequences displayed by projectors 112 and a set of images 123 captured by camera 122 viewing display surface 116 while these known pattern sequences are projected by projectors 112.
Calibration unit 124 generates a screen-to-projector triangle mesh, also referred to as geometric mesh 126, for each projector 112 as indicated in a block 206. Calibration unit 124 generates geometric meshes 126 such that each geometric mesh 126 includes a set of points that are associated with a respective projector 112. Calibration unit 124 identifies the set of points for each projector 112 using the screen-to-camera triangle meshes and the camera-to-projector triangle meshes as described in additional detail below with reference to
Referring to
In
Calibration unit 124 locates fiducial marks 118 in image 123A as indicated in a block 214. Calibration unit 124 locates fiducial marks 118 to identify points that are located according to a predetermined arrangement on display surface 116. For example, where fiducial marks 118 form a black and white checkerboard pattern as in the example shown in
In one embodiment, calibration unit 124 assumes the center of image 123A is inside the region of display surface 116 to be used for display, where this region is at least partially bounded by strips of fiducials marks 118, and where the region contains no fiducial marks 118 in its interior. The boundary of the region along which fiducial marks 118 appear may coincide with the boundary of display surface 116, or may fall entirely or partially in the interior of display surface 116.
Calibration unit 124 begins searching from the center of camera image 123A going upward for the lowest detected corner. Referring back to fiducial marker strip 118A in
Calibration unit 124 searches left from the interior corner for successive corners along fiducial marker strip 118A at the step distance (estimating the horizontal pattern step to be equal to the vertical pattern step), plus or minus a tolerance, until no more corners are detected in the expected locations. In traversing the image of the strip of fiducial marker strip 118A, calibration unit 124 predicts the location of the next corner in sequence by extrapolating using the pattern step to estimate the 2D displacement in camera image 123A from the previous corner to the next corner. By doing so, calibration unit 124 may follow accurately the smooth curve of the upper strip of fiducial marks 118 which appears in image 123A.
Calibration unit 124 then returns to the first fiducial location and continues the search to the right in a manner analogous to that described for searching to the left. Calibration unit 124 subsequently returns to the center of camera image 123A, and searches downward to locate a first corner in fiducial marks 118B. This corner is assumed to be on the top row of fiducial marker strip 118B. The procedure used for finding all corners in upper fiducial strip 118A is then carried out in an analogous way for the lower strip, this time using the corners in the row of fiducial strip 118B below the row containing the first detected corner. Searches to the left and right are carried out as before, and locations of all corners in the middle row of fiducial strip 118B are stored.
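An illustrative sketch of this strip-following step follows, assuming the checkerboard corners have already been detected by a separate corner detector and are supplied as a point list; the tolerance value, the synthetic corner data, and the follow_strip helper are assumptions for illustration only.

```python
import numpy as np

def follow_strip(corners, start, step, tol=5.0):
    """Walk a fiducial strip by repeatedly extrapolating the pattern step from
    the previous corner and accepting the nearest detected corner within `tol`
    pixels of the predicted location. `corners` is an (N, 2) array of detected
    checkerboard-corner positions in the camera image."""
    found = [np.asarray(start, dtype=float)]
    step = np.asarray(step, dtype=float)
    while True:
        predicted = found[-1] + step
        dists = np.linalg.norm(corners - predicted, axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] > tol:
            break                       # no corner at the expected location
        nxt = corners[nearest]
        step = nxt - found[-1]          # update the step to follow a curved strip
        found.append(nxt)
    return np.array(found)

# Illustrative detected corners lying along a gently curved horizontal strip.
xs = np.arange(0, 200, 20, dtype=float)
corners = np.column_stack([xs, 0.001 * (xs - 100.0) ** 2])
print(follow_strip(corners, start=corners[0], step=np.array([20.0, 0.0])))
```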
In
Referring to
Calibration unit 124 determines screen-to-camera triangle meshes using the set of correspondences 308 as indicated in a block 218. The screen-to-camera triangle meshes are used to map screen domain (S) 302 to camera domain (C) 312 and vice versa. Calibration unit 124 determines screen-to-camera triangle meshes using the method illustrated in
Referring to
Calibration unit 124 constructs a second triangle mesh in a second domain that corresponds to the first triangle mesh using a set of point correspondences as indicated in a block 224. Referring to
Calibration unit 124 uses the set of point correspondences 308 to ensure that triangles in triangle mesh 314 correspond to triangles in triangle mesh 304. For example, points 300A, 300B, and 300C correspond to points 310A, 310B, and 310C as shown by the set of point correspondences 308. Accordingly, because calibration unit 124 formed a triangle 304A in triangle mesh 304 using points 300A, 300B, and 300C, calibration unit 124 also forms a triangle 314A in triangle mesh 314 using points 310A, 310B, and 310C. Triangle 314A therefore corresponds to triangle 304A.
In other embodiments, calibration unit 124 may first construct a triangle mesh 314 in camera domain 312 (e.g., by Delaunay triangulation) and then construct triangle mesh 304 in screen domain 302 using the set of point correspondences 308.
In
Camera 122 captures a set of images 123B (shown in
Calibration unit 124 locates points of the known patterns in images 123B as indicated in a block 234. In
Referring to
In one embodiment, calibration unit 124 associates the centers-of-mass of the detected position code sets in the camera location image (i.e., points 400) with the centers-of-mass of the corresponding position code sets (i.e., points 410(i) of the known patterns) provided to frame-buffer 113 of projector 112 to generate the set of point correspondences 308.
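A small sketch of this center-of-mass pairing, assuming each position code set has already been segmented into a binary mask both in the captured camera image and in the pattern written to the projector frame buffer; the masks and their sizes below are synthetic placeholders.

```python
import numpy as np

def center_of_mass(mask):
    """Centroid (x, y) of the nonzero pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

# Synthetic example: one position code set as seen in the camera image and the
# same code set as drawn into the projector frame buffer.
camera_mask = np.zeros((480, 640), dtype=bool)
camera_mask[100:110, 200:210] = True
projector_mask = np.zeros((768, 1024), dtype=bool)
projector_mask[300:316, 500:516] = True

# One entry of the camera-to-projector point correspondence set: the camera
# centroid paired with the corresponding projector centroid.
correspondence = (center_of_mass(camera_mask), center_of_mass(projector_mask))
print(correspondence)
```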
Calibration unit 124 determines camera-to-projector triangle meshes using the set of correspondences 408(i) as indicated in a block 238. The camera-to-projector triangle meshes are used to map camera domain (C) 312 to projector domain (Pi) 412(i) and vice versa. Calibration unit 124 determines camera-to-projector triangle meshes using the method illustrated in
Referring to
Calibration unit 124 constructs a second triangle mesh in a second domain that corresponds to the first triangle mesh using a set of point correspondences as indicated in block 224. Referring to
Calibration unit 124 uses the set of point correspondences 408(i) to ensure that triangles in triangle mesh 414(i) correspond to triangles in triangle mesh 404. For example, points 400A, 400B, and 400C correspond to points 410(i)A, 410(i)B, and 410(i)C as shown by the set of point correspondences 408(i). Accordingly, because calibration unit 124 formed a triangle 404A in triangle mesh 404 using points 400A, 400B, and 400C, calibration unit 124 also forms a triangle 414(i)A in triangle mesh 414(i) using points 410(i)A, 410(i)B, and 410(i)C. Triangle 414(i)A therefore corresponds to triangle 404A.
In other embodiments, calibration unit 124 may first construct triangle mesh 414(i) in projector domain 412(i) and then construct triangle mesh 404 in camera domain 312 using the set of point correspondences 408(i).
Referring back to block 206 of
The method
Referring to
Calibration unit 124 generates a set of point correspondences 508(1) between the set of points 500 in screen domain 302 and a set of points 510(1) in projector domain 412(1) using the screen-to-camera meshes and the camera-to-projector meshes for projector 112(1) as indicated in a block 244.
In
Calibration unit 124 determines barycentric coordinates for the point in the triangle in the screen domain as indicated in a block 254. In the example of
Calibration unit 124 applies the barycentric coordinates to a corresponding triangle in the camera triangle mesh (determined in block 218 of
Calibration unit 124 identifies a triangle in the camera triangle mesh (as determined in block 238 of
Calibration unit 124 determines barycentric coordinates for the point in the triangle in the camera domain as indicated in a block 260. In the example of
Calibration unit 124 applies the barycentric coordinates to a corresponding triangle in the projector triangle mesh (as determined in block 238 of
By performing the method of
Referring back to
In other embodiments, calibration unit 124 may first construct triangle mesh 126(1) in projector domain 412(1), using Delaunay triangulation or other suitable triangulation methods, and then construct triangle mesh 502 in screen domain 312 using the set of point correspondences 508(1).
Referring back to block 208 of
Referring to
Frame generator 108 determines barycentric coordinates for a pixel location in frame buffer 113(1) in the triangle of projector triangle mesh 126(1) as indicated in a block 274. In the example of
Frame generator 108 applies the barycentric coordinates to a corresponding triangle in screen triangle mesh 502 to identify a screen location, and hence a corresponding pixel location in image frame 102, as indicated in a block 276. In the example of
Interpolation of image color between pixel locations in image domain I may be used as part of this process, if the location determined in image frame 102 is non-integral. This technique may be implemented efficiently by using the texture mapping capabilities of many standard personal computer graphics hardware cards. In other embodiments, alternative techniques for warping frames 102 to correct for geometric distortion using geometric meshes 126 may be used, including forward mapping methods that map from coordinates of image frames 102 to pixel locations in projector frame buffers 113 (via screen-to-projector mappings) to select the pixel colors of image frames 102 to be drawn into projector frame buffers 113.
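A sketch of the per-pixel sampling that the texture-mapping hardware effectively performs, assuming the geometric mesh has already been evaluated into a dense map that gives, for every projector frame buffer pixel, the fractional image coordinate to sample; the test image and the map below are placeholders.

```python
import numpy as np

def warp_to_projector(image, map_xy):
    """Fill a projector frame buffer by sampling `image` with bilinear
    interpolation. `map_xy` has shape (H, W, 2) and holds, for each projector
    pixel, the (x, y) image coordinate produced by the screen/projector mesh."""
    h_img, w_img = image.shape[:2]
    x = np.clip(map_xy[..., 0], 0, w_img - 1.001)
    y = np.clip(map_xy[..., 1], 0, h_img - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = (x - x0)[..., None], (y - y0)[..., None]
    top = image[y0, x0] * (1 - fx) + image[y0, x0 + 1] * fx
    bottom = image[y0 + 1, x0] * (1 - fx) + image[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

# Placeholder: a 4x4-pixel projector buffer sampling the center of a test image.
image = np.random.rand(100, 100, 3)
map_xy = np.dstack(np.meshgrid(np.linspace(40, 60, 4), np.linspace(40, 60, 4)))
print(warp_to_projector(image, map_xy).shape)   # (4, 4, 3)
```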
By mapping frames 102 to projector frame buffers 113, frame generator 108 may warp frames 102 into frames 110 to geometrically correct the display of images 114.
Although the above methods contemplate the use of an embodiment of display system 100 with multiple projectors 112, the above methods may also be applied to an embodiment with a single projector 112.
In addition, the above method may be used to perform geometric correction on non-developable display surfaces. Doing so, however, may result in distortion that is visible to a viewer of the display surface.
III. Photometric Calibration and Correction of Displayed Images
Even after geometric correction, the brightness of projected images 114 is higher in screen regions of images 114 that overlap (e.g., regions 150, 152, and 154 shown in
Image display system 100 applies photometric correction to image frames 102 using photometric correction information 128 in the process of rendering image frames 110 to cause smooth brightness levels and color gamut across the combination of projected images 114 on display surface 116. Accordingly, image display system 100 attempts to produce a tiled display system that will not produce visually disturbing color variations in a displayed image 114 for an input image frame 102 of any single solid color. By doing so, image display system 100 may implement photometric correction while ensuring that projected images 114 appear reasonably faithful to the images of image frames 102.
Processing system 101 applies photometric correction by linearizing, scaling, and offsetting geometrically corrected frames 110A (shown in
Methods of performing photometric calibration and correction will now be described in additional detail with reference to the embodiments of
The methods of
In
Camera 122 may be operated in a linear output mode in capturing sets of images 123C and 123D to cause image values to be roughly proportional to the light intensity at the imaging chip of camera 122. If camera 122 does not have a linear output mode, the camera brightness response curve may be measured by any suitable method and inverted to produce linear camera image data.
In other embodiments, calibration unit 124 may cause any other suitable series of images to be projected and captured by camera 122.
Calibration unit 124 determines sets of inverse TRFs 700R, 700G, and 700B (shown in
To determine the sets of inverse TRFs 700R, 700G, and 700B, calibration unit 124 determines TRFs for each pixel location of each color plane of each projector 112 using the respective set of images 123C and geometric meshes 404 and 414(i), where i is between 1 and N. In other embodiments, calibration unit 124 may determine sets of inverse TRFs 700R, 700G, and 700B using other forms of geometric correction data that map camera locations to projector frame buffer locations. Interpolation between the measured gray levels in images 123C may be applied to obtain TRFs with proper sampling along the brightness dimension. Calibration unit 124 then derives the sets of inverse TRFs 700R, 700G, and 700B from the sets of TRFs as described in additional detail below with reference to
The generation of inverse TRFs is described herein for red, green, and blue color planes. In other embodiments, the inverse TRFs may be generated for other sets of color planes.
Calibration unit 124 determines a blend map 702 (shown in
Calibration unit 124 determines an offset map 704 for each projector 112 using a respective set of images 123D and respective geometric meshes 304, 314, 404, and 414(i) as indicated in a block 608. In other embodiments, calibration unit 124 may determine an offset map 704 using other forms of geometric correction data that map screen locations to projector frame buffer locations. Each offset map 704 includes a set of offset factors that are configured to be applied to a frame 110A to generate smooth black levels across the display of an image 114. The process of determining offset maps 704 is described in additional detail below with reference to
Calibration unit 124 determines a scale map 706 for each projector 112 using a respective set of images 123C, respective blend maps 702, and respective geometric meshes 304, 314, 404, and 414(i) as indicated in a block 610. In other embodiments, calibration unit 124 may determine a scale map 706 using other forms of geometric correction data that map screen locations to projector frame buffer locations. Each scale map 706 includes a set of attenuating factors that are configured to be applied to a frame 110A to generate smooth brightness levels across the display of an image 114. By forming each scale map 706 using a respective blend map 702, scale maps 706 may be configured to increase the overall smoothness of the brightness levels across the display of all images 114. The process of determining scale maps 706 is described in additional detail below with reference to
Photometric correction information 128 includes a blend map 702, an offset map 704, and a scale map 706 for each projector 112 in one embodiment. In other embodiments, photometric correction information 128 may omit one or more of a blend map 702, an offset map 704, and a scale map 706.
Referring to
Frame generator 108 applies a scale map 706 and a blend map 702 to a frame 110A as indicated in a block 614. More particularly, frame generator 108 multiplies the pixel values of frame 110A with corresponding scale factors in scale map 706 and blend map 702 as indicated by a multiplicative function 714. In one embodiment, frame generator 108 combines scale map 706 and blend map 702 into a single attenuation map 708 (i.e., by multiplying the scale factors of scale map 706 by the attenuation factors of blend map 702) and applies attenuation map 708 to frame 110A by multiplying the pixel values of frame 110A with corresponding attenuation factors in attenuation map 708. In other embodiments, frame generator 108 applies scale map 706 and blend map 702 separately to frame 110A by multiplying the pixel values of frame 110A with one of corresponding scale factors in scale map 706 or corresponding attenuation factors in blend map 702 and then multiplying the products by the other of the corresponding scale factors in scale map 706 or corresponding attenuation factors in blend map 702. By multiplying pixel values in frame 110A by attenuating factors from scale map 706 and blend map 702, frame generator 108 reduces the brightness of selected pixel values to smooth the brightness levels of a corresponding image 114.
Frame generator 108 applies an offset map 704 to a frame 110 as indicated in a block 616. Frame generator 108 adds the offset factors of offset map 704 to corresponding pixel values in frame 110 as indicated by an additive function 716. By adding offset factors from offset map 704 to pixel values in frame 110, frame generator 108 increases the brightness of selected pixel values to smooth the black level of the combination of projected images 114 across display surface 116.
Frame generator 108 applies sets of inverse TRFs 700R, 700G, and 700B to a frame 110A to generate a frame 110B as indicated in a block 618. Frame generator 108 applies inverse TRF 700R to the red color plane of frame 110A, inverse TRF 700G to the green color plane of frame 110A, and inverse TRF 700B to the blue color plane of frame 110A to convert the pixel values into frame 110B. Frame generator 108 provides frame 110B to a corresponding projector 112.
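A compact sketch of the per-pixel operations of blocks 614 through 618, assuming the maps have been resampled to the frame resolution and using a simple gamma-style exponent as a stand-in for the measured inverse TRFs; the map values and the exponent are illustrative assumptions.

```python
import numpy as np

def photometric_correct(frame, scale_map, blend_map, offset_map, gamma=2.2):
    """Apply scale and blend attenuation (multiplicative), the black-level
    offset (additive), and an inverse tone reproduction function to a
    geometrically corrected frame with values in [0, 1]."""
    attenuation = scale_map * blend_map           # combined attenuation map 708
    corrected = frame * attenuation + offset_map  # blocks 614 and 616
    corrected = np.clip(corrected, 0.0, 1.0)
    return corrected ** (1.0 / gamma)             # stand-in for inverse TRFs 700R/G/B

# Illustrative 2x2 RGB frame with uniform maps.
frame = np.full((2, 2, 3), 0.5)
out = photometric_correct(frame,
                          scale_map=np.full((2, 2, 3), 0.8),
                          blend_map=np.full((2, 2, 3), 0.5),
                          offset_map=np.full((2, 2, 3), 0.02))
print(out)
```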
In one embodiment, the above corrections may be combined into a single 3D lookup table (e.g., look-up tables 806R, 806G, and 806B shown in
Projector 112 projects frame 110B onto display surface 116 to form image 114 as indicated in a block 210. The remaining projectors 112 simultaneously project corresponding frames 110B to form the remaining images 114 on display surface 116 with geometric and photometric correction. Accordingly, the display of images 114 appears spatially aligned and seamless with smooth brightness levels across the combination of projected images 114 on display surface 116.
The generation of the sets of inverse TRFs 700R, 700G, and 700B will be described for red, green, and blue color planes. In other embodiments, the sets of inverse TRFs may be generated for other sets of color planes.
Referring to
Calibration unit 124 generates a set of curves for each color plane of a projector 112 by plotting, for a selected set of pixel locations of a projector 112, gray level values projected by a projector 112 versus projector output brightness values measured by a camera at corresponding pixel locations in the set of converted images 800 as indicated in a block 624. The selected set of pixel locations may include all of the pixel locations in projector 112, a subset of pixel locations in projector 112, or a single pixel location in projector 112.
As shown in
Calibration unit 124 normalizes the domain and range of each curve in each set of curves to [0, 1] as indicated in a block 626, and inverts the domain and range of each curve in each set of curves as indicated in a block 628. The inverted curves form inverse TRFs 700R, 700G, and 700B. In one embodiment, calibration unit 124 generates a separate inverse TRF for each pixel location for each color plane in the domain of projector 112. In other embodiments, calibration unit 124 may average a set of the normalized and inverted curves to form one inverse TRF 700R, 700G, and 700B for all or a selected set of pixel locations in each color plane.
Calibration unit 124 converts the inverted curves into any suitable render format as indicated in a block 630. In one embodiment, calibration unit 124 determines sets of functional fit parameters 808R, 808G, and 808B that best fit each inverse TRF 700R, 700G, and 700B to a functional form such as an exponential function. The fit parameters 808R, 808G, and 808B are later applied together with the functional form by frame generator 108 to render frames 110B to compensate for the non-linearity of the transfer functions of projectors 112.
In other embodiments, calibration unit 124 generates look-up tables 806R, 806G, and 806B from the sets of inverse tone reproduction functions 700R, 700G, and 700B. In one form, calibration unit 124 generates each look-up table 806R, 806G, and 806B as a three dimensional table with a different set of values for corresponding color values at each coordinate location of projector 112 for each color plane according to sets of inverse tone reproduction functions 700R, 700G, and 700B. In other forms, calibration unit 124 generates each look-up table 806R, 806G, and 806B as a one dimensional table with the same set or subset of values for corresponding color values at each coordinate location of projector 112 according to sets of inverse tone reproduction functions 700R, 700G, and 700B. The lookup tables are later applied by frame generator 108 to render frames 110B to compensate for the non-linearity of the transfer functions of projectors 112.
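A sketch of blocks 624 through 630 for one pixel location and one color plane, assuming the camera brightnesses measured for the projected gray-level series are already available; the synthetic gamma-like response and the 256-entry lookup table resolution are assumptions.

```python
import numpy as np

# Projected gray levels (projector inputs) and the camera brightness measured
# at the corresponding pixel location for each gray level (synthetic response).
gray_levels = np.linspace(0.0, 1.0, 17)
measured = gray_levels ** 2.2

# Normalize domain and range to [0, 1] (block 626); here they already are.
trf_in = (gray_levels - gray_levels.min()) / (gray_levels.max() - gray_levels.min())
trf_out = (measured - measured.min()) / (measured.max() - measured.min())

# Invert the curve (block 628): swap domain and range, then resample the
# inverse onto a regular grid so it can be stored as a lookup table (block 630).
lut_inputs = np.linspace(0.0, 1.0, 256)
inverse_trf_lut = np.interp(lut_inputs, trf_out, trf_in)

# Applying the inverse TRF to a desired (linear) output brightness gives the
# frame-buffer value that produces it.
desired = 0.5
print(np.interp(desired, lut_inputs, inverse_trf_lut))   # approx 0.5 ** (1 / 2.2)
```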
Referring to
In an example shown in
Calibration unit 124 generates a blend map 702 for each projector 112 with an attenuation factor for each pixel location located within the overlapping regions as indicated in a block 644. Referring to
In one embodiment, calibration unit 124 generates each attenuation factor to be in the range of zero to one. In this embodiment, calibration unit 124 generates the attenuation factors that correspond to a screen location across all blend maps 702 such that the sum of the attenuation factors corresponding to any screen location is equal to one. Thus, in the example of
In
Calibration unit 124 determines at least two distances between a second pixel location in a second frame 110A and edges of the second frame 110A as indicated in a block 650. In
Calibration unit 124 determines whether there is another overlapping frame 110A as indicated in a block 652. If there is not another overlapping frame 110A, as in the example of
In Equations 3 and 4, i refers to the ith projector 112 and k refers to the number of calculated distances for each pixel location in a respective frame 110A where k is greater than or equal to 2. Equation 3, therefore, is used to calculate each attenuation factor as a ratio of a product of distances calculated in a given frame 110A to a sum of the product of distances calculated in the given frame 110A and the product or products of distances calculated in the other frame or frames 110A that overlap with the given frame 110A.
In addition, $\epsilon_i(\vec{p}_i)$ forms a scalar-valued function over projector coordinates, where $\epsilon_i(\vec{p}_i)$ goes to zero as $\vec{p}_i$ approaches any edge of a projector 112, and where $\epsilon_i(\vec{p}_i)$ and its spatial derivative are not discontinuous anywhere inside the coordinate bounds of the projector 112.
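The displayed Equations 3 and 4 are not reproduced in this text; a form consistent with the description above (each attenuation factor as a ratio of a projector's product of edge distances to the sum of such products over all overlapping projectors) is sketched below, where the sum runs over the frames 110A that overlap at the corresponding screen location and $d_{i,k}$ denotes the kth edge distance.

```latex
% Reconstructed from the verbal description; the original displayed
% Equations 3 and 4 may use different notation.
A_i(\vec{p}_i) = \frac{\epsilon_i(\vec{p}_i)}{\sum_j \epsilon_j(\vec{p}_j)}
  \quad \text{(cf. Equation 3)},
\qquad
\epsilon_i(\vec{p}_i) = \prod_{k} d_{i,k}(\vec{p}_i)
  \quad \text{(cf. Equation 4)}
```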
Using Equations 3 and 4, calibration unit 124 calculates the attenuation factor for location 922(1) in
Calibration unit 124 stores the attenuation factors in respective blend maps 702 as indicated in a block 658. In
In the example of
For pixel locations in regions of frames 110A that, when appearing as part of projected image 114 on display surface 116, do not overlap with any projected images 114 projected by other projectors 112, calibration unit 124 sets the attenuation factors in corresponding regions of blend maps 702 to one or any other suitable value to cause images 114 not to be attenuated in the non-overlapping regions on display surface 116. For example, calibration unit 124 sets the attenuation factors of all pixels in regions 926(1) and 926(2) of blend maps 702(1) and 702(2), respectively, to one so that blend maps 702(1) and 702(2) do not attenuate corresponding pixel locations in frames 110A(1) and 110A(2) and corresponding screen locations on display surface 116.
Referring back to block 652 of
In region 902 of
Likewise in region 904 of
In embodiments where k is equal to four as in the example of
In other embodiments, k is equal to two (i.e., two distances are calculated for each pixel location in a frame 110A). In embodiments where k is equal to two, calibration unit 124 uses the two shortest distances between pixel locations in overlapping frames 110A and the respective edges of frames 110A in Equations 3 and 4. To determine the shortest distances, calibration unit 124 may calculate all four distances between a pixel location in a frame 110A and the respective edges of frame 110A for each of the overlapping frames 110A and select the two shortest distances for each frame 110A for use in Equations 3 and 4.
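A sketch of the distance-ratio computation with k equal to four, for a single screen location covered by an arbitrary number of overlapping frames; the pixel coordinates, frame sizes, and helper names are illustrative and not from the original description.

```python
import numpy as np

def edge_distances(p, width, height):
    """Distances from pixel location p = (x, y) to the left, right, top, and
    bottom edges of a frame of the given size."""
    x, y = p
    return np.array([x, width - 1 - x, y, height - 1 - y], dtype=float)

def blend_factors(pixel_locations, frame_sizes):
    """Attenuation factors for one screen location covered by several
    overlapping frames: each factor is the product of that frame's edge
    distances divided by the sum of the products over all overlapping frames,
    so the factors sum to one (as Equations 3 and 4 are described above)."""
    eps = np.array([np.prod(edge_distances(p, w, h))
                    for p, (w, h) in zip(pixel_locations, frame_sizes)])
    return eps / eps.sum()

# A screen location seen near the right edge of frame 1 and nearer the middle
# of frame 2 (both 1024x768 frame buffers).
factors = blend_factors([(1010.0, 384.0), (40.0, 380.0)],
                        [(1024, 768), (1024, 768)])
print(factors, factors.sum())   # favors frame 2; factors sum to 1.0
```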
Referring to
Calibration unit 124 applies a smoothing function 1004 to black level measurement map 1002 to generate a black level target map 1006 as indicated in a block 664. Calibration unit 124 derives black level target map 1006 from black level measurement map 1002 such that black level target map 1006 is spatially smooth across the display of images 114 on display surface 116.
In one embodiment, smoothing function 1004 represents an analogous version of the constrained gradient-based smoothing method applied to smooth brightness levels in “Perceptual Photometric Seamlessness in Projection-Based Tiled Displays”, A. Majumder and R. Stevens, ACM Transactions on Graphics, Vol. 24., No. 1, pp. 118-139, 2005 which is incorporated by reference herein. Accordingly, calibration unit 124 analogously applies the constrained gradient-based smoothing method described by Majumder and Stevens to the measured black levels in black level measurement map 1002 to generate black level target map 1006 in this embodiment.
In one embodiment of the constrained gradient-based smoothing method, pixels in black level target map 1006 corresponding to locations on display surface 116 covered by projected images 114 are initialized with corresponding pixel values from black level measurement map 1002. All pixels in black level target map 1006 corresponding to locations on display surface 116 not covered by projected images 114 are initialized to a value lower than the minimum of any of the pixels of black level measurement map 1002 corresponding to areas of display surface 116 covered by projected images 114. The pixels of black level target map 1006 are then visited individually in four passes through the image that follow four different sequential orderings. These four orderings are 1) moving down one column at a time starting at the left column and ending at the right column, 2) moving down one column at a time starting at the right column and ending at the left column, 3) moving up one column at a time starting at the left column and ending at the right column, and 4) moving up one column at a time starting at the right column and ending at the left column. During each of the four passes through the image, at each pixel the value of the pixel is replaced by the maximum of the current value of the pixel and the three products formed by multiplying each of the three adjacent pixels already visited on this pass by weighting factors. The weighting factors are less than one and enforce spatial smoothness in the resulting black level target map 1006, with higher weighting factors creating a more smooth result. The weighting factors may be derived in part from consideration of the human contrast sensitivity function, the expected distance of the user from the display surface 116, and the resolution of the projected images 114. This process is repeated independently for each color plane of black level target map 1006.
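A sketch of the four-pass smoothing described above for a single color plane, using a brute-force loop for clarity; the weighting factor, map size, and synthetic measurement below are illustrative choices.

```python
import numpy as np

def smooth_black_target(measurement, covered, weight=0.99):
    """Constrained gradient-based smoothing of a single-color-plane black level
    measurement map: initialize uncovered screen locations below the covered
    minimum, then make four directional passes, replacing each pixel with the
    maximum of itself and three already-visited neighbors scaled by a
    weighting factor (< 1)."""
    target = np.where(covered, measurement, measurement[covered].min() - 1.0)

    def one_pass(img):
        out = img.copy()
        h, w = out.shape
        for c in range(w):              # move down one column at a time
            for r in range(h):
                for dr, dc in ((-1, 0), (0, -1), (-1, -1)):   # visited neighbors
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        out[r, c] = max(out[r, c], out[rr, cc] * weight)
        return out

    # The four orderings are obtained by flipping the map before and after a pass.
    for flip in (lambda a: a,
                 lambda a: a[:, ::-1],
                 lambda a: a[::-1, :],
                 lambda a: a[::-1, ::-1]):
        target = flip(one_pass(flip(target)))
    return target

# Tiny synthetic example: a bright black-level spot inside a covered region.
measurement = np.zeros((8, 8)); measurement[3, 3] = 1.0
covered = np.ones((8, 8), dtype=bool)
print(np.round(smooth_black_target(measurement, covered), 2))
```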
Calibration unit 124 generates an offset map 704 for each projector 112 using black level measurement map 1002, black level target map 1006, and the camera images 123D captured with relatively long exposure time as indicated in a block 666. Calibration unit 124 generates a set of offset values in each offset map 704 by first subtracting values in black level measurement map 1002 from corresponding values in black level target map 1006 to generate sets of difference values. Calibration unit 124 divides each difference value in each set of difference values by the number of projectors 112 that project onto the screen locations that correspond to the respective difference values to generate sets of divided values. Calibration unit 124 interpolates between measured brightnesses at corresponding locations in captured images 123D to determine the projector inputs required to produce the divided values, and these projector inputs are used as the sets of offset values in offset maps 704. That is, at each pixel location in offset map 704, the corresponding location in images 123D is determined, and the measured brightnesses in images 123D for different gray level inputs to the corresponding projector 112 are examined to find the two images 123D whose measured brightnesses at this location bound the corresponding divided value above and below. Interpolation is performed on the projector input gray levels corresponding to these two images 123D to estimate the projector input required to produce the divided value. The estimated projector input is stored at the corresponding location in offset map 704. In other embodiments, calibration unit 124 performs interpolation in other ways, such as by using more than two images 123D.
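A sketch of the offset computation for one location, assuming the measured brightnesses of images 123D at that location have already been collected into a monotonic brightness-versus-gray-level series; all numeric values are illustrative.

```python
import numpy as np

# Measured brightness at this location for each low gray-level input projected
# while capturing images 123D (synthetic, monotonically increasing).
input_gray_levels   = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
measured_brightness = np.array([0.020, 0.028, 0.041, 0.060, 0.085])

# Difference between the smoothed black level target and the black level
# measurement at this screen location, divided among the projectors that
# overlap there.
divided_value = (0.100 - 0.020) / 2        # (Bt - Bm) / number of projectors

# Interpolate the projector input gray level whose measured brightness matches
# the divided value; this becomes the entry in offset map 704.
offset_value = np.interp(divided_value, measured_brightness, input_gray_levels)
print(offset_value)
```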
Referring to
Calibration unit 124 maps measurement values in the set of captured images 123C into the screen coordinate domain using geometric meshes 304, 314, 404, and 414(i) to generate the white level measurement values in white level measurement map 1102. Calibration unit 124 then subtracts black level measurement values in black level measurement map 1002 from corresponding white level measurement values in white level measurement map 1102 to remove the black offset from white level measurement map 1102. Calibration unit 124 next applies blend maps 702 to white level measurement map 1102 by multiplying white level measurement values by corresponding attenuation factors of blend maps 702 to attenuate pixel values in the overlap regions of white level measurement map 1102. Accordingly, white level measurement map 1102 includes a set of white level measurement values from the set of captured images 123C for each screen location on display surface 116 that are adjusted by corresponding black level offset measurements in black level measurement map 1002 and corresponding attenuation factors in blend maps 702.
Calibration unit 124 applies a smoothing function 1104 to white level measurement map 1102 to generate a white level target map 1106 as indicated in a block 674. White level target map 1106 represents a desired, smooth white (maximum brightness) level across the display of images 114 on display surface 116.
In one embodiment, smoothing function 1104 represents the constrained gradient-based smoothing method applied to smooth brightness levels in “Perceptual Photometric Seamlessness in Projection-Based Tiled Displays”, A. Majumder and R. Stevens, ACM Transactions on Graphics, Vol. 24., No. 1, pp. 118-139, 2005 which is incorporated by reference herein. Accordingly, calibration unit 124 applies the constrained gradient-based smoothing method described by Majumder and Stevens to the measured white levels in white level measurement map 1102 to generate white level target map 1106.
In one embodiment of the constrained gradient-based smoothing method, pixels in white level target map 1106 corresponding to locations on display surface 116 covered by projected images 114 are initialized with corresponding pixel values from white level measurement map 1102. All pixels in white level target map 1106 corresponding to locations on display surface 116 not covered by projected images 114 are initialized to a value higher than the maximum of any of the pixels of white level measurement map 1102 corresponding to areas of display surface 116 covered by projected images 114. The pixels of white level target map 1106 are then visited individually in four passes through the image that follow four different sequential orderings. These four orderings are 1) moving down one column at a time starting at the left column and ending at the right column, 2) moving down one column at a time starting at the right column and ending at the left column, 3) moving up one column at a time starting at the left column and ending at the right column, and 4) moving up one column at a time starting at the right column and ending at the left column. During each of the four passes through the image, at each pixel the value of the pixel is replaced by the minimum of the current value of the pixel and the three products formed by multiplying each of the three adjacent pixels already visited on this pass by weighting factors. The weighting factors are greater than one and enforce spatial smoothness in the resulting white level target map 1106, with lower weighting factors creating a more smooth result. The weighting factors may be derived in part from consideration of the human contrast sensitivity function, the expected distance of the user from the display surface 116, and the resolution of the projected images 114. This process is repeated independently for each color plane of white level target map 1106.
Calibration unit 124 generates a scale map 706 for each projector 112 using white level measurement map 1102, white level target map 1106, and black level target map 1006 as indicated in a block 676. Calibration unit 124 generates a set of scale factors in each scale map 706 by first subtracting values in black level target map 1006 from corresponding values in white level target map 1106 to generate sets of difference values. Calibration unit 124 divides each difference value in each set of difference values by corresponding values in white level measurement map 1102 to generate sets of scale factors in scale maps 706.
Calibration unit 124 generates an attenuation map 708 for each projector 112 using a respective scale map 706 and a respective blend map 702 as indicated in a block 678. Calibration unit 124 generates a set of attenuation factors in each attenuation map 708 by multiplying a corresponding set of scale factors from a corresponding scale map 706 by a corresponding set of attenuation factors from a corresponding blend map 702.
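Blocks 676 and 678 amount to a per-pixel division and multiplication. A minimal Python/NumPy sketch follows, again with illustrative names; the small epsilon guard against division by zero where the measured white level vanishes is an assumption, not something stated in the text.

import numpy as np

def make_scale_and_attenuation_maps(white_target, black_target,
                                    white_measurement, blend_map, eps=1e-6):
    # Block 676: the target dynamic range (white target minus black target)
    # divided by the blended, black-subtracted white measurement gives the
    # per-pixel scale factors of scale map 706.
    scale_map = (white_target - black_target) / np.maximum(white_measurement, eps)
    # Block 678: folding the geometric blend factors into the scale factors
    # yields the attenuation map 708.
    attenuation_map = scale_map * blend_map
    return scale_map, attenuation_map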
The derivation of offset maps 704 and attenuation maps 708 will now be described. Let I({right arrow over (s)}) be the three-channel color of an input image 102 to be displayed at screen location {right arrow over (s)}. By Equation 1, this is also the color corresponding to projector coordinate {right arrow over (p)}i=Pi({right arrow over (s)}) in image frame 110A. If it is assumed that the ith projector 112's TRF has been linearized by application of the inverse TRF h^-1(I_i,l) (e.g., by application of the sets of inverse TRFs 700R, 700G, and 700B), where l indicates the color plane in a set of color planes (e.g., RGB), then the projector output color L({right arrow over (p)}i) at pixel location {right arrow over (p)}i is as shown in Equation 5.
L(\vec{p}_i) = \left[ G(\vec{p}_i)\,\bigl(W(\vec{p}_i) - B(\vec{p}_i)\bigr) \right] \cdot I\bigl(P_i(\vec{s})\bigr) + B(\vec{p}_i) \qquad (5)
This is the equation of a line that, over the domain of I=[0, 1], has a minimum value at I=0 equal to the measured black offset B({right arrow over (p)}i) at the screen location corresponding to {right arrow over (p)}i, and a maximum value at I=1 equal to the measured white offset at the screen location corresponding to {right arrow over (p)}i after attenuation by geometric blend function G({right arrow over (p)}i) (e.g., by using the attenuation factors in blend maps 702).
To compensate for the linearity of the projector response, the input image color I is enhanced with an exponential function H (i.e., gamma function 712), as shown in Equation 6.
L(\vec{p}_i) = \left[ G(\vec{p}_i)\,\bigl(W(\vec{p}_i) - B(\vec{p}_i)\bigr) \right] \cdot H(I) + B(\vec{p}_i) \qquad (6)
For N projectors 112 overlapping at screen location {right arrow over (s)} on display surface 116, the expected output color on display surface 116 is obtained by summing Equation 6 across all projectors 112 as shown in Equation 7.
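Equation 7 itself is not reproduced in this text. A plausible reconstruction, obtained by summing Equation 6 over the N projectors whose pixels map to screen location {right arrow over (s)}, is:

L(\vec{s}) = \sum_{i=1}^{N} \left[ G(\vec{p}_i)\,\bigl(W(\vec{p}_i) - B(\vec{p}_i)\bigr) \right] H(I) + \sum_{i=1}^{N} B(\vec{p}_i) \qquad (7)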
For I=0 and I=1, L({right arrow over (s)}) equates to black and white measurement map values B({right arrow over (s)}) and W({right arrow over (s)}), respectively.
The desired projector response at {right arrow over (s)}, defined by black level and white level target maps 1006 and 1106, respectively, computed as described above, is also a line, but with a different slope and intercept as shown in Equation 8.
L(\vec{s}) = H(I)\,\bigl(W_t(\vec{s}) - B_t(\vec{s})\bigr) + B_t(\vec{s}) \qquad (8)
Equations 7 and 8 are brought into agreement by inserting into Equation 7 a scale factor α({right arrow over (p)}i) and offset factor β({right arrow over (p)}i) that are the same at all coordinates {right arrow over (p)}i corresponding to screen location {right arrow over (s)} for all projectors 112 overlapping at screen location {right arrow over (s)} as shown in Equation 9.
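Equation 9 is likewise not reproduced in this text; inserting α({right arrow over (p)}i) and β({right arrow over (p)}i) into Equation 7 as described plausibly gives:

L(\vec{s}) = \sum_{i=1}^{N} \alpha(\vec{p}_i) \left[ G(\vec{p}_i)\,\bigl(W(\vec{p}_i) - B(\vec{p}_i)\bigr) \right] H(I) + \sum_{i=1}^{N} \bigl[ B(\vec{p}_i) + \beta(\vec{p}_i) \bigr] \qquad (9)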
Choosing α({right arrow over (p)}i) and β({right arrow over (p)}i) according to Equations 10 and 11 causes Equations 8 and 9 to be equal.
Intuitively, the value of α({right arrow over (p)}i) at a given screen location is the ratio of the target display dynamic range here (from the smoothed white level target map 1106 (Wt) down to the smoothed black level target map 1006 (Bt)) to the original measured dynamic range of the tiled display after geometric blending has been applied. β({right arrow over (p)}i) distributes the difference between black level target map 1006 Bt and black level measurement map 1002 B equally among projectors 112 overlapping at {right arrow over (s)}. Offset maps 704 used by frame generator 108 are described by β({right arrow over (p)}i), while attenuation maps 708 are described by α({right arrow over (p)}i)*G({right arrow over (p)}i). Because B, Bt, W, and Wt are all in three-channel color, the above method can produce separate results for each color channel.
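Equations 10 and 11 do not appear in this text either. A reconstruction consistent with the intuition above, in which α({right arrow over (p)}i) is the ratio of the target dynamic range to the measured, geometrically blended dynamic range at {right arrow over (s)}, and β({right arrow over (p)}i) splits the black-level difference equally among the N overlapping projectors, would be:

\alpha(\vec{p}_i) = \frac{W_t(\vec{s}) - B_t(\vec{s})}{\sum_{j=1}^{N} G(\vec{p}_j)\,\bigl(W(\vec{p}_j) - B(\vec{p}_j)\bigr)} \qquad (10)

\beta(\vec{p}_i) = \frac{B_t(\vec{s}) - B(\vec{s})}{N} \qquad (11)

Substituting these expressions into Equation 9 recovers Equation 8, as required.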
Application of geometric blending using blend maps 702 during creation of white level measurement map 1102 W({right arrow over (s)}) and prior to the creation of white level target map 1106 Wt({right arrow over (s)}) may result in photometric calibration that is more tolerant of geometric calibration error. A white measurement map created without geometric blending may contain sharp brightness discontinuities at projector overlap region boundaries. In contrast, the method described herein blends projector contributions in overlap regions to produce a relatively smooth white level measurement map 1102 W({right arrow over (s)}) whose differences from uniformity reflect only the intrinsic brightness variations of projectors 112, rather than spatial overlap geometry. Elimination of discontinuities in white level measurement map 1102 (W({right arrow over (s)})) through geometric blending may yield smoother attenuation maps and allow for greater tolerance of geometric calibration imprecision.
IV. Projection of Multiple Image Streams
In one form of the invention, image display system 100 is configured to simultaneously project multiple different streams of image frames onto display surface 116.
In one embodiment, user interface device 1218 is a mouse, a keyboard, or other device that allows a user to enter information into and interact with processing system 101. In one embodiment, display 1220 is a cathode ray tube (CRT) display, flat-panel display, or any other type of conventional display device. In another embodiment, processing system 101 does not include a processing system display 1220. Memory 1202 stores a plurality of different streams 1204(1)-1204(M) (collectively referred to as streams 1204), multimedia framework 1206, and stream processing software modules 1208. In one embodiment, streams 1204 are different video streams (e.g., the image content of each stream 1204 is different than the content of the other streams 1204) with or without associated audio streams. Geometric meshes 126 and photometric correction information 128 are stored in GPUs 1214 and 1216. In one embodiment, processing system 101 processes streams 1204 based on geometric meshes 126, photometric correction information 128, and user input (e.g., stream selection, transformation or modification parameters) entered via user interface device 1218, to generate composite or processed streams 1222(1)-1222(N) (collectively referred to as processed streams 1222), which are provided to projectors 112 for simultaneous projection onto display surface 116. In another embodiment, rather than, or in addition to, relying on user input, processing system 101 is configured to automatically generate stream modification or transformation parameters. In one embodiment, the number M of streams 1204 is equal to the number N of streams 1222. In other embodiments, the number M of streams 1204 is greater than or less than the number N of streams 1222. Processing system 101 is described in further detail below.
In one embodiment, the six different displayed or projected streams 1302 are generated by projecting the four processed streams 1222 with four projectors 112 configured in a tiled arrangement to cover substantially the entire display surface 116. Six different streams 1204 are combined by processing system 101 into the four processed streams 1222 for projection by the four projectors 112. In another embodiment, more or less than four projectors 112 are used to produce the six different streams 1302. In one form of the invention, the display surface 116 is treated by processing system 101 as a single virtual display and multiple-stream content can be shown on the display surface 116 independent of the number of physical projectors 112 making up the display.
The projected streams 1302 can originate from any arbitrary video source. These sources can be local sources that are included in or coupled directly to processing system 101, or remote sources. The streams can arrive at processing system 101 at varying rates and do not need to be synchronized with one another. Live streams can be shown by display system 100 with very low latency.
In one embodiment, the movement and rescaling operations on the projected streams 1302 are performed automatically by processing system 101 in response to a user selecting one of the streams 1302.
In one embodiment, processing system 101 is configured to perform audio transformations on one or more audio streams associated with one or more of the projected streams 1302, such as fading audio in and out, and transforming audio spatially over the speakers of display system 100. In one embodiment, processing system 101 causes audio to be faded in for a selected stream 1302, and causes audio to be faded out for non-selected streams 1302.
In another embodiment of the present invention, processing system 101 is also configured to allow a user to manually reposition and rescale one or more of the projected streams 1302 using user interface 1218, and thereby allow a user to reposition the streams 1302 at any desired locations, and to rescale the streams 1302 to any desired size. In addition, in other embodiments of the invention, more or less than six different streams 1302 are simultaneously projected on surface 116 in any desired arrangement and size, and other emphasis options are available to a user (e.g., increasing the size of two streams 1302 while making four other streams 1302 smaller). In another embodiment, rather than, or in addition to, relying on user input, processing system 101 is configured to automatically generate stream modification or transformation parameters to modify the processed streams 1222 and correspondingly the projected streams 1302. For example, in one form of the invention, processing system 101 is configured to automatically position and scale the streams 1302 based on the number of streams and where the streams 1302 are coming from (such as in a video conferencing application), or based on other factors.
Characteristics or properties of each stream 1302 may be transformed independently by processing system 101. The properties that can be transformed according to one form of the invention include, but are not limited to: (1) Two-dimensional (2D) screen space location and size; (2) three-dimensional (3D) location in the virtual screen space; (3) blending factors; (4) brightness and color properties; and (5) audio properties. In one embodiment, properties of the streams 1302 are transformed automatically by processing system 101 in response to an action from a user, such as selecting one or more of the streams 1302 with user interface device 1218. In another embodiment, a user interacts with processing system 101 via user interface device 1218 and display 1220 to manually modify properties of one or more of the streams 1302.
In one embodiment, processing system 101 is configured to provide unconstrained transformations of the 2D and 3D properties of the streams 1302. 2D transformations allow the streams 1302 to be slid around the display surface 116, similar to how a window can be moved on a standard computer display, without any corresponding movement of the projectors 112. The 3D transformations include translations in depth, rotations, and scaling of the streams 1302.
Other types of image transformations are also implemented in other embodiments. Streams 1302 that overlap on the surface 116 are blended together by processing system 101 in one embodiment. Processing system 101 is configured to allow a user to dynamically adjust blending factors for projected streams 1302. Processing system 101 is also configured to allow a user to dynamically adjust brightness and color characteristics of projected streams 1302, allowing selected streams 1302 to be highlighted or deemphasized as desired. Processing system 101 is also configured to allow a user to perform cropping operations to selected streams 1302. In one embodiment, all transformations can be changed dynamically and independently for each stream 1302. The characteristics of the streams 1302 can be changed in real time while still maintaining the seamless nature of the display. In one form of the invention, processing system 101 is configured to combine one or more of the streams 1302 with non-stream content, such as 3D geometry or models. In a video conferencing application, for example, 2D video streams can be appropriately positioned by processing system 101 in a projected 3D model of a conference room.
In one embodiment, the majority of the runtime computation of processing system 101 is performed by the GPUs 1214 and 1216, rather than by the CPUs 1210 and 1212. By performing most of the runtime computation on the GPUs 1214 and 1216, the CPUs 1210 and 1212 are left free to receive and decompress multiple video and audio streams 1204. The GPUs 1214 and 1216 perform color processing and conversion on the streams 1204, if necessary, such as converting from the YUV-4:2:0 format generated by an Mpeg2 stream into RGB format for rendering. During geometric and photometric calibration, geometric meshes 126 and photometric correction information 128 are calculated as described above in Sections II and III, and the geometric meshes 126 and photometric correction information 128 are downloaded to the GPUs 1214 and 1216. At runtime, the geometric meshes 126 and photometric correction information 128 do not need to be recalculated and can stay resident on the GPUs 1214 and 1216 for the multiple stream rendering.
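The color conversion mentioned above is a standard per-pixel transform. The following Python sketch uses BT.601 video-range coefficients as an assumption, since the text does not specify which YUV-to-RGB matrix the GPUs apply; the plane layout, even frame dimensions, and nearest-neighbor chroma upsampling are likewise illustrative.

import numpy as np

def yuv420_to_rgb(y, u, v):
    # y: (H, W) luma plane; u, v: (H/2, W/2) chroma planes (even H and W assumed).
    # Nearest-neighbor chroma upsampling, then BT.601 video-range conversion.
    u_full = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    v_full = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    y_full = (y.astype(np.float32) - 16.0) * 1.164
    r = y_full + 1.596 * v_full
    g = y_full - 0.392 * u_full - 0.813 * v_full
    b = y_full + 2.017 * u_full
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)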
Before the streams 1204 are geometrically mapped by GPUs 1214 and 1216, the geometric characteristics (including location) of the streams 1204 can be transformed via a matrix multiply allowing any desired translation, rotation, or scaling to be applied to the streams 1204. The photometric correction information 128 is then combined with the streams 1204 by GPUs 1214 and 1216 to apply photometric correction and blending in overlap regions. In one embodiment, photometric correction is applied via fragment shader programs running on the GPUs 1214 and 1216. For every pixel that is to be displayed, the fragment program calculates the desired RGB color. The GPUs 1214 and 1216 then use a gamma function to map the pixel into the physical brightness space where the actual projected values combine. Photometric correction is done in this projected light space before an inverse gamma function brings the color values back to linear RGB.
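The per-pixel work of the fragment program described above can be illustrated with the following Python sketch. The gamma value of 2.2 and the exact placement of the attenuation and offset terms are assumptions consistent with the derivation of attenuation maps 708 and offset maps 704, not a transcription of the actual shader.

import numpy as np

def photometric_correct(rgb, attenuation, offset, gamma=2.2):
    # Map linear RGB into projected-light space with a gamma function,
    # where light from overlapping projectors physically adds.
    light = np.power(np.clip(rgb, 0.0, 1.0), gamma)
    # Apply the per-pixel attenuation (alpha * G) and black offset (beta) in light space.
    corrected = light * attenuation + offset
    # Inverse gamma returns the corrected values to linear RGB for output.
    return np.power(np.clip(corrected, 0.0, 1.0), 1.0 / gamma)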
The runtime processing performed by processing system 101 according to one form of the invention consists of acquiring streams 1204 from one or more sources, preparing the streams 1204 for presentation, and applying the geometric meshes 126 and photometric correction information 128 calculated during calibration. In one form of the invention, the real-time processing and rendering is implemented using stream processing software modules 1208 in a multimedia framework 1206, such as the Nizza framework described below.
The Nizza framework is a software middleware architecture, designed for creating real-time rich media applications. Nizza enables complex applications containing multiple audio and video streams to run reliably in real-time and with low latency. In order to simplify the development of applications that fully leverage the power of modern processors, Nizza provides a framework for decomposing an application's processing into task dependencies, and automating the distribution and execution of those tasks on a symmetric multiprocessor (SMP) machine to obtain improved performance. Nizza allows developers to create applications by connecting media processing modules, such as stream processing modules 1208, into a dataflow graph.
Network receiver software modules 1402(1)-1402(6) simultaneously receive six audio and video streams 1204 via network connections.
The compressed video streams generated by network receiver modules 1402(1)-1402(6) are provided to video decompression modules 1406(1)-1406(6), which decompress the streams into YUV-4:2:0 image streams. The YUV-4:2:0 image streams from the video decompression modules 1406(1)-1406(6) are provided to projectors software module 1410. Projectors software module 1410 performs geometric and photometric processing on the six received image streams as described above in Sections II and III, and combines the streams into four processed streams 1222 for projection by four projectors 112.
Software modules 1208 can process streams 1204 from many different sources, including compressed Mpeg2 video streams from prerecorded sources such as DVDs and high-definition video, as well as live video sources compressed by remote Nizza modules or other video codecs. Other video or image sources can also be used to provide streams 1204 to software modules 1208, including Firewire cameras, Jpeg image sequences, BMP image sequences, PPM sequences, as well as other camera interfaces.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.