In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., may be used with reference to the orientation of the Figure(s) being described. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
Sub-frame generation system 20 includes an image frame buffer 104, a sub-frame generator 108, and a network interface 22. Image display system 40 includes a network interface 42, a control unit 111, projection devices 112A-112D (collectively referred to as projection devices 112), image frame buffers 113A-113D (collectively referred to as frame buffers 113), one or more cameras 122, and calibration unit 124.
Image frame buffer 104 receives and buffers image data 102 to create image frames 106. Image data 102 may comprise any suitable still or video image format with any suitable resolution. For example, image data 102 may be in a High Definition (HD) television 1080p (1920×1080 resolution) format, a digital cinema 2K (2048×1080 resolution) format, or a digital cinema 4K (4096×2160 resolution) format. Image frame buffer 104 includes memory for storing image data 102 for one or more image frames 106. Thus, image frame buffer 104 constitutes a database of one or more image frames 106. Examples of image frame buffer 104 include non-volatile memory (e.g., a hard disk drive or other persistent storage device) and volatile memory (e.g., random access memory (RAM)).
Sub-frame generator 108 receives and processes image frames 106 to define corresponding image sub-frames 110A-110D (collectively referred to as sub-frames 110) using calibration information provided by image display system 40 and received using network interface 22. As described in additional detail below, the calibration information specifies a configuration of projection devices 112, display surface 116, and one or more cameras 122 in image display system 40.
In the exemplary embodiment of
Sub-frame generator 108 generates sub-frames 110 based on image data in image frames 106. In one embodiment, sub-frame generator 108 generates image sub-frames 110 with a resolution that matches the resolution of projection devices 112, which is less than the resolution of image frames 106 in one embodiment (e.g., XGA format with a resolution of 1024×768). Sub-frames 110 each include a plurality of columns and a plurality of rows of individual pixels representing a subset of an image frame 106.
Sub-frame generator 108 determines appropriate values for sub-frames 110 so that the displayed image 114 produced by the projected sub-frames 110 is close in appearance to how the high-resolution image (e.g., image frame 106) from which sub-frames 110 were derived would appear if displayed directly. Sub-frame generator 108 may determine appropriate values for sub-frames 110 using the embodiments described with reference to
By generating sub-frames 110 using the calibration information from image display system 40, sub-frame generator 108 generates sub-frames 110 such that sub-frames 110 would not display properly on another display system. Accordingly, the display of sub-frames 110 by another display system would likely result in a significant reduction of image quality.
In one embodiment, sub-frame generator 108 generates sub-frames 110 with distortion such that distortion is not visible (i.e., the distortion cancels) when all sub-frames 110 are displayed simultaneously in at least partially overlapping positions using projection devices 112. Sub-frame generator 108 generates sub-frames 110 with distortion such that the distortion is visible when fewer than all sub-frames 110 are displayed simultaneously in at least partially overlapping positions using projection devices 112. Sub-frame generator 108 may generate the distortion by including random or non-random noise (e.g., a pattern such as a moiré pattern) or by including only a subset of image data 102 in each sub-frame 110 (e.g., a grayscale range or a single color). The distortion of sub-frames 110 may form defined patterns, such as moire patterns, such that the patterns are visible when, for example, a single sub-frame 110 with distortion is displayed separately from the remaining sub-frames 110.
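As an illustrative sketch only, the following Python fragment shows one way zero-sum noise could be distributed across a set of sub-frames so that it cancels when the sub-frames are superimposed but remains visible in any individual sub-frame; the array shapes, the noise model, and the function name are assumptions for illustration and do not represent the specific distortion scheme of any embodiment.

```python
import numpy as np

def add_canceling_noise(sub_frames, amplitude=8.0, seed=0):
    """Add zero-sum noise: the noise cancels when all sub-frames are
    superimposed, but is visible when any sub-frame is shown alone."""
    rng = np.random.default_rng(seed)
    noise = [rng.normal(0.0, amplitude, sub_frames[0].shape)
             for _ in sub_frames[:-1]]
    noise.append(-np.sum(noise, axis=0))  # last pattern makes the total exactly zero
    # Clipping to a display range is omitted so the cancellation stays exact.
    return [frame + n for frame, n in zip(sub_frames, noise)]

frames = [np.full((768, 1024), 128.0) for _ in range(4)]  # four flat gray sub-frames
distorted = add_canceling_noise(frames)
superimposed = np.sum(distorted, axis=0)  # noise sums to zero; only image content remains
```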
Additional information regarding the use of distortion in sub-frames 110 may be found in co-pending U.S. patent application Ser. No. 11/298,233, filed Dec. 9, 2005, and entitled PROJECTION OF OVERLAPPING SUB-FRAMES ONTO A SURFACE; and U.S. patent application Ser. No. 11/298,190, filed Dec. 9, 2005, and entitled GENERATION OF IMAGE DATA SUBSETS. These applications are incorporated by reference herein.
In one embodiment, sub-frame generator 108 encrypts sub-frames 110 using any suitable encryption technique prior to sub-frames 110 being transmitted to image display system 40. Sub-frame generator 108 may use an encryption key to perform the encryption such that image display system 40 also includes an encryption key that may be used to decrypt sub-frames 110.
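A minimal sketch of such a shared-key arrangement, assuming the third-party `cryptography` package (Fernet symmetric encryption) and a simple byte serialization of sub-frame pixel data; both choices are assumptions for illustration rather than requirements of the embodiment.

```python
import numpy as np
from cryptography.fernet import Fernet

# Shared key: generated once and installed in both sub-frame generation
# system 20 and image display system 40 (key distribution is not shown).
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_sub_frame(sub_frame: np.ndarray) -> bytes:
    """Serialize and encrypt one sub-frame before transmission."""
    return cipher.encrypt(sub_frame.astype(np.uint8).tobytes())

def decrypt_sub_frame(token: bytes, shape) -> np.ndarray:
    """Decrypt and restore the sub-frame on the display side."""
    return np.frombuffer(cipher.decrypt(token), dtype=np.uint8).reshape(shape)

sub_frame = np.zeros((768, 1024), dtype=np.uint8)
restored = decrypt_sub_frame(encrypt_sub_frame(sub_frame), sub_frame.shape)
assert np.array_equal(sub_frame, restored)
```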
In one embodiment, sub-frame generator 108 receives diagnostic information from image display system 40 using network interface 22. As described in additional detail below, the diagnostic information may include any type of information associated with the operating status or condition of components in image display system 40. Sub-frame generator 108 may store the diagnostic information in logs, generate errors associated with the diagnostic information, or otherwise provide notifications to a user regarding the diagnostic information.
In one embodiment, sub-frame generator 108 compresses sub-frames 110 using redundant information in sub-frames 110. By compressing sub-frames 110, sub-frame generator 108 may reduce the size of the memory used to store sub-frames 110.
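A minimal sketch of exploiting redundancy with a general-purpose lossless codec; the choice of zlib and the byte serialization are illustrative assumptions, not the compression method of the embodiment.

```python
import zlib
import numpy as np

def compress_sub_frame(sub_frame: np.ndarray) -> bytes:
    """Losslessly compress sub-frame pixel data; neighboring pixels in a
    sub-frame are often similar, so deflate coding shrinks the buffer."""
    return zlib.compress(sub_frame.astype(np.uint8).tobytes(), level=6)

def decompress_sub_frame(blob: bytes, shape) -> np.ndarray:
    return np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)

flat = np.full((768, 1024), 40, dtype=np.uint8)  # highly redundant content
blob = compress_sub_frame(flat)
print(len(blob), "compressed bytes vs", flat.nbytes, "raw bytes")
```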
It will be understood by a person of ordinary skill in the art that functions performed by sub-frame generator 108 may be implemented in hardware, software, firmware, or any combination thereof. The implementation may be via a microprocessor, programmable logic device, or state machine. Components of the present invention may reside in software on one or more computer-readable mediums. The term computer-readable medium as used herein is defined to include any kind of memory, volatile or non-volatile, such as floppy disks, hard disks, CD-ROMs, flash memory, read-only memory, and random access memory.
Sub-frame generator 108 provides sub-frames 110 to network interface 22. Network interface 22 configures or translates sub-frames 110 in accordance with any suitable network protocol or combination of protocols and transmits sub-frames 110 to image display system 40 using one or more network connections 24 to network 30. Network connection 24 may be any suitable set of wired or wireless network connections to network 30.
Network 30 includes any number of wired or wireless network devices (not shown) configured to receive sub-frames 110 from sub-frame generation system 20 using network connections 24 and provide sub-frames 110 to image display system 40 using one or more network connections 44. Network 30 may transmit sub-frames 110 using any suitable network protocol or combination of protocols.
Image display system 40 receives sub-frames 110A-110D using network interface 42. Control unit 111 de-multiplexes sub-frames 110A-110D and stores sub-frames 110A-110D in image frame buffers 113A-113D, respectively, of projection devices 112A-112D, respectively. Control unit 111 decrypts or decompresses sub-frames 110A-110D, as appropriate, prior to storing sub-frames 110A-110D in image frame buffers 113A-113D.
Image frame buffers 113 include memory for storing any number of sub-frames 110. Examples of image frame buffers 113 include non-volatile memory (e.g., a hard disk drive or other persistent storage device) and volatile memory (e.g., random access memory (RAM)).
Projection devices 112A-112D access sub-frames 110A-110D from image frame buffers 113A-113D, respectively, and project sub-frames 110A-110D, respectively, onto display surface 116 to produce displayed image 114 for viewing by a user. In one embodiment, projection devices 112 simultaneously or substantially simultaneously project sub-frames 110 onto display surface 116 at overlapping and spatially offset positions to produce displayed image 114. Accordingly, image display system 40 is configured to give the appearance to the human eye of high-resolution displayed images 114 by displaying overlapping and spatially shifted lower-resolution sub-frames 110 from multiple projection devices 112. In one form of the invention, the projection of overlapping and spatially shifted sub-frames 110 gives the appearance of enhanced resolution (i.e., higher resolution than sub-frames 110 themselves).
Also shown in
Calibration unit 124 generates calibration information associated with the display of image 114 on surface 116 by projection devices 112 using calibration images (not shown) captured by one or more cameras 122. The calibration information specifies a configuration of projection devices 112, display surface 116, and one or more cameras 122 to allow sub-frames 110 to be generated by sub-frame generator 108 for the specific configuration of image display system 40. According to one embodiment, the calibration information includes a geometric mapping between each projection device 112 and the reference projector 118 as described in additional detail below with reference to the embodiments of
Calibration unit 124 provides the calibration information to network interface 42. Network interface 42 configures or translates the calibration information in accordance with any suitable network protocol or combination of protocols and transmits the calibration information to sub-frame generation system 20 using one or more network connections 44. Network connection 44 may be any suitable set of wired or wireless network connections to network 30.
In another embodiment (not shown), calibration unit 124 is included in sub-frame generation system 20. In this embodiment, image display system 40 transmits calibration images captured by one or more cameras 122 as the calibration information across network 30 using network interface 42. Sub-frame generation system 20 receives the calibration images from network 30 using network interface 22. Calibration unit 124 processes the calibration images to produce calibration information and provides the calibration information to sub-frame generator 108 using any suitable connection within sub-frame generation system 20.
In one embodiment, control unit 111 is configured to generate diagnostic information associated with the operating status or condition of control unit 111, projection devices 112, one or more cameras 122, calibration unit 124, and network interface 42. For example, the diagnostic information may indicate that a projector bulb of a projection device 112 has failed. Control unit 111 provides the diagnostic information to network interface 42. Network interface 42 transmits the diagnostic information to sub-frame generation system 20 using network connection 44.
Image display system 40 (e.g., control unit 111 and calibration unit 124) includes any suitable configuration that includes hardware, software, firmware, or a combination of these.
In one embodiment, sub-frame generation system 20 may be configured as a server computer system and image display system 40 may be configured as a client computer system. In this embodiment, image display system 40 is located remotely from sub-frame generation system 20. Accordingly, network 30 may include any suitable wide area network (e.g., the Internet), at least a portion of a switched telephone network, or any other suitable computer network.
A sub-frame data center 150 includes a plurality of sub-frame generation systems 20(1) through 20(N) (collectively referred to as sub-frame generation systems 20). Sub-frame generation systems 20 generate sets of sub-frames 110 for respective image display systems 40(1) through 40(N) (collectively referred to as image display systems 40). Image display systems 40 display the respective sets of sub-frames 110 to form respective displayed images 114(1) through 114(N) (collectively referred to as displayed images 114) on respective display surfaces 116(1) through 116(N) (collectively referred to as display surfaces 116).
Image display systems 40 transmit calibration information to respective sub-frame generation systems 20 as described above with reference to
Sub-frame generation systems 20 transmit the sets of sub-frames 110 to respective image display systems 40 across network 30 using respective network connections 24(1) through 24(N) (collectively referred to as network connections 24). Image display systems 40 receive the sets of sub-frames 110 using respective network connections 44(1) through 44(N) (collectively referred to as network connections 44).
Image display systems 40 transmit the calibration information to respective sub-frame generation systems 20 across network 30 using network connections 44. Sub-frame generation systems 20 receive the calibration information using respective network connections 24.
Sub-frame data center 150 forms a single central location for generating and transmitting sets of sub-frames 110. Image display systems 40 may each be remotely located from sub-frame data center 150 in one or more locations.
In
The method of
In
Sub-frame generator 108 transmits the set of sub-frames 110 to image display system 40 across network 30 using network interface 22 as indicated in a block 308. Prior to transmitting the set of sub-frames 110, sub-frame generator 108 may distort, encrypt, or compress sub-frames 110 as described in additional detail above.
A determination is made as to whether there is another image frame 106 as indicated in a block 310. If there is not another image frame 106, then the method ends. If there is another image frame 106, then the method repeats the functions of blocks 304 through 310 for the next image frame 106.
In one embodiment, sub-frame generator 108 uses the calibration information received in performing the function of block 302 to perform the function of block 306 for each image frame 106. In other embodiments, sub-frame generator 108 repeats the function of block 302 for each image frame 106 such that sub-frame generator 108 continuously receives calibration information from image display system 40.
In
Image display system 40 receives a set of sub-frames 110 across network 30 using network interface 42 as indicated in a block 404. Control unit 111 receives sub-frames 110 from network interface 42 and stores sub-frames 110 in respective frame buffers 113 of projection devices 112. Control unit 111 may decrypt or decompress sub-frames 110 as appropriate prior to storing sub-frames 110 in frame buffers 113. Image display system 40 displays the set of sub-frames 110 using the set of projection devices 112 as indicated in a block 406. More particularly, projection devices 112 each simultaneously display a respective sub-frame 110 in at least partially overlapping positions.
Image display system 40 optionally transmits diagnostic information associated with the set of projection devices 112 to sub-frame generation system 20 across network 30 using network interface 42 as indicated in a block 410. Control unit 111 generates the diagnostic information continuously or periodically and provides the diagnostic information to network interface 42 for transmission to sub-frame generation system 20. Control unit 111 may transmit the diagnostic information in response to receiving a command from sub-frame generation system 20.
A determination is made as to whether another set of sub-frames is to be displayed as indicated in a block 412. The determination may be made according to a mode of operation of image display system 40 (e.g., a video mode or a still image mode) or may be made in response to detecting additional sets of sub-frames that are transmitted by sub-frame generation system 20. If another set of sub-frames is not to be displayed, then the method ends. If another set of sub-frames is to be displayed, then the functions of blocks 406 and 408 are repeated.
In the embodiment of
Sub-frame 110A is spatially offset from sub-frame 110B by a predetermined distance. Similarly, sub-frame 110C is spatially offset from sub-frame 110D by a predetermined distance. In one illustrative embodiment, vertical distance 204 and horizontal distance 206 are each approximately one-half of one pixel.
The display of sub-frames 110B, 110C, and 110D is spatially shifted relative to the display of sub-frame 110A by vertical distance 204, horizontal distance 206, or a combination of vertical distance 204 and horizontal distance 206. As such, pixels 202 of sub-frames 110A, 110B, 110C, and 110D overlap, thereby producing the appearance of higher resolution pixels. Sub-frames 110A, 110B, 110C, and 110D may be superimposed on one another (i.e., fully or substantially fully overlap), may be tiled (i.e., partially overlap at or near the edges), or may be a combination of superimposed and tiled. The overlapped sub-frames 110A, 110B, 110C, and 110D also produce a brighter overall image than any of sub-frames 110A, 110B, 110C, or 110D alone.
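A minimal numerical sketch of this superposition, assuming four sub-frames accumulated on a grid twice as fine, each shifted by zero or one fine-grid sample (one-half of a sub-frame pixel), and assuming a simple pixel-replication model of each projected pixel's footprint; all of these modeling choices are for illustration only.

```python
import numpy as np

def superimpose_quadrature(sub_frames, offsets):
    """Accumulate low-resolution sub-frames on a grid twice as fine, each
    shifted by its (row, col) offset in fine-grid samples (a half-pixel
    of the sub-frame resolution). Summation also brightens the result."""
    h, w = sub_frames[0].shape
    canvas = np.zeros((2 * h + 1, 2 * w + 1))
    for frame, (dy, dx) in zip(sub_frames, offsets):
        up = np.kron(frame, np.ones((2, 2)))  # each pixel covers a 2x2 footprint
        canvas[dy:dy + 2 * h, dx:dx + 2 * w] += up
    return canvas

subs = [np.random.rand(768, 1024) for _ in range(4)]
display = superimpose_quadrature(subs, [(0, 0), (0, 1), (1, 0), (1, 1)])
```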
In other embodiments, sub-frames 110A, 110B, 110C, and 110D may be displayed at other spatial offsets relative to one another, and the spatial offsets may vary spatially, temporally, or in any suitable combination of the two.
In one embodiment, sub-frames 110 have a lower resolution than image frames 106. Thus, sub-frames 110 are also referred to herein as low-resolution images or sub-frames 110, and image frames 106 are also referred to herein as high-resolution images or frames 106. The terms low resolution and high resolution are used herein in a comparative fashion, and are not limited to any particular minimum or maximum number of pixels.
Sub-frame generator 108 may determine appropriate values for sub-frames 110 using the embodiments described with reference to
In one embodiment, display system 40 produces a superimposed projected output that takes advantage of natural pixel mis-registration to provide a displayed image with a higher resolution than the individual sub-frames 110. In one embodiment, image formation due to multiple overlapped projection devices 112 is modeled using a signal processing model. Optimal sub-frames 110 for each of the component projection devices 112 are estimated by sub-frame generator 108 based on the model, such that the resulting image predicted by the signal processing model is as close as possible to the desired high-resolution image to be projected. In one embodiment, the signal processing model is used to derive values for sub-frames 110 that minimize visual color artifacts that can occur due to offset projection of single-color sub-frames 110.
In one embodiment, sub-frame generator 108 is configured to generate sub-frames 110 based on maximizing the probability that, given a desired high-resolution image, a simulated high-resolution image that is a function of the sub-frame values is the same as the given, desired high-resolution image. If the generated sub-frames 110 are optimal, the simulated high-resolution image will be as close as possible to the desired high-resolution image. The generation of optimal sub-frames 110 based on a simulated high-resolution image and a desired high-resolution image is described in further detail below with reference to the embodiment of
A. Multiple Color Sub-Frames
$Z_k = H_k D^T Y_k$   Equation I
where:
$k$ = index for identifying individual projection devices 112;
$Z_k$ = up-sampled sub-frame data for the kth projection device 112, represented on the high-resolution grid;
$H_k$ = interpolating filter for the kth projection device 112;
$D^T$ = up-sampling matrix; and
$Y_k$ = low-resolution sub-frame pixel data 110 for the kth projection device 112.
The low-resolution sub-frame pixel data (Yk) is expanded with the up-sampling matrix (DT) so that sub-frames 110 (Yk) can be represented on a high-resolution grid. The interpolating filter (Hk) fills in the missing pixel data produced by up-sampling. In the embodiment shown in
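A minimal sketch of Equation I in code, assuming an up-sampling factor of two and a separable bilinear kernel as the interpolating filter; both choices are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def upsample_and_interpolate(y_k, factor=2):
    """Z_k = H_k D^T Y_k: zero-insert the low-resolution sub-frame onto the
    high-resolution grid (D^T), then fill in the missing samples with a
    separable interpolating filter (H_k, here a bilinear kernel)."""
    h, w = y_k.shape
    z = np.zeros((factor * h, factor * w))
    z[::factor, ::factor] = y_k                 # D^T: up-sampling by zero insertion
    kernel_1d = np.array([0.5, 1.0, 0.5])
    h_k = np.outer(kernel_1d, kernel_1d)        # bilinear interpolating kernel
    return convolve(z, h_k, mode="nearest")     # H_k: fill in the missing pixels

z_k = upsample_and_interpolate(np.random.rand(384, 512))
```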
In one embodiment, the geometric mapping (Fk) is a floating-point mapping, but the destinations in the mapping are on an integer grid in image 304. Thus, it is possible for multiple pixels in image 302 to be mapped to the same pixel location in image 304, resulting in missing pixels in image 304. To avoid this situation, in one embodiment, during the forward mapping (Fk), the inverse mapping (Fk−1) is also utilized as indicated at 305 in
In another embodiment, the forward geometric mapping or warp (Fk) is implemented directly, and the inverse mapping (Fk−1) is not used. In one form of this embodiment, a scatter operation is performed to eliminate missing pixels. That is, when a pixel in image 302 is mapped to a floating point location in image 304, some of the image data for the pixel is essentially scattered to multiple pixels neighboring the floating point location in image 304. Thus, each pixel in image 304 may receive contributions from multiple pixels in image 302, and each pixel in image 304 is normalized based on the number of contributions it receives.
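A minimal sketch of such a scatter operation, assuming the forward mapping is supplied as per-pixel floating-point target coordinates (the `map_rows`/`map_cols` inputs are hypothetical); the bilinear weighting and the weight normalization follow the description above.

```python
import numpy as np

def forward_warp_scatter(src, map_rows, map_cols, out_shape):
    """Forward-warp src: each source pixel lands at a floating-point target
    location; its value is scattered with bilinear weights to the four
    neighboring integer pixels, and the result is normalized by the
    accumulated weights so contributions are averaged per target pixel."""
    acc = np.zeros(out_shape)
    wgt = np.zeros(out_shape)
    for (r, c), value in np.ndenumerate(src):
        tr, tc = map_rows[r, c], map_cols[r, c]
        r0, c0 = int(np.floor(tr)), int(np.floor(tc))
        fr, fc = tr - r0, tc - c0
        for dr, dc, w in ((0, 0, (1 - fr) * (1 - fc)), (0, 1, (1 - fr) * fc),
                          (1, 0, fr * (1 - fc)), (1, 1, fr * fc)):
            rr, cc = r0 + dr, c0 + dc
            if 0 <= rr < out_shape[0] and 0 <= cc < out_shape[1]:
                acc[rr, cc] += w * value
                wgt[rr, cc] += w
    return np.divide(acc, wgt, out=np.zeros_like(acc), where=wgt > 0)

src = np.random.rand(8, 8)
rows, cols = np.meshgrid(np.arange(8) * 1.5, np.arange(8) * 1.5, indexing="ij")
warped = forward_warp_scatter(src, rows, cols, (12, 12))
```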
A superposition/summation of such warped images 304 from all of the component projection devices 112 forms a hypothetical or simulated high-resolution image 306 ($\hat{X}$, also referred to as X-hat herein) in reference projector frame buffer 120, as represented in the following Equation II:

$\hat{X} = \sum_{k} F_k Z_k$   Equation II
where:
If the simulated high-resolution image 306 (X-hat) in reference projector frame buffer 120 is identical to a given (desired) high-resolution image 308 (X), the system of component low-resolution projection devices 112 would be equivalent to a hypothetical high-resolution projector placed at the same location as hypothetical reference projector 118 and sharing its optical path. In one embodiment, the desired high-resolution images 308 are the high-resolution image frames 106 received by sub-frame generator 108.
In one embodiment, the deviation of the simulated high-resolution image 306 (X-hat) from the desired high-resolution image 308 (X) is modeled as shown in the following Equation III:
$X = \hat{X} + \eta$   Equation III
where:
$X$ = desired high-resolution image 308;
$\hat{X}$ = simulated high-resolution image 306 in reference projector frame buffer 120; and
$\eta$ = error or noise term.
As shown in Equation III, the desired high-resolution image 308 (X) is defined as the simulated high-resolution image 306 (X-hat) plus η, which in one embodiment represents zero mean white Gaussian noise.
The solution for the optimal sub-frame data (Yk*) for sub-frames 110 is formulated as the optimization given in the following Equation IV:

$Y_k^* = \underset{Y_k}{\operatorname{argmax}}\; P(\hat{X} \mid X)$   Equation IV
where:
Thus, as indicated by Equation IV, the goal of the optimization is to determine the sub-frame values (Yk) that maximize the probability of X-hat given X. Given a desired high-resolution image 308 (X) to be projected, sub-frame generator 108 determines the component sub-frames 110 that maximize the probability that the simulated high-resolution image 306 (X-hat) is the same as or matches the “true” high-resolution image 308 (X).
Using Bayes rule, the probability P(X-hat|X) in Equation IV can be written as shown in the following Equation V:

$P(\hat{X} \mid X) = \dfrac{P(X \mid \hat{X})\, P(\hat{X})}{P(X)}$   Equation V
where:
The term P(X) in Equation V is a known constant. If X-hat is given, then, referring to Equation III, X depends only on the noise term, η, which is Gaussian. Thus, the term P(X|X-hat) in Equation V will have a Gaussian form as shown in the following Equation VI:

$P(X \mid \hat{X}) = \dfrac{1}{C}\, e^{-\frac{\|X - \hat{X}\|^{2}}{2\sigma^{2}}}$   Equation VI
where:
To provide a solution that is robust to minor calibration errors and noise, a “smoothness” requirement is imposed on X-hat. In other words, it is assumed that good simulated images 306 have certain properties. The smoothness requirement according to one embodiment is expressed in terms of a desired Gaussian prior probability distribution for X-hat given by the following Equation VII:

$P(\hat{X}) = \dfrac{1}{Z(\beta)}\, e^{-\beta^{2}\, \|\nabla \hat{X}\|^{2}}$   Equation VII
where:
In another embodiment, the smoothness requirement is based on a prior Laplacian model, and is expressed in terms of a probability distribution for X-hat given by the following Equation VIII:
where:
The following discussion assumes that the probability distribution given in Equation VII, rather than Equation VIII, is being used. As will be understood by persons of ordinary skill in the art, a similar procedure would be followed if Equation VIII were used. Inserting the probability distributions from Equations VI and VII into Equation V, and inserting the result into Equation IV, results in a maximization problem involving the product of two probability distributions (note that the probability P(X) is a known constant and goes away in the calculation). By taking the negative logarithm, the exponents go away, the product of the two probability distributions becomes a sum of two terms, and the maximization problem given in Equation IV is transformed into a function minimization problem, as shown in the following Equation IX:

$Y_k^* = \underset{Y_k}{\operatorname{argmin}}\; \|X - \hat{X}\|^{2} + \beta^{2}\, \|\nabla \hat{X}\|^{2}$   Equation IX
where:
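As a brief added sketch of the step just described (and assuming the Gaussian forms of Equations VI and VII), taking the negative logarithm of the product $P(X \mid \hat{X})\,P(\hat{X})$ gives, up to constants that do not affect the minimizer,

$$-\ln\!\left[P(X \mid \hat{X})\,P(\hat{X})\right] = \frac{\|X - \hat{X}\|^{2}}{2\sigma^{2}} + \beta^{2}\,\|\nabla \hat{X}\|^{2} + \text{const},$$

so the maximization of Equation IV over the sub-frame values becomes the minimization of $\|X - \hat{X}\|^{2} + \beta^{2}\,\|\nabla \hat{X}\|^{2}$, with the factor $2\sigma^{2}$ absorbed into $\beta$.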
The function minimization problem given in Equation IX is solved by substituting the definition of X-hat from Equation II into Equation IX and taking the derivative with respect to Yk, which results in an iterative algorithm given by the following Equation X:
$Y_k^{(n+1)} = Y_k^{(n)} - \Theta\left\{ D H_k^T F_k^T \left[ \left(\hat{X}^{(n)} - X\right) + \beta^{2}\, \nabla^{2} \hat{X}^{(n)} \right] \right\}$   Equation X
where:
Equation X may be intuitively understood as an iterative process of computing an error in the hypothetical reference projector coordinate system and projecting it back onto the sub-frame data. In one embodiment, sub-frame generator 108 is configured to generate sub-frames 110 in real-time using Equation X. The generated sub-frames 110 are optimal in one embodiment because they maximize the probability that the simulated high-resolution image 306 (X-hat) is the same as the desired high-resolution image 308 (X), and they minimize the error between the simulated high-resolution image 306 and the desired high-resolution image 308. Equation X can be implemented very efficiently with conventional image processing operations (e.g., transformations, down-sampling, and filtering). The iterative algorithm given by Equation X converges rapidly in a few iterations and is very efficient in terms of memory and computation (e.g., a single iteration uses two rows in memory; and multiple iterations may also be rolled into a single step). The iterative algorithm given by Equation X is suitable for real-time implementation, and may be used to generate optimal sub-frames 110 at video rates, for example.
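A compact numerical sketch of the iterative update of Equation X, in which each warp $F_k$ is modeled as a pure sub-pixel shift, $D$ as decimation by a factor of two, and $H_k$ as a bilinear kernel; these operator models, the step size, and the iteration count are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np
from scipy.ndimage import convolve, laplace, shift

def generate_sub_frames(X, shifts, n_iters=4, theta=0.2, beta2=0.1, factor=2):
    """Iterative sub-frame update of Equation X, with each projector warp F_k
    modeled as a pure sub-pixel shift and D as decimation by `factor`."""
    kernel = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])  # interpolating filter H_k
    Y = [np.zeros((X.shape[0] // factor, X.shape[1] // factor)) for _ in shifts]
    for _ in range(n_iters):
        # Simulate X-hat: up-sample, interpolate, and warp each sub-frame, then sum.
        X_hat = np.zeros_like(X)
        for y_k, s in zip(Y, shifts):
            z = np.zeros_like(X)
            z[::factor, ::factor] = y_k
            X_hat += shift(convolve(z, kernel), s, order=1)
        err = (X_hat - X) + beta2 * laplace(X_hat)  # error plus smoothness term
        for k, s in enumerate(shifts):
            back = convolve(shift(err, (-s[0], -s[1]), order=1), kernel)  # F_k^T then H_k^T
            Y[k] = Y[k] - theta * back[::factor, ::factor]  # D, then the Equation X update
    return Y

X = np.random.rand(768, 1024)
subs = generate_sub_frames(X, shifts=[(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)])
```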
To begin the iterative algorithm defined in Equation X, an initial guess, Yk(0), for sub-frames 110 is determined. In one embodiment, the initial guess for sub-frames 110 is determined by texture mapping the desired high-resolution frame 308 onto sub-frames 110. In one embodiment, the initial guess is determined from the following Equation XI:
$Y_k^{(0)} = D B_k F_k^T X$   Equation XI
where:
Thus, as indicated by Equation XI, the initial guess (Yk(0)) is determined by performing a geometric transformation (FkT) on the desired high-resolution frame 308 (X), and filtering (Bk) and down-sampling (D) the result. The particular combination of neighboring pixels from the desired high-resolution frame 308 that are used in generating the initial guess (Yk(0)) will depend on the selected filter kernel for the interpolation filter (Bk).
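A matching sketch of Equation XI under the same illustrative assumptions (a pure-shift model for $F_k$, a box pre-filter for $B_k$, and decimation by two).

```python
import numpy as np
from scipy.ndimage import convolve, shift

def initial_guess(X, shift_k, factor=2):
    """Y_k(0) = D B_k F_k^T X: map the desired high-resolution frame into the
    k-th projector's coordinates (modeled here as a shift), pre-filter with a
    box kernel B_k, and down-sample to the sub-frame resolution (D)."""
    b_k = np.full((factor, factor), 1.0 / factor ** 2)        # box pre-filter B_k
    warped = shift(X, (-shift_k[0], -shift_k[1]), order=1)    # F_k^T (illustrative)
    return convolve(warped, b_k)[::factor, ::factor]          # B_k, then D

Y0 = initial_guess(np.random.rand(768, 1024), (0.5, 0.5))
```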
In another embodiment, the initial guess, Yk(0), for sub-frames 110 is determined from the following Equation XII
$Y_k^{(0)} = D F_k^T X$   Equation XII
where:
Equation XII is the same as Equation XI, except that the interpolation filter (Bk) is not used.
Several techniques are available to determine the geometric mapping (Fk) between each projection device 112 and hypothetical reference projector 118, including manually establishing the mappings, or using camera 122 and calibration unit 124 to automatically determine the mappings. In one embodiment, if camera 122 and calibration unit 124 are used, the geometric mappings between each projection device 112 and camera 122 are determined by calibration unit 124. These projector-to-camera mappings may be denoted by Tk, where k is an index for identifying projection devices 112. Based on the projector-to-camera mappings (Tk), the geometric mappings (Fk) between each projection device 112 and hypothetical reference projector 118 are determined by calibration unit 124, and provided to sub-frame generator 108. For example, in a display system 40 with two projection devices 112A and 112B, assuming the first projection device 112A is hypothetical reference projector 118, the geometric mapping of the second projection device 112B to the first (reference) projection device 112A can be determined as shown in the following Equation XIII:
$F_2 = T_2 T_1^{-1}$   Equation XIII
where:
$F_2$ = geometric mapping of the second projection device 112B to the first (reference) projection device 112A;
$T_1$ = projector-to-camera mapping for the first projection device 112A; and
$T_2$ = projector-to-camera mapping for the second projection device 112B.
In one embodiment, the geometric mappings (Fk) are determined once by calibration unit 124, and provided to sub-frame generator 108. In another embodiment, calibration unit 124 continually determines (e.g., once per frame 106) the geometric mappings (Fk), and continually provides updated values for the mappings to sub-frame generator 108.
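A minimal sketch of Equation XIII with the mappings represented as 3×3 homographies; the matrix values are placeholders, and the convention that $T_k$ maps camera coordinates into projector-$k$ coordinates (so that $F_2$ carries reference-projector coordinates into projector 2) is an assumption adopted only for this example.

```python
import numpy as np

def compose_mapping(T2, T1):
    """F_2 = T_2 T_1^{-1}: under the assumed convention, maps coordinates of the
    reference projection device 112A into coordinates of projection device 112B
    by going through the shared camera frame."""
    return T2 @ np.linalg.inv(T1)

def apply_homography(F, points):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts_h @ F.T
    return mapped[:, :2] / mapped[:, 2:3]

# Placeholder mappings (values are illustrative only).
T1 = np.array([[1.01, 0.00, 3.0], [0.00, 0.99, -2.0], [0.0, 0.0, 1.0]])
T2 = np.array([[0.98, 0.02, -5.0], [0.01, 1.02, 4.0], [0.0, 0.0, 1.0]])
F2 = compose_mapping(T2, T1)
corners = np.array([[0, 0], [1023, 0], [0, 767], [1023, 767]], dtype=float)
print(apply_homography(F2, corners))
```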
B. Single Color Sub-Frames
In another embodiment illustrated by the embodiment of
$Z_{ik} = H_i D_i^T Y_{ik}$   Equation XIV
where:
$k$ = index for identifying individual projection devices 112;
$i$ = index for identifying color planes;
$Z_{ik}$ = up-sampled sub-frame data for the ith color plane of the kth projection device 112, represented on the high-resolution grid;
$H_i$ = interpolating filter for the ith color plane;
$D_i^T$ = up-sampling matrix for the ith color plane; and
$Y_{ik}$ = low-resolution sub-frame pixel data 110 for the ith color plane of the kth projection device 112.
The low-resolution sub-frame pixel data (Yik) is expanded with the up-sampling matrix (DiT) so that sub-frames 110 (Yik) can be represented on a high-resolution grid. The interpolating filter (Hi) fills in the missing pixel data produced by up-sampling. In the embodiment shown in
In one embodiment, the geometric mapping (Fik) is a floating-point mapping, but the destinations in the mapping are on an integer grid in image 404. Thus, it is possible for multiple pixels in image 402 to be mapped to the same pixel location in image 404, resulting in missing pixels in image 404. To avoid this situation, in one embodiment, during the forward mapping (Fik), the inverse mapping (Fik−1) is also utilized as indicated at 405 in
In another embodiment, the forward geometric mapping or warp (Fk) is implemented directly, and the inverse mapping (Fk−1) is not used. In one form of this embodiment, a scatter operation is performed to eliminate missing pixels. That is, when a pixel in image 402 is mapped to a floating point location in image 404, some of the image data for the pixel is essentially scattered to multiple pixels neighboring the floating point location in image 404. Thus, each pixel in image 404 may receive contributions from multiple pixels in image 402, and each pixel in image 404 is normalized based on the number of contributions it receives.
A superposition/summation of such warped images 404 from all of the component projection devices 112 in a given color plane forms a hypothetical or simulated high-resolution image (X-hati) for that color plane in reference projector frame buffer 120, as represented in the following Equation XV:

$\hat{X}_i = \sum_{k} F_{ik} Z_{ik}$   Equation XV
where:
A hypothetical or simulated image 406 (X-hat) is represented by the following Equation XVI:
$\hat{X} = \left[\hat{X}_1\ \hat{X}_2\ \cdots\ \hat{X}_N\right]^T$   Equation XVI
where:
$\hat{X}$ = hypothetical or simulated high-resolution image 406 in reference projector frame buffer 120;
$\hat{X}_i$ = hypothetical or simulated high-resolution image for the ith color plane in reference projector frame buffer 120; and
$N$ = number of color planes.
If the simulated high-resolution image 406 (X-hat) in reference projector frame buffer 120 is identical to a given (desired) high-resolution image 408 (X), the system of component low-resolution projection devices 112 would be equivalent to a hypothetical high-resolution projector placed at the same location as hypothetical reference projector 118 and sharing its optical path. In one embodiment, the desired high-resolution images 408 are the high-resolution image frames 106 received by sub-frame generator 108.
In one embodiment, the deviation of the simulated high-resolution image 406 (X-hat) from the desired high-resolution image 408 (X) is modeled as shown in the following Equation XVII:
$X = \hat{X} + \eta$   Equation XVII
where:
$X$ = desired high-resolution image 408;
$\hat{X}$ = simulated high-resolution image 406 in reference projector frame buffer 120; and
$\eta$ = error or noise term.
As shown in Equation XVII, the desired high-resolution image 408 (X) is defined as the simulated high-resolution image 406 (X-hat) plus η, which in one embodiment represents zero mean white Gaussian noise.
The solution for the optimal sub-frame data (Yik*) for sub-frames 110 is formulated as the optimization given in the following Equation XVIII:

$Y_{ik}^* = \underset{Y_{ik}}{\operatorname{argmax}}\; P(\hat{X} \mid X)$   Equation XVIII
where:
Thus, as indicated by Equation XVIII, the goal of the optimization is to determine the sub-frame values (Yik) that maximize the probability of X-hat given X. Given a desired high-resolution image 408 (X) to be projected, sub-frame generator 108 determines the component sub-frames 110 that maximize the probability that the simulated high-resolution image 406 (X-hat) is the same as or matches the “true” high-resolution image 408 (X).
Using Bayes rule, the probability P(X-hat|X) in Equation XVIII can be written as shown in the following Equation XIX:

$P(\hat{X} \mid X) = \dfrac{P(X \mid \hat{X})\, P(\hat{X})}{P(X)}$   Equation XIX
where:
The term P(X) in Equation XIX is a known constant. If X-hat is given, then, referring to Equation XVII, X depends only on the noise term, η, which is Gaussian. Thus, the term P(X|X-hat) in Equation XIX will have a Gaussian form as shown in the following Equation XX:

$P(X \mid \hat{X}) = \dfrac{1}{C}\, e^{-\frac{\|X - \hat{X}\|^{2}}{2\sigma^{2}}}$   Equation XX
where:
To provide a solution that is robust to minor calibration errors and noise, a “smoothness” requirement is imposed on X-hat. In other words, it is assumed that good simulated images 406 have certain properties. For example, for most good color images, the luminance and chrominance derivatives are related by a certain value. In one embodiment, a smoothness requirement is imposed on the luminance and chrominance of the X-hat image based on a “Hel-Or” color prior model, which is a conventional color model known to those of ordinary skill in the art. The smoothness requirement according to one embodiment is expressed in terms of a desired probability distribution for X-hat given by the following Equation XXI:
where:
In another embodiment, the smoothness requirement is based on a prior Laplacian model, and is expressed in terms of a probability distribution for X-hat given by the following Equation XXII:
where:
The following discussion assumes that the probability distribution given in Equation XXI, rather than Equation XXII, is being used. As will be understood by persons of ordinary skill in the art, a similar procedure would be followed if Equation XXII were used. Inserting the probability distributions from Equations XX and XXI into Equation XIX, and inserting the result into Equation XVIII, results in a maximization problem involving the product of two probability distributions (note that the probability P(X) is a known constant and goes away in the calculation). By taking the negative logarithm, the exponents go away, the product of the two probability distributions becomes a sum of two terms, and the maximization problem given in Equation XVIII is transformed into a function minimization problem, as shown in the following Equation XXIII:
where:
The function minimization problem given in Equation XXIII is solved by substituting the definition of X-hati from Equation XV into Equation XXIII and taking the derivative with respect to Yik, which results in an iterative algorithm given by the following Equation XXIV:
where:
Equation XXIV may be intuitively understood as an iterative process of computing an error in the hypothetical reference projector coordinate system and projecting it back onto the sub-frame data. In one embodiment, sub-frame generator 108 is configured to generate sub-frames 110 in real-time using Equation XXIV. The generated sub-frames 110 are optimal in one embodiment because they maximize the probability that the simulated high-resolution image 406 (X-hat) is the same as the desired high-resolution image 408 (X), and they minimize the error between the simulated high-resolution image 406 and the desired high-resolution image 408. Equation XXIV can be implemented very efficiently with conventional image processing operations (e.g., transformations, down-sampling, and filtering). The iterative algorithm given by Equation XXIV converges rapidly in a few iterations and is very efficient in terms of memory and computation (e.g., a single iteration uses two rows in memory; and multiple iterations may also be rolled into a single step). The iterative algorithm given by Equation XXIV is suitable for real-time implementation, and may be used to generate optimal sub-frames 110 at video rates, for example.
To begin the iterative algorithm defined in Equation XXIV, an initial guess, Yik(0), for sub-frames 110 is determined. In one embodiment, the initial guess for sub-frames 110 is determined by texture mapping the desired high-resolution frame 408 onto sub-frames 110. In one embodiment, the initial guess is determined from the following Equation XXV:
$Y_{ik}^{(0)} = D_i B_i F_{ik}^T X_i$   Equation XXV
where:
Thus, as indicated by Equation XXV, the initial guess (Yik(0)) is determined by performing a geometric transformation (FikT) on the ith color plane of the desired high-resolution frame 408 (Xi), and filtering (Bi) and down-sampling (Di) the result. The particular combination of neighboring pixels from the desired high-resolution frame 408 that are used in generating the initial guess (Yik(0)) will depend on the selected filter kernel for the interpolation filter (Bi).
In another embodiment, the initial guess, Yik(0), for sub-frames 110 is determined from the following Equation XXVI:
$Y_{ik}^{(0)} = D_i F_{ik}^T X_i$   Equation XXVI
where:
Equation XXVI is the same as Equation XXV, except that the interpolation filter (Bi) is not used.
Several techniques are available to determine the geometric mapping (Fik) between each projection device 112 and hypothetical reference projector 118, including manually establishing the mappings, or using camera 122 and calibration unit 124 to automatically determine the mappings. In one embodiment, if camera 122 and calibration unit 124 are used, the geometric mappings between each projection device 112 and camera 122 are determined by calibration unit 124. These projector-to-camera mappings may be denoted by Tk, where k is an index for identifying projection devices 112. Based on the projector-to-camera mappings (Tk), the geometric mappings (Fk) between each projection device 112 and hypothetical reference projector 118 are determined by calibration unit 124, and provided to sub-frame generator 108. For example, in a display system 40 with two projection devices 112A and 112B, assuming the first projection device 112A is hypothetical reference projector 118, the geometric mapping of the second projection device 112B to the first (reference) projection device 112A can be determined as shown in the following Equation XXVII:
$F_2 = T_2 T_1^{-1}$   Equation XXVII
where:
$F_2$ = geometric mapping of the second projection device 112B to the first (reference) projection device 112A;
$T_1$ = projector-to-camera mapping for the first projection device 112A; and
$T_2$ = projector-to-camera mapping for the second projection device 112B.
In one embodiment, the geometric mappings (Fik) are determined once by calibration unit 124, and provided to sub-frame generator 108. In another embodiment, calibration unit 124 continually determines (e.g., once per frame 106) the geometric mappings (Fik), and continually provides updated values for the mappings to sub-frame generator 108.
One embodiment provides an image display system 40 with multiple overlapped low-resolution projection devices 112 coupled with an efficient real-time (e.g., video rates) image processing algorithm for generating sub-frames 110. In one embodiment, multiple low-resolution, low-cost projection devices 112 are used to produce high resolution images at high lumen levels, but at lower cost than existing high-resolution projection systems, such as a single, high-resolution, high-output projector. One embodiment provides a scalable image display system 40 that can provide virtually any desired resolution, brightness, and color, by adding any desired number of component projection devices 112 to the system 40.
In some existing display systems, multiple low-resolution images are displayed with temporal and sub-pixel spatial offsets to enhance resolution. There are some important differences between these existing systems and embodiments described herein. For example, in one embodiment, there is no need for circuitry to offset the projected sub-frames 110 temporally. In one embodiment, sub-frames 110 from the component projection devices 112 are projected “in-sync”. As another example, unlike some existing systems where all of the sub-frames go through the same optics and the shifts between sub-frames are all simple translational shifts, in one embodiment, sub-frames 110 are projected through the different optics of the multiple individual projection devices 112. In one embodiment, the signal processing model that is used to generate optimal sub-frames 110 takes into account relative geometric distortion among the component sub-frames 110, and is robust to minor calibration errors and noise.
It can be difficult to accurately align projectors into a desired configuration. In one embodiment, regardless of what the particular projector configuration is, even if it is not an optimal alignment, sub-frame generator 108 determines and generates optimal sub-frames 110 for that particular configuration.
Algorithms that seek to enhance resolution by offsetting multiple projection elements have been previously proposed. These methods may assume simple shift offsets between projectors, use frequency domain analyses, and rely on heuristic methods to compute component sub-frames. In contrast, one form of the embodiments described herein utilizes an optimal real-time sub-frame generation algorithm that explicitly accounts for arbitrary relative geometric distortion (not limited to homographies) between the component projection devices 112, including distortions that occur due to a display surface that is non-planar or has surface non-uniformities. One embodiment generates sub-frames 110 based on a geometric relationship between a hypothetical high-resolution reference projector at any arbitrary location and each of the actual low-resolution projection devices 112, which may also be positioned at any arbitrary location.
In one embodiment, system 40 includes multiple overlapped low-resolution projection devices 112, with each projection device 112 projecting a different colorant to compose a full color high-resolution image on the display surface with minimal color artifacts due to the overlapped projection. By imposing a color-prior model via a Bayesian approach as is done in one embodiment, the generated solution for determining sub-frame values minimizes color aliasing artifacts and is robust to small modeling errors.
Using multiple off-the-shelf projection devices 112 in system 40 allows for high resolution. However, if the projection devices 112 include a color wheel, which is common in existing projectors, the system 40 may suffer from light loss, sequential color artifacts, poor color fidelity, reduced bit-depth, and a significant tradeoff in bit depth to add new colors. One embodiment described herein eliminates the need for a color wheel and instead uses a different color filter for each projection device 112. Thus, in one embodiment, projection devices 112 each project different single-color images. By not using a color wheel, segment loss at the color wheel is eliminated, which could be up to a 30% loss in efficiency in single chip projectors. One embodiment increases perceived resolution, eliminates sequential color artifacts, improves color fidelity since no spatial or temporal dither is required, provides a high bit-depth per color, and allows for high-fidelity color.
Image display system 40 is also very efficient from a processing perspective since, in one embodiment, each projection device 112 only processes one color plane. Thus, each projection device 112 reads and renders only one-third (for RGB) of the full color data.
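As a trivial sketch of that division of work (the in-memory array layout is an assumption), each single-color projection device would be handed only its own plane of the full-color frame.

```python
import numpy as np

def split_color_planes(frame_rgb):
    """Assign each color plane of a full-color frame to one single-color
    projection device; each device then reads and renders only one-third
    of the full color data (for RGB)."""
    return {"red": frame_rgb[..., 0],
            "green": frame_rgb[..., 1],
            "blue": frame_rgb[..., 2]}

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
planes = split_color_planes(frame)
print({name: plane.nbytes for name, plane in planes.items()})
```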
In one embodiment, image display system 40 is configured to project images that have a three-dimensional (3D) appearance. In 3D image display systems, two images, each with a different polarization, are simultaneously projected by two different projectors. One image corresponds to the left eye, and the other image corresponds to the right eye. Conventional 3D image display systems typically suffer from a lack of brightness. In contrast, with one embodiment, a first plurality of the projection devices 112 may be used to produce any desired brightness for the first image (e.g., left eye image), and a second plurality of the projection devices 112 may be used to produce any desired brightness for the second image (e.g., right eye image). In another embodiment, image display system 40 may be combined or used with other display systems or display techniques, such as tiled displays.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.
This application is related to U.S. patent application Ser. No. 11/080,583, filed Mar. 15, 2005, and entitled PROJECTION OF OVERLAPPING SUB-FRAMES ONTO A SURFACE; and U.S. patent application Ser. No. 11/080,223, filed Mar. 15, 2005, and entitled PROJECTION OF OVERLAPPING SINGLE-COLOR SUB-FRAMES ONTO A SURFACE. These applications are incorporated by reference herein.