This application is related to U.S. patent application Ser. No. 11/080,583, filed Mar. 15, 2005, and entitled PROJECTION OF OVERLAPPING SUB-FRAMES ONTO A SURFACE; and U.S. patent application Ser. No. 11/080,223, filed Mar. 15, 2005, and entitled PROJECTION OF OVERLAPPING SINGLE-COLOR SUB-FRAMES ONTO A SURFACE. These applications are incorporated by reference herein.
In display systems, such as digital light processor (DLP) systems and liquid crystal display (LCD) systems, it is often desirable to include information in displayed images that is usable by the display system or another electronic device. With many display systems, however, information that is added into displayed images for use by display systems may be seen by a viewer. The information may detract from the viewer's enjoyment of the displayed images. It would be desirable to be able to include information in visual images formed by a display system where the information is imperceptible by a viewer.
According to one embodiment, a method is provided that includes modifying at least a first pixel value in a first portion of a first image frame to create a first difference between the first pixel value in the first image frame and a first pixel value in a first portion of a second image frame, where the first difference, at least in part, represents a first code, and providing the first image frame and the second image frame to a display device for display at different times.
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., may be used with reference to the orientation of the Figure(s) being described. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
As described herein, a system and method for coding image frames is provided. The system and method contemplate forming codes based at least in part on color differences in visual content by modifying content in corresponding portions of image frames and displaying the image frames with the modified content. Images are captured of the modified content in the displayed images, and the color differencing codes are detected by searching for the color differencing codes in the captured images. The color differencing codes may be formed in the image frames so that they are imperceptible or substantially visually imperceptible by a viewer of the displayed images.
The color differencing codes may be used for geometric calibration within a display system. The color differencing codes may also be used by other processing systems to perform other suitable functions such as forensic marking or watermarking content.
I. Color Differencing Codes in Image Frames for Display in Non-Overlapping Images
Image display system 10A processes image data 12 into a set of image frames 16(1)-16(M), where M is an integer that is greater than or equal to one, and generates corresponding displayed images 24(1)-24(M). Image data 12 may be read from a storage medium (e.g., a hard disk drive or a DVD) (not shown), streamed over a network (not shown), received from an image capture source (e.g., a camera) (not shown), or otherwise provided to image display system 10A. An image frame buffer 14 receives and buffers image data 12 to create image frames 16. An encode unit 18 processes image frames 16(1)-16(M) from buffer 14 to define corresponding encoded frames 20(1)-20(M) with encoded portions 21(1)-21(M) and provides encoded frames 20(1)-20(M) to a display device 22 across a connection 19.
Display device 22 receives encoded frames 20(1)-20(M) and stores encoded frames 20(1)-20(M) in a frame buffer (not shown). Display device 22 displays encoded frames 20(1)-20(M) with corresponding encoded portions 21(1)-21(M) onto a display surface (not shown) to produce displayed images 24(1)-24(M) with corresponding displayed encoded portions 25(1)-25(M) for viewing by a user.
Display system 10A includes at least one camera 32 configured to capture images 34(1)-34(M) to include encoded portions 35(1)-35(M) that correspond to encoded portions 25(1)-25(M) in displayed images 24(1)-24(M) on the display surface. Camera 32 includes any suitable image capture device or devices configured to capture at least encoded portions 25 of displayed images 24 from the display surface. Camera 32 may include a single sensor in one embodiment.
A decode unit 36 analyzes encoded portions 35 of captured images 34 to decode information that was encoded into encoded portions 21 of encoded frames 20 by encode unit 18. Decode unit 36 optionally provides the decoded information to display device 22 as indicated by an optional connection 38 and/or an optional processing system 42 as indicated by an optional connection 40. Display device 22, decode unit 36, and/or processing system 42 may perform operations based on the content of the decoded information as described in additional detail below.
The process of encoding information into frames 20 will now be described with reference to
Encode unit 18 encodes information into encoded frames 20 by forming one or more color differencing codes in encoded frames 20. Each code is encoded as a difference between corresponding pixel values in at least two portions 21 of at least two encoded frames 20. Depending on the coding scheme that is implemented, the magnitude of the difference may be constant or may vary between different color channels, different pixel locations, and/or different time instances. The predetermined magnitudes may be determined or adjusted empirically or statistically by accumulating a standard deviation map (not shown) for each color channel from known patterns that are displayed by display device 22 and captured by camera 32.
As an example, encode unit 18 may encode a code as a difference between a pixel value in portion 21(1) of encoded frame 20(1) and a corresponding pixel value in portion 21(2) of encoded frame 20(2). In one embodiment, encode unit 18 forms the difference into one of three possible states: positive, negative, or neutral. The difference may be positive (i.e., the pixel value in portion 21(1) is greater than the pixel value in portion 21(2) by a predetermined magnitude), negative (i.e., the pixel value in portion 21(1) is less than the pixel value in portion 21(2) by a predetermined magnitude), or neutral (i.e., the pixel value in portion 21(1) is approximately equal to or within a predetermined range of the pixel value in portion 21(2)).
In other embodiments, additional thresholds may also be defined so that the difference may represent more than three states. For example, encode unit 18 may encode the difference to be strongly positive (i.e., the pixel value in portion 21(1) is greater than the pixel value in portion 21(2) by a first predetermined magnitude), weakly positive (i.e., the pixel value in portion 21(1) is greater than the pixel value in portion 21(2) by a second predetermined magnitude that is less than the first predetermined magnitude), neutral, weakly negative, or strongly negative by using two positive predetermined magnitudes and two negative predetermined magnitudes.
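For illustration only, the following Python sketch shows one way a three-state (positive, negative, neutral) difference could be written into a pair of corresponding 8-bit pixel values; the magnitude DELTA, the split of the difference between the two frames, and the handling of values near zero or saturation are assumptions for this sketch, not requirements of the embodiments described above.

```python
import numpy as np

DELTA = 6  # hypothetical predetermined magnitude, in 8-bit code values


def encode_difference(value_a, value_b, state):
    """Adjust a pair of corresponding pixel values so that (a - b) encodes
    one of three states: +1 (positive), -1 (negative), or 0 (neutral).

    value_a, value_b: original 8-bit pixel values from the two frames.
    Returns the modified pair; half of the difference is pushed into each
    frame so the visible change per frame stays small.
    """
    if state == 0:
        # Neutral: make the two values approximately equal.
        mean = (int(value_a) + int(value_b)) // 2
        return mean, mean
    half = DELTA // 2
    a = int(value_a) + state * half
    b = int(value_b) - state * (DELTA - half)
    # Near zero or saturation, shift the whole difference into the frame
    # that still has headroom rather than letting it clip away.
    if a < 0 or a > 255:
        a = int(value_a)
        b = int(value_a) - state * DELTA
    if b < 0 or b > 255:
        b = int(value_b)
        a = int(value_b) + state * DELTA
    return int(np.clip(a, 0, 255)), int(np.clip(b, 0, 255))
```

For example, encode_difference(128, 128, +1) returns (131, 125), a difference of +6, while encode_difference(255, 255, +1) pushes the entire difference into the second frame to avoid clipping at the saturation value.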
Portions 21 that encode each code may each be a single pixel, a contiguous set of two or more pixels, or a disjointed set of two or more pixels in encoded frames 20. A set of two or more corresponding portions 21 that encodes a code is configured such that each portion 21 is displayed at the same location on a display surface (not shown) (i.e., encoded portions 25 in displayed images 24) at different times when displayed by display device 22. The set of portions 21 may be included in a continuous set of encoded frames 20 (i.e., frames 20 that are adjacent to one another in time such as frames 20(1) and 20(2)) or in a discontinuous set of encoded frames 20 (i.e., frames 20 that are not adjacent to one another in time such as frames 20(1) and 20(3)).
Accordingly, each code may be decoded by examining the difference between encoded portions 25 in at least two displayed images 24 where encoded portions 25 are displayed at the same location on a display surface by display device 22. In the example above, a code may be decoded by examining the difference between encoded portion 25(1), which corresponds to encoded portion 21(1), and encoded portion 25(2), which corresponds to encoded portion 21(2).
In
Encode unit 18 modifies the pixel values in encoded portions 21 so that the difference between the modified pixel values and the original pixel values is sufficient to allow for detection of the code by decode unit 36 but is visually imperceptible or substantially visually imperceptible by a viewer when encoded frames 20 are displayed by display device 22. To do so, encode unit 18 modifies the pixel values so that the modified values of one or more portions 21 are within a range of pixel values that are indistinguishable or substantially indistinguishable to a viewer. In one embodiment, the range of values may be larger for high frequency content than for low frequency content to ensure that the codes are visually imperceptible by a viewer. Encode unit 18 may account for pixel values in portions 21 that are at or near a zero (i.e., minimum) value or a saturation (i.e., maximum) value by modifying pixel values in corresponding portions 21 that are not at or near the zero or saturation value. Encode unit 18 may also account for such pixel values by modifying both the zero or saturation value and the pixel values in corresponding portions 21.
In one embodiment illustrated in
In the example of
To encode portions 21(N) and 21(N+P), encode unit 18 modifies the r, g, and b values of portion 21(N) and/or the r′, g′, and b′ values of portion 21(N+P) so that a difference, ΔR, between pixel values r′ and r is positive (+), negative (−), or neutral (0), a difference, ΔG, between pixel values g′ and g is positive (+), negative (−), or neutral (0), and a difference, ΔB, between pixel values b′ and b is positive (+), negative (−), or neutral (0).
By using each of the three color channels in this embodiment, encode unit 18 may encode one of 26 possible codes using the possible state combinations of ΔR, ΔG, and ΔB (i.e., [(3 ΔR states)×(3 ΔG states)×(3 ΔB states)]−[1 state that may be difficult to detect where ΔR=0, ΔG=0, and ΔB=0]) as shown in a possible states graph 72 and indicated by an arrow 74. For example, a first code may be encoded as the combination of ΔR=+, ΔG=+, and ΔB=+, and a second code may be encoded as the combination of ΔR=+, ΔG=−, and ΔB=0. In other embodiments, encode unit 18 may form codes using fewer than all three color channels.
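As a sketch of one possible ordering of these state combinations, the following Python fragment enumerates the 26 usable (ΔR, ΔG, ΔB) triples and maps a code index to and from its triple; the particular assignment of code indices to combinations is hypothetical, since the description above leaves it open.

```python
from itertools import product

# The 26 usable (dR, dG, dB) state combinations: each channel's difference
# may be positive (+1), negative (-1), or neutral (0); the all-neutral
# combination is excluded because it is difficult to detect.
STATES = [s for s in product((+1, 0, -1), repeat=3) if s != (0, 0, 0)]
assert len(STATES) == 26


def code_to_states(code):
    """Map a code index (0..25) to its (dR, dG, dB) state triple."""
    return STATES[code]


def states_to_code(states):
    """Map a detected (dR, dG, dB) state triple back to its code index."""
    return STATES.index(tuple(states))
```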
Since the human visual system is sensitive to even subtle changes in intensity or luminance, it may be preferred to preserve this component and operate only in the less sensitive chrominance space. In one embodiment illustrated in
In the example of
To encode portions 21(N) and 21(N+P), encode unit 18 modifies the x, y, and z values of portion 21(N) and/or the x′, y′, and z′ values of portion 21(N+P) so that a difference, ΔX, between chrominance values x′ and x is positive (+), negative (−), or neutral (0) and a difference, ΔZ, between chrominance values z′ and z is positive (+), negative (−), or neutral (0).
By using both of the chrominance channels in this embodiment, encode unit 18 may encode one of 8 possible codes using the possible state combinations of ΔX and ΔZ (i.e., [(3 ΔX states)×(3 ΔZ states)]−[1 indeterminate state (i.e., ΔX=0 and ΔZ=0)]) as shown in a possible states graph 82 and indicated by an arrow 84. For example, a first code may be encoded as the combination of ΔX=+ and ΔZ=+, and a second code may be encoded as the combination of ΔX=− and ΔZ=0. In other embodiments, encode unit 18 may form codes using only one chrominance channel.
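The sketch below illustrates the idea of shifting only chrominance while preserving luminance; it assumes the BT.601 YCbCr color space as a stand-in for the luminance-chrominance space described above, and a hypothetical magnitude delta.

```python
import numpy as np

# BT.601 RGB <-> YCbCr matrices, used here only as a stand-in for the
# luminance-chrominance space described above.
RGB2YCC = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])
YCC2RGB = np.linalg.inv(RGB2YCC)


def encode_chroma_state(rgb, d_x, d_z, delta=4.0):
    """Shift only the two chrominance components of an RGB pixel by
    (d_x, d_z) in {+1, 0, -1} times delta, leaving luminance untouched."""
    ycc = RGB2YCC @ np.asarray(rgb, dtype=float)
    ycc[1] += d_x * delta
    ycc[2] += d_z * delta
    return np.clip(YCC2RGB @ ycc, 0, 255)
```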
In other embodiments, color spaces other than the red, green, and blue color space and luminance-chrominance color spaces may be used to encode color differencing codes.
In each of the above examples in
Encode unit 18 may also be configured to encode more than one code into each set of encoded frames 20. Encode unit 18 may further be configured to encode a single code using two or more portions 21 in each encoded frame 20. These embodiments are illustrated with reference to
In one embodiment, encode unit 18 encodes a first code into portions 21(N)A and 21(N+P)A and a second, unrelated code into portions 21(N)B and 21(N+P)B using one or more of RGB or chrominance color channels as described above with reference to
In another embodiment, encode unit 18 encodes a first part of a code into portions 21(N)A and 21(N+P)A and a second part of the code into portions 21(N)B and 21(N+P)B using one or more of RGB or chrominance color channels as described above with reference to
Although shown as disjointed in the example of
In one embodiment, encode unit 18 forms overlapping spatial groupings (e.g., 2×2 regions) of color differencing codes in portions 21 of encoded frame 20. Where 2×2 regions are used, encode unit 18 may form 358,800 (i.e., 26!/(26−4)!) unique codes using three color channels (e.g., red, green, and blue color channels) or 1,680 (i.e., 8!/(8−4)!) unique codes using two color channels (i.e., X and Z chrominance channels). The spatial groupings may be rotation independent and/or phase dependent.
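The permutation counts cited above can be checked, and the 2×2 groupings enumerated, with a short Python fragment; treating each grouping as an ordered 4-tuple of distinct codes simply restates the formulas given above.

```python
from itertools import permutations
from math import factorial


def unique_groupings(num_codes, region_size=4):
    """Number of ordered groupings of `region_size` distinct codes:
    num_codes! / (num_codes - region_size)!."""
    return factorial(num_codes) // factorial(num_codes - region_size)


print(unique_groupings(26))  # 358800 with three color channels
print(unique_groupings(8))   # 1680 with two chrominance channels

# The groupings themselves are ordered 4-tuples of distinct code indices
# laid out over a 2x2 pixel region; the first few for the 8-code case:
print(list(permutations(range(8), 4))[:3])
```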
Subsequent to forming color differencing codes in encoded frames 20, encode unit 18 provides encoded frames 20 to display device 22 for display at different times as indicated in a block 54. Display device 22 successively displays encoded frames 20 to form successively displayed images 24 on a display surface (not shown). Displayed images 24 include encoded portions 25 that correspond to encoded portions 21 of encoded frames 20.
The process of decoding information from frames 34 will now be described with reference to
Decode unit 36 decodes information from captured frames 34 by detecting one or more color differencing codes in two or more captured frames 34. Camera 32 captures captured frames 34 to include displayed portions 25 of displayed images 24 that correspond to encoded portions 21 of encoded frames 20. Camera 32 may also be precalibrated so that decode unit 36 may detect the color differencing codes at least in part from the spectral properties of the camera sensor (not shown) of camera 32.
As described above, each code is encoded as a difference between corresponding pixel values in at least two portions 21 of at least two encoded frames 20, and the magnitude of the difference may be constant or may vary between different color channels, different pixel locations, and/or different time instances. Decode unit 36 includes or otherwise accesses decoding information (not shown) that indicates the magnitude of the difference of the color differencing codes for each color channel, pixel location, and/or time instance for captured frames 34. The decoding information may or may not specify regions of captured frames 34 for decode unit 36 to examine to detect color differencing codes.
As an example, decode unit 36 may decode a code from a difference between a pixel value in portion 35(1) of captured frame 34(1) and a corresponding pixel value in portion 35(2) of captured frame 34(2). In one embodiment, the difference may be positive (i.e., the pixel value in portion 35(1) is greater than the pixel value in portion 35(2) by a predetermined magnitude), negative (i.e., the pixel value in portion 35(1) is less than the pixel value in portion 35(2) by a predetermined magnitude), or neutral (i.e., the pixel value in portion 35(1) is approximately equal to or within a predetermined range of the pixel value in portion 35(2)).
Portions 35 that include each code may each be a single pixel, a contiguous set of two or more pixels, or a disjointed set of two or more pixels in captured frames 34. A set of two or more corresponding portions 35 that include a code may be included in a continuous set of captured frames 34 (i.e., frames 34 that are adjacent to one another in time such as frames 34(1) and 34(2)) or in a discontinuous set of captured frames 34 (i.e., frames 34 that are not adjacent to one another in time such as frames 34(1) and 34(3)).
In
Decode unit 36 decodes one or more color differencing codes from pixel values in a portion 35 of the first captured frame 34 and a portion 35 of the second captured frame 34 as indicated in a block 66. Decode unit 36 identifies color differencing codes in captured frames 34 by detecting positive, negative, or minimal differences between corresponding pixel values in captured frames 34 that are equal to or exceed a predetermined magnitude for a pixel location or locations, a color channel, or time instance. Decode unit 36 determines whether each pixel value in one captured image 34 is greater than or less than a corresponding pixel value in another captured image 34 by a predetermined magnitude or equal to the corresponding pixel value in another captured image 34. Where the differences between corresponding pixel values are approximately equal to a predetermined magnitude or are close to zero, decode unit 36 identifies a color differencing code.
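A minimal Python sketch of this classification step is shown below; the threshold names `magnitude` and `neutral_band` are hypothetical placeholders for the decoding information (e.g., a per-channel, per-location standard-deviation map) described above.

```python
import numpy as np


def classify_difference(captured_a, captured_b, magnitude, neutral_band):
    """Classify the per-pixel difference between two captured frames as
    +1 (positive), -1 (negative), or 0 (neutral).

    A difference is called positive or negative only if its magnitude is at
    least `magnitude`, and neutral only if it falls inside `neutral_band`
    around zero; everything else is left undecided (None).
    """
    diff = captured_a.astype(float) - captured_b.astype(float)
    states = np.full(diff.shape, None, dtype=object)
    states[diff >= magnitude] = +1
    states[diff <= -magnitude] = -1
    states[np.abs(diff) <= neutral_band] = 0
    return states
```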
In one embodiment illustrated in
In the example of
To decode portions 35(N) and 35(N+P), decode unit 36 determines whether a difference, ΔR, between pixel values r′ and r exceeds a predetermined positive or negative threshold (i.e., + or −) or is approximately zero (i.e., 0), a difference, ΔG, between pixel values g′ and g exceeds a predetermined positive or negative threshold (i.e., + or −) or is approximately zero (i.e., 0), and a difference, ΔB, between pixel values b′ and b exceeds a predetermined positive or negative threshold (i.e., + or −) or is approximately zero (i.e., 0).
From each of the three color channels in this embodiment, decode unit 36 may decode one of 26 possible codes using the possible state combinations of ΔR, ΔG, and ΔB (i.e., [(3 ΔR states)×(3 ΔG states)×(3 ΔB states)]−[1 state that may be difficult to detect where ΔR=0, ΔG=0, and ΔB=0]) as shown in possible states graph 72 in
In one embodiment illustrated in
In the example of
To decode portions 35(N) and 35(N+P), decode unit 36 determines whether a difference, ΔX, between chrominance values x′ and x exceeds a predetermined positive or negative threshold (i.e., + or −) or is approximately zero (i.e., 0) and a difference, ΔZ, between chrominance values z′ and z exceeds a predetermined positive or negative threshold (i.e., + or −) or is approximately zero (i.e., 0).
By using both of the chrominance channels in this embodiment, decode unit 36 may decode one of 8 possible codes using the possible state combinations of ΔX and ΔZ (i.e., [(3 ΔX states)×(3 ΔZ states)]−[1 indeterminate state (i.e., ΔX=0 and ΔZ=0)]) as shown in possible states graph 82 in
In each of the above examples in
Decode unit 36 may also be configured to decode more than one code from each set of captured frames 34. Decode unit 36 may further be configured to decode a single code from two or more portions 35 in each captured frame 34. These embodiments are illustrated with reference to
In one embodiment, decode unit 36 decodes a first code from portions 35(N)A and 35(N+P)A and a second, unrelated code from portions 35(N)B and 35(N+P)B using one or more of RGB or chrominance color channels as described above with reference to
In another embodiment, decode unit 36 decodes a first part of a code from portions 35(N)A and 35(N+P)A and a second part of the code from portions 35(N)B and 35(N+P)B using one or more of RGB or chrominance color channels as described above with reference to
Although shown as disjointed in the example of
In one embodiment, decode unit 36 decodes color differencing codes from overlapping spatial groupings (e.g., 2×2 regions) of portions 35 of captured frames 34. Where 2×2 regions are used, decode unit 36 may detect 358,800 (i.e., 26!/(26−4)!) unique codes using three color channels (e.g., red, green, and blue color channels) or 1,680 (i.e., 8!/(8−4)!) unique codes using two color channels (i.e., X and Z chrominance channels).
In one embodiment, decode unit 36 updates correspondence information between display device 22 and camera 32 using the color differencing codes identified by decode unit 36. One embodiment of the use of color differencing codes to update correspondence information will now be described with reference to
In one embodiment, encode unit 18 forms 2×2 overlapping spatial groupings of color differencing codes in portions 21 of encoded frame 20 using two color channels (e.g., X and Z chrominance channels). In
Decode unit 36 decodes each 2×2 grouping in subsets 99 and 100 from portions 35 of captured frames 34. Decode unit 36 determines the geometric mapping between display device 22 and camera 32 by identifying transitions and intersections between the 2×2 groupings to sub-pixel accuracy and by mapping the decoded sub-pixel coordinates in the camera space of camera 32 to the corresponding coordinates in the code arrangement defined in the display space of display device 22. The geometric mapping may be used directly as a vector field to map between the camera and projector spaces, or an appropriate parametric model may be applied to fit the mapping data.
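As one example of fitting a parametric model to the decoded correspondences, the sketch below estimates a 3×3 homography from matched camera-space and display-space points with a direct linear transform; the description above does not prescribe a homography, so this model is only an illustrative choice.

```python
import numpy as np


def fit_homography(camera_pts, display_pts):
    """Fit a 3x3 homography mapping camera coordinates to display
    coordinates from matched points (e.g., decoded code-grouping
    intersections), using the direct linear transform (least squares)."""
    rows = []
    for (x, y), (u, v) in zip(camera_pts, display_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```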
In the above embodiments, image display system 10A may incorporate any suitable code redundancy and error correction algorithms to enhance the code robustness and validation process in system 10A. Decode unit 36 may, for robustness, evaluate the stability of the decoded information in space and/or time. For example, decode unit 36 may use an extended Kalman filter or probabilistic analysis to track the likelihood of the code's appearance.
In one embodiment, image display system 10A uses the color differencing codes for geometric calibration. To do so, image display system 10A updates correspondence information between display device 22, which displays encoded frames 20 as displayed images 24 on the display surface, and camera 32, which captures images 34 to include portions 35 from display surface 26. Display device 22 may adjust geometric features of displayed images 24 using the correspondence information. The geometric features may include, for example, the size and shape of displayed images 24 on the display surface.
In this embodiment, a viewer of displayed images 24 does not see any fiducial marks in displayed images 24 because the “markings” used by image display system 10A to update the correspondence information are formed by the codes within displayed images 24. A simple pattern of codes may be used to create structured light patterns for correspondence estimation. Accordingly, the correspondence information of image display system 10A may be updated during normal operation without interrupting the viewing of displayed images 24.
In another embodiment, processing system 42 uses the color differencing codes to perform other suitable functions such as forensic marking or watermarking content to track or authenticate image data 12. In these embodiments, encode unit 18 may create temporal and/or spatial “signatures” based on the codes for more sophisticated applications including forensic marking and watermarking, etc. Moreover, encode unit 18 may also make “messages” consisting of a particular sequence of codes, etc.
In a further embodiment, image display system 10A uses the color differencing codes to allow a viewer to interact with system 10A. For example, camera 32, decode unit 36 and processing system 42 may be located in a remote control device that is configured to operate or otherwise function with display device 22.
In yet another embodiment, image display system 10A uses the color differencing codes to perform automatic keystone correction of display device 22.
In the embodiment of
In the embodiment of
Image frame buffer 14 includes memory for storing image data 12 for image frames 16. Thus, image frame buffer 14 constitutes a database of image frames 16. Examples of image frame buffer 14 include non-volatile memory (e.g., a hard disk drive or other persistent storage device) and may include volatile memory (e.g., random access memory (RAM)).
Encode unit 18 and decode unit 36 may be implemented in hardware, software, firmware, or any combination thereof. For example, encode unit 18 and/or decode unit 36 may include a microprocessor, programmable logic device, or state machine. Encode unit 18 and/or decode unit 36 may also form a program stored on one or more computer-readable mediums that is accessible and executable by a processing system (not shown). The term computer-readable medium as used herein is defined to include any kind of memory, volatile or non-volatile, such as floppy disks, hard disks, CD-ROMs, flash memory, read-only memory, and random access memory.
Encode unit 18, display device 22, decode unit 36, and/or camera 32 may each be integrated within image display system 10A as shown in
Display device 22 includes any suitable device or devices (e.g., a conventional projector, an LCD projector, a digital micromirror device (DMD) projector, a CRT display, an LCD display, or a DMD display) that are configured to display displayed images 24 onto or in display surface 26. Display surface 26 may be planar, non-planar, curved, or have any other suitable shape. In one embodiment, display surface 26 reflects light projected by display device 22 to form displayed images 24. In another embodiment, display surface 26 is translucent, and display system 10A is configured as a rear projection system.
In other embodiments, other display devices (not shown) may form images (not shown) on display surface 26 that overlap with displayed image 24. For example, display device 22 may be a portion of a hybrid display system with another type of display device that also displays images onto or in the display surface. In these embodiments, any images that would overlap with the display of portions 25 on the display surface may be configured not to interfere with the display of portions 25 on the display surface.
II. Color Differencing Codes in Image Frames for Display in Overlapping Images
Image display system 10B processes image data 102 and generates a corresponding displayed image 114 on a display surface 116. In the embodiment of
Image frame buffer 104 receives and buffers image data 102 to create image frames 106. Sub-frame generator 108 processes image frames 106 to define corresponding encoded image sub-frames 110(1)-110(R) (collectively referred to as sub-frames 110), where R is an integer that is greater than or equal to two. For each image frame 106, sub-frame generator 108 generates one sub-frame 110 for each projector 112 in one embodiment. Sub-frames 110(1)-110(R) are received by projectors 112(1)-112(R), respectively, and stored in image frame buffers 113(1)-113(R) (collectively referred to as image frame buffers 113), respectively. Projectors 112(1)-112(R) project the sub-frames 110(1)-110(R), respectively, onto display surface 116 in at least partially overlapping and spatially offset positions to produce displayed image 114 for viewing by a user.
In one embodiment, image display system 10B attempts to determine appropriate values for the sub-frames 110 so that displayed image 114 produced by the projected sub-frames 110 is close in appearance to how a corresponding high-resolution image (e.g., a corresponding image frame 106) from which the sub-frame or sub-frames 110 were derived would appear if displayed directly.
Also shown in
Display system 10B includes at least one camera 32 and calibration unit 124, which are used to automatically determine a geometric relationship between each projector 112 and the reference projector 118, as described in further detail below with reference to the embodiments of
Sub-frame generator 108 forms sub-frames 110 according to a geometric relationship between each of projectors 112 using camera-to-projector correspondence information 127 as described in additional detail below with reference to the embodiments of
In one embodiment, sub-frame generator 108 generates image sub-frames 110 with a resolution that matches the resolution of projectors 112, which is less than the resolution of image frames 106 in one embodiment. Sub-frames 110 each include a plurality of columns and a plurality of rows of individual pixels representing a subset of an image frame 106.
In one embodiment, display system 10B is configured to give the appearance to the human eye of high-resolution displayed images 114 by displaying overlapping and spatially shifted lower-resolution sub-frames 110. The projection of overlapping and spatially shifted sub-frames 110 may give the appearance of enhanced resolution (i.e., higher resolution than the sub-frames 110 themselves).
Sub-frames 110 projected onto display surface 116 may have perspective distortions, and the pixels may not appear as perfect squares with uniform offsets and overlaps from pixel to pixel, such as those shown in
Sub-frame 110(1) is spatially offset from sub-frame 110(2) by a predetermined distance. Similarly, sub-frame 110(3) is spatially offset from sub-frame 110(4) by a predetermined distance. In one illustrative embodiment, vertical distance 204 and horizontal distance 206 are each approximately one-half of one pixel.
The display of sub-frames 110(2), 110(3), and 110(4) are spatially shifted relative to the display of sub-frame 110(1) by vertical distance 204, horizontal distance 206, or a combination of vertical distance 204 and horizontal distance 206. As such, pixels 202 of sub-frames 110(1), 110(2), 110(3), and 110(4) at least partially overlap thereby producing the appearance of higher resolution pixels. Sub-frames 110(1), 110(2), 110(3), and 110(4) may be superimposed on one another (i.e., fully or substantially fully overlap), may be tiled (i.e., partially overlap at or near the edges), or may be a combination of superimposed and tiled. The overlapped sub-frames 110(1), 110(2), 110(3), and 110(4) also produce a brighter overall image than any of sub-frames 110(1), 110(2), 110(3), or 110(4) alone.
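The following sketch is a purely illustrative simulation of how four half-pixel-offset low-resolution sub-frames accumulate on a grid of twice the resolution; it ignores the optimal sub-frame generation described later and simply replicates and shifts pixels.

```python
import numpy as np


def superimpose_half_pixel_offsets(subframes):
    """Accumulate four equal-size low-resolution sub-frames onto a grid of
    twice the resolution, shifting each by half a low-resolution pixel
    (one high-resolution pixel) horizontally, vertically, or both."""
    h, w = subframes[0].shape
    hi = np.zeros((2 * h + 1, 2 * w + 1))
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # in high-resolution pixels
    for sf, (dy, dx) in zip(subframes, offsets):
        up = np.kron(sf, np.ones((2, 2)))  # each low-res pixel covers 2x2
        hi[dy:dy + 2 * h, dx:dx + 2 * w] += up
    return hi
```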
In the embodiment of
Projectors 112 display encoded sub-frames 110 so that displayed portions 117 that correspond to encoded portions 111 appear in displayed images 114. Camera 32 captures captured frames 123 from display surface 116 to include portions 125 that correspond to displayed portions 117. Calibration unit 124 includes decode unit 36. Decode unit 36 operates in image display system 10B as described above with reference to image display system 10A (
Image display system 10B may use the color differencing codes for forensic marking, watermarking content, or performing other suitable functions. Image display system 10B may also use the color differencing codes for geometric calibration for each projector 112 using non-overlapping display areas of sub-frames 110. In other embodiments, image display system 10B may use the color differencing codes for individual encoded sub-frames 110.
Image display system 10B includes hardware, software, firmware, or a combination of these. In one embodiment, one or more components of image display system 10B are included in a computer, computer server, or other microprocessor-based system capable of performing a sequence of logic operations. In addition, processing can be distributed throughout the system with individual portions being implemented in separate system components, such as in networked or multiple computing unit environments.
Sub-frame generator 108 and calibration unit 124 may be implemented in hardware, software, firmware, or any combination thereof and may be combined into a unitary processing system. For example, sub-frame generator 108 and calibration unit 124 may include a microprocessor, programmable logic device, or state machine. Sub-frame generator 108 and calibration unit 124 may also include software stored on one or more computer-readable mediums and executable by a processing system (not shown). The term computer-readable medium as used herein is defined to include any kind of memory, volatile or non-volatile, such as floppy disks, hard disks, CD-ROMs, flash memory, read-only memory, and random access memory. Encode unit 18 and decode unit 36 may be independent from, rather than integrated with, sub-frame generator 108 and calibration unit 124, respectively, in other embodiments.
Image frame buffer 104 includes memory for storing image data 102 for image frames 106. Thus, image frame buffer 104 constitutes a database of image frames 106. Image frame buffers 113 also include memory for storing any number of sub-frames 110. Examples of image frame buffers 104 and 113 include non-volatile memory (e.g., a hard disk drive or other persistent storage device) and may include volatile memory (e.g., random access memory (RAM)).
Display surface 116 may be planar, non-planar, curved, or have any other suitable shape. In one embodiment, display surface 116 reflects the light projected by projectors 112 to form displayed image 114. In another embodiment, display surface 116 is translucent, and display system 10B is configured as a rear projection system.
In other embodiments, other numbers of projectors 112 are used in system 10B and other numbers of sub-frames 110 are generated for each image frame 106.
In other embodiments, sub-frames 110(1), 110(2), 110(3), and 110(4) may be displayed at other spatial offsets relative to one another and the spatial offsets may vary over time.
In one embodiment, sub-frames 110 have a lower resolution than image frames 106. Thus, sub-frames 110 are also referred to herein as low-resolution images or sub-frames 110, and image frames 106 are also referred to herein as high-resolution images or frames 106. The terms low resolution and high resolution are used herein in a comparative fashion, and are not limited to any particular minimum or maximum number of pixels.
In one embodiment, display system 10B produces at least a partially superimposed projected output that takes advantage of natural pixel misregistration to provide a displayed image with a higher resolution than the individual sub-frames 110. In one embodiment, image formation due to multiple overlapped projectors 112 is modeled using a signal processing model. Optimal sub-frames 110 for each of the component projectors 112 are estimated by sub-frame generator 108 based on the model, such that the resulting image predicted by the signal processing model is as close as possible to the desired high-resolution image to be projected. In one embodiment described with reference to
In one embodiment, sub-frame generator 108 is configured to generate sub-frames 110 based on the maximization of the probability that, given a desired high-resolution image, a simulated high-resolution image that is a function of the sub-frame values is the same as the given desired high-resolution image. If the generated sub-frames 110 are optimal, the simulated high-resolution image will be as close as possible to the desired high-resolution image. The generation of optimal sub-frames 110 based on a simulated high-resolution image and a desired high-resolution image is described in further detail below with reference to the embodiment of
One form of the embodiment of
$Z_k = H_k D^T Y_k$   Equation I
The low-resolution sub-frame pixel data (Yk) is expanded with the up-sampling matrix (DT) so that sub-frames 110 (Yk) can be represented on a high-resolution grid. The interpolating filter (Hk) fills in the missing pixel data produced by up-sampling. In the embodiment shown in
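A rough Python rendering of Equation I is shown below; the bilinear kernel standing in for the interpolating filter H_k and the up-sampling factor of two are assumptions made only for illustration.

```python
import numpy as np
from scipy.ndimage import convolve


def simulate_subframe_on_highres_grid(y_k, factor=2):
    """Equation I sketch: up-sample the low-resolution sub-frame Y_k by
    zero insertion (D^T) and fill in the missing pixels with an
    interpolating filter (H_k), here a simple bilinear kernel."""
    h, w = y_k.shape
    z = np.zeros((factor * h, factor * w))
    z[::factor, ::factor] = y_k  # D^T: zero-insertion up-sampling
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])  # H_k: bilinear interpolation
    return convolve(z, kernel, mode="nearest")
```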
In one embodiment, the geometric mapping (Fk) is a floating-point mapping, but the destinations in the mapping are on an integer grid in image 304. Thus, it is possible for multiple pixels in image 302 to be mapped to the same pixel location in image 304, resulting in missing pixels in image 304. To avoid this situation, in one embodiment, during the forward mapping (Fk), the inverse mapping (Fk−1) is also utilized as indicated at 305 in
In another embodiment, the forward geometric mapping or warp (Fk) is implemented directly, and the inverse mapping (Fk−1) is not used. In one form of this embodiment, a scatter operation is performed to eliminate missing pixels. That is, when a pixel in image 302 is mapped to a floating point location in image 304, some of the image data for the pixel is essentially scattered to multiple pixels neighboring the floating point location in image 304. Thus, each pixel in image 304 may receive contributions from multiple pixels in image 302, and each pixel in image 304 is normalized based on the number of contributions it receives.
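The sketch below shows one way such a scatter-and-normalize forward warp could be implemented, distributing each source pixel bilinearly to the four destination pixels around its floating-point target; the bilinear weighting is an assumption, and `mapping` is a hypothetical callable standing in for the geometric mapping F_k.

```python
import numpy as np


def forward_warp_scatter(src, mapping, out_shape):
    """Forward-warp `src` by scattering each source pixel's value to the
    four destination pixels around its floating-point target location,
    then normalizing each destination pixel by the total weight it
    received, so that no destination pixels are left missing."""
    acc = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    h, w = src.shape
    for y in range(h):
        for x in range(w):
            u, v = mapping(x, y)  # floating-point destination coordinates
            x0, y0 = int(np.floor(u)), int(np.floor(v))
            fx, fy = u - x0, v - y0
            for dy, dx, wgt in ((0, 0, (1 - fx) * (1 - fy)),
                                (0, 1, fx * (1 - fy)),
                                (1, 0, (1 - fx) * fy),
                                (1, 1, fx * fy)):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= yy < out_shape[0] and 0 <= xx < out_shape[1]:
                    acc[yy, xx] += wgt * src[y, x]
                    weight[yy, xx] += wgt
    return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)
```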
A superposition/summation of such warped images 304 from all of the component projectors 112 forms a hypothetical or simulated high-resolution image 306 ({circumflex over (X)}, also referred to as X-hat herein) in reference projector frame buffer 120, as represented in the following Equation II:
If the simulated high-resolution image 306 (X-hat) in reference projector frame buffer 120 is identical to a given (desired) high-resolution image 308 (X), the system of component low-resolution projectors 112 would be equivalent to a hypothetical high-resolution projector placed at the same location as hypothetical reference projector 118 and sharing its optical path. In one embodiment, the desired high-resolution images 308 are the high-resolution image frames 106 received by sub-frame generator 108.
In one embodiment, the deviation of the simulated high-resolution image 306 (X-hat) from the desired high-resolution image 308 (X) is modeled as shown in the following Equation III:
$X = \hat{X} + \eta$   Equation III
As shown in Equation III, the desired high-resolution image 308 (X) is defined as the simulated high-resolution image 306 (X-hat) plus η, which in one embodiment represents zero mean white Gaussian noise.
The solution for the optimal sub-frame data (Yk*) for sub-frames 110 is formulated as the optimization given in the following Equation IV:
Thus, as indicated by Equation IV, the goal of the optimization is to determine the sub-frame values (Yk) that maximize the probability of X-hat given X. Given a desired high-resolution image 308 (X) to be projected, sub-frame generator 108 determines the component sub-frames 110 that maximize the probability that the simulated high-resolution image 306 (X-hat) is the same as or matches the “true” high-resolution image 308 (X).
Using Bayes rule, the probability P(X-hat|X) in Equation IV can be written as shown in the following Equation V:
$P(\hat{X} \mid X) = P(X \mid \hat{X}) \, P(\hat{X}) / P(X)$   Equation V
The term P(X) in Equation V is a known constant. If X-hat is given, then, referring to Equation III, X depends only on the noise term, η, which is Gaussian. Thus, the term P(X|X-hat) in Equation V will have a Gaussian form as shown in the following Equation VI:
To provide a solution that is robust to minor calibration errors and noise, a “smoothness” requirement is imposed on X-hat. In other words, it is assumed that good simulated images 306 have certain properties. The smoothness requirement according to one embodiment is expressed in terms of a desired Gaussian prior probability distribution for X-hat given by the following Equation VII:
In another embodiment, the smoothness requirement is based on a prior Laplacian model, and is expressed in terms of a probability distribution for X-hat given by the following Equation VIII:
The following discussion assumes that the probability distribution given in Equation VII, rather than Equation VIII, is being used. As will be understood by persons of ordinary skill in the art, a similar procedure would be followed if Equation VIII were used. Inserting the probability distributions from Equations VI and VII into Equation V, and inserting the result into Equation IV, results in a maximization problem involving the product of two probability distributions (note that the probability P(X) is a known constant and goes away in the calculation). By taking the negative logarithm, the exponents go away, the product of the two probability distributions becomes a sum of two probability distributions, and the maximization problem given in Equation IV is transformed into a function minimization problem, as shown in the following Equation IX:
The function minimization problem given in Equation IX is solved by substituting the definition of X-hat from Equation II into Equation IX and taking the derivative with respect to Yk, which results in an iterative algorithm given by the following Equation X:
$Y_k^{(n+1)} = Y_k^{(n)} - \Theta \left\{ D H_k^T F_k^T \left[ \left( \hat{X}^{(n)} - X \right) + \beta^2 \nabla^2 \hat{X}^{(n)} \right] \right\}$   Equation X
Equation X may be intuitively understood as an iterative process of computing an error in the hypothetical reference projector coordinate system and projecting it back onto the sub-frame data. In one embodiment, sub-frame generator 108 is configured to generate sub-frames 110 in real-time using Equation X. The generated sub-frames 110 are optimal in one embodiment because they maximize the probability that the simulated high-resolution image 306 (X-hat) is the same as the desired high-resolution image 308 (X), and they minimize the error between the simulated high-resolution image 306 and the desired high-resolution image 308. Equation X can be implemented very efficiently with conventional image processing operations (e.g., transformations, down-sampling, and filtering). The iterative algorithm given by Equation X converges rapidly in a few iterations and is very efficient in terms of memory and computation (e.g., a single iteration uses two rows in memory; and multiple iterations may also be rolled into a single step). The iterative algorithm given by Equation X is suitable for real-time implementation, and may be used to generate optimal sub-frames 110 at video rates, for example.
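The following sketch outlines the iterative update of Equation X in Python; `simulate` and `adjoint` are hypothetical callables standing in for the forward model (F_k H_k D^T summed over k) and its transpose (D H_k^T F_k^T), and the scalar step size used in place of the matrix Θ is an assumption for this sketch.

```python
from scipy.ndimage import laplace


def update_subframes(Y, X, simulate, adjoint, theta=0.2, beta=0.1, iters=5):
    """Equation X sketch: iteratively refine the list of low-resolution
    sub-frames Y so that the simulated high-resolution image approaches
    the desired image X.

    simulate(Y)     -> X_hat: applies F_k H_k D^T to each Y_k and sums them
    adjoint(err, k) -> low-res gradient for sub-frame k (D H_k^T F_k^T err)
    theta           -> scalar step size standing in for the matrix Theta
    beta            -> smoothness weight from the Gaussian prior
    """
    for _ in range(iters):
        x_hat = simulate(Y)
        err = (x_hat - X) + beta ** 2 * laplace(x_hat)
        Y = [y_k - theta * adjoint(err, k) for k, y_k in enumerate(Y)]
    return Y
```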
To begin the iterative algorithm defined in Equation X, an initial guess, Yk(0), for sub-frames 110 is determined. In one embodiment, the initial guess for sub-frames 110 is determined by texture mapping the desired high-resolution frame 308 onto sub-frames 110. In one embodiment, the initial guess is determined from the following Equation XI:
$Y_k^{(0)} = D B_k F_k^T X$   Equation XI
Thus, as indicated by Equation XI, the initial guess (Yk(0)) is determined by performing a geometric transformation (FkT) on the desired high-resolution frame 308 (X), and filtering (Bk) and down-sampling (D) the result. The particular combination of neighboring pixels from the desired high-resolution frame 308 that are used in generating the initial guess (Yk(0)) will depend on the selected filter kernel for the interpolation filter (Bk).
In another embodiment, the initial guess, Yk(0), for sub-frames 110 is determined from the following Equation XII:
$Y_k^{(0)} = D F_k^T X$   Equation XII
Equation XII is the same as Equation XI, except that the interpolation filter (Bk) is not used.
Several techniques are available to determine the geometric mapping (Fk) between each projector 112 and hypothetical reference projector 118, including manually establishing the mappings, using structured light coding, or using camera 32 and calibration unit 124 to automatically determine the mappings. In one embodiment, if camera 32 and calibration unit 124 are used, the geometric mappings between each projector 112 and camera 32 are determined by calibration unit 124. These projector-to-camera mappings may be denoted by Tk, where k is an index for identifying projectors 112. Based on the projector-to-camera mappings (Tk), the geometric mappings (Fk) between each projector 112 and hypothetical reference projector 118 are determined by calibration unit 124, and provided to sub-frame generator 108. For example, in a display system 10B with two projectors 112(1) and 112(2), assuming the first projector 112(1) is hypothetical reference projector 118, the geometric mapping of the second projector 112(2) to the first (reference) projector 112(1) can be determined as shown in the following Equation XIII:
$F_2 = T_2 T_1^{-1}$   Equation XIII
Calibration unit 124 continually or periodically determines (e.g., once per frame 106) the geometric mappings (Fk), stores the geometric mappings (Fk) as camera-to-projector correspondence information 127, and provides updated values for the mappings to sub-frame generator 108.
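When the projector-to-camera mappings are representable as 3×3 homographies, Equation XIII reduces to a single matrix product, as in the sketch below; treating the T_k mappings as homographies is an assumption made only for this example.

```python
import numpy as np


def projector_to_reference_mapping(T_k, T_ref):
    """Equation XIII as a matrix product: given 3x3 homographies T_k
    (projector k to camera) and T_ref (reference projector to camera),
    the mapping from projector k to the reference projector is
    F_k = T_k @ inv(T_ref)."""
    return T_k @ np.linalg.inv(T_ref)
```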
One embodiment provides an image display system 10B with multiple overlapped low-resolution projectors 112 coupled with an efficient real-time (e.g., video rates) image processing algorithm for generating sub-frames 110. In one embodiment, multiple low-resolution, low-cost projectors 112 are used to produce high resolution images at high lumen levels, but at lower cost than existing high-resolution projection systems, such as a single, high-resolution, high-output projector. One embodiment provides a scalable image display system 10B that can provide virtually any desired resolution, brightness, and color, by adding any desired number of component projectors 112 to the system 10B.
In some existing display systems, multiple low-resolution images are displayed with temporal and sub-pixel spatial offsets to enhance resolution. There are some important differences between these existing systems and embodiments described herein. For example, in one embodiment, there is no need for circuitry to offset the projected sub-frames 110 temporally. In one embodiment, sub-frames 110 from the component projectors 112 are projected “in-sync”. As another example, unlike some existing systems where all of the sub-frames go through the same optics and the shifts between sub-frames are all simple translational shifts, in one embodiment, sub-frames 110 are projected through the different optics of the multiple individual projectors 112. In one embodiment, the signal processing model that is used to generate optimal sub-frames 110 takes into account relative geometric distortion among the component sub-frames 110, and is robust to minor calibration errors and noise.
It can be difficult to accurately align projectors into a desired configuration. In one embodiment, regardless of what the particular projector configuration is, even if it is not an optimal alignment, sub-frame generator 108 determines and generates optimal sub-frames 110 for that particular configuration.
Algorithms that seek to enhance resolution by offsetting multiple projection elements have been previously proposed. These methods may assume simple shift offsets between projectors, use frequency domain analyses, and rely on heuristic methods to compute component sub-frames. In contrast, one form of the embodiments described herein utilizes an optimal real-time sub-frame generation algorithm that explicitly accounts for arbitrary relative geometric distortion (not limited to homographies) between the component projectors 112, including distortions that occur due to a display surface that is non-planar or has surface non-uniformities. One embodiment generates sub-frames 110 based on a geometric relationship between a hypothetical high-resolution reference projector at any arbitrary location and each of the actual low-resolution projectors 112, which may also be positioned at any arbitrary location.
In one embodiment, image display system 10B is configured to project images that have a three-dimensional (3D) appearance. In 3D image display systems, two images, each with a different polarization, are simultaneously projected by two different projectors. One image corresponds to the left eye, and the other image corresponds to the right eye. Conventional 3D image display systems typically suffer from a lack of brightness. In contrast, with one embodiment, a first plurality of the projectors 112 may be used to produce any desired brightness for the first image (e.g., left eye image), and a second plurality of the projectors 112 may be used to produce any desired brightness for the second image (e.g., right eye image). In another embodiment, image display system 10B may be combined or used with other display systems or display techniques, such as tiled displays.
Naïve overlapped projection of different colored sub-frames 110 by different projectors 112 can lead to significant color artifacts at the edges due to misregistration among the colors. In the embodiments of
$Z_{ik} = H_i D_i^T Y_{ik}$   Equation XIV
The low-resolution sub-frame pixel data (Yik) is expanded with the up-sampling matrix (DiT) so that the sub-frames 110 (Yik) can be represented on a high-resolution grid. The interpolating filter (Hi) fills in the missing pixel data produced by up-sampling. In the embodiment shown in
In one embodiment, the geometric mapping (Fik) is a floating-point mapping, but the destinations in the mapping are on an integer grid in image 404. Thus, it is possible for multiple pixels in image 402 to be mapped to the same pixel location in image 404, resulting in missing pixels in image 404. To avoid this situation, in one embodiment, during the forward mapping (Fik), the inverse mapping (Fik−1) is also utilized as indicated at 405 in
In another embodiment, the forward geometric mapping or warp (Fik) is implemented directly, and the inverse mapping (Fik−1) is not used. In one form of this embodiment, a scatter operation is performed to eliminate missing pixels. That is, when a pixel in image 402 is mapped to a floating point location in image 404, some of the image data for the pixel is essentially scattered to multiple pixels neighboring the floating point location in image 404. Thus, each pixel in image 404 may receive contributions from multiple pixels in image 402, and each pixel in image 404 is normalized based on the number of contributions it receives.
A superposition/summation of such warped images 404 from all of the component projectors 112 in a given color plane forms a hypothetical or simulated high-resolution image (X-hati) for that color plane in the reference projector frame buffer 120, as represented in the following Equation XV:
A hypothetical or simulated image 406 (X-hat) is represented by the following Equation XVI:
$\hat{X} = \left[ \hat{X}_1 \; \hat{X}_2 \; \cdots \; \hat{X}_N \right]^T$   Equation XVI
If the simulated high-resolution image 406 (X-hat) in the reference projector frame buffer 120 is identical to a given (desired) high-resolution image 408 (X), the system of component low-resolution projectors 112 would be equivalent to a hypothetical high-resolution projector placed at the same location as the reference projector 118 and sharing its optical path. In one embodiment, the desired high-resolution images 408 are the high-resolution image frames 106 (
In one embodiment, the deviation of the simulated high-resolution image 406 (X-hat) from the desired high-resolution image 408 (X) is modeled as shown in the following Equation XVII:
$X = \hat{X} + \eta$   Equation XVII
As shown in Equation XVII, the desired high-resolution image 408 (X) is defined as the simulated high-resolution image 406 (X-hat) plus η, which in one embodiment represents zero mean white Gaussian noise.
The solution for the optimal sub-frame data (Yik*) for the sub-frames 110 is formulated as the optimization given in the following Equation XVIII:
Thus, as indicated by Equation XVIII, the goal of the optimization is to determine the sub-frame values (Yik) that maximize the probability of X-hat given X. Given a desired high-resolution image 408 (X) to be projected, sub-frame generator 108 (
Using Bayes rule, the probability P(X-hat|X) in Equation XVIII can be written as shown in the following Equation XIX:
The term P(X) in Equation XIX is a known constant. If X-hat is given, then, referring to Equation XVII, X depends only on the noise term, η, which is Gaussian. Thus, the term P(X|X-hat) in Equation XIX will have a Gaussian form as shown in the following Equation XX:
To provide a solution that is robust to minor calibration errors and noise, a “smoothness” requirement is imposed on X-hat. In other words, it is assumed that good simulated images 406 have certain properties. For example, for most good color images, the luminance and chrominance derivatives are related by a certain value. In one embodiment, a smoothness requirement is imposed on the luminance and chrominance of the X-hat image based on a “Hel-Or” color prior model, which is a conventional color model known to those of ordinary skill in the art. The smoothness requirement according to one embodiment is expressed in terms of a desired probability distribution for X-hat given by the following Equation XXI:
In another embodiment, the smoothness requirement is based on a prior Laplacian model, and is expressed in terms of a probability distribution for X-hat given by the following Equation XXII:
The following discussion assumes that the probability distribution given in Equation XXI, rather than Equation XXII, is being used. As will be understood by persons of ordinary skill in the art, a similar procedure would be followed if Equation XXII were used. Inserting the probability distributions from Equations XX and XXI into Equation XIX, and inserting the result into Equation XVIII, results in a maximization problem involving the product of two probability distributions (note that the probability P(X) is a known constant and goes away in the calculation). By taking the negative logarithm, the exponents go away, the product of the two probability distributions becomes a sum of two probability distributions, and the maximization problem given in Equation XVIII is transformed into a function minimization problem, as shown in the following Equation XXIII:
The function minimization problem given in Equation XXIII is solved by substituting the definition of X-hati from Equation XV into Equation XXIII and taking the derivative with respect to Yik, which results in an iterative algorithm given by the following Equation XXIV:
Equation XXIV may be intuitively understood as an iterative process of computing an error in the reference projector 118 coordinate system and projecting it back onto the sub-frame data. In one embodiment, sub-frame generator 108 (
To begin the iterative algorithm defined in Equation XXIV, an initial guess, Yik(0), for the sub-frames 110 is determined. In one embodiment, the initial guess for the sub-frames 110 is determined by texture mapping the desired high-resolution frame 408 onto the sub-frames 110. In one embodiment, the initial guess is determined from the following Equation XXV:
$Y_{ik}^{(0)} = D_i B_i F_{ik}^T X_i$   Equation XXV
Thus, as indicated by Equation XXV, the initial guess (Yik(0)) is determined by performing a geometric transformation (FikT) on the ith color plane of the desired high-resolution frame 408 (Xi), and filtering (Bi) and down-sampling (Di) the result. The particular combination of neighboring pixels from the desired high-resolution frame 408 that are used in generating the initial guess (Yik(0)) will depend on the selected filter kernel for the interpolation filter (Bi).
In another embodiment, the initial guess, Yik(0), for the sub-frames 110 is determined from the following Equation XXVI:
$Y_{ik}^{(0)} = D_i F_{ik}^T X_i$   Equation XXVI
Equation XXVI is the same as Equation XXV, except that the interpolation filter (Bi) is not used.
Several techniques are available to determine the geometric mapping (Fik) between each projector 112 and the reference projector 118, including manually establishing the mappings, or using camera 32 and calibration unit 124 (
$F_2 = T_2 T_1^{-1}$   Equation XXVII
In one embodiment, the geometric mappings (Fik) are determined once by calibration unit 124, and provided to sub-frame generator 108. In another embodiment, calibration unit 124 continually determines (e.g., once per frame 106) the geometric mappings (Fik), and continually provides updated values for the mappings to sub-frame generator 108.
One form of the single color projector embodiments provides an image display system 10B with multiple overlapped low-resolution projectors 112 coupled with an efficient real-time (e.g., video rates) image processing algorithm for generating sub-frames 110. In one embodiment, multiple low-resolution, low-cost projectors 112 are used to produce high resolution images at high lumen levels, but at lower cost than existing high-resolution projection systems, such as a single, high-resolution, high-output projector. One embodiment provides a scalable image display system 10B that can provide virtually any desired resolution, brightness, and color, by adding any desired number of component projectors 112 to image display system 10B.
In some existing display systems, multiple low-resolution images are displayed with temporal and sub-pixel spatial offsets to enhance resolution. There are some important differences between these existing systems and the single color projector embodiments. For example, in one embodiment, there is no need for circuitry to offset the projected sub-frames 110 temporally. In one embodiment, the sub-frames 110 from the component projectors 112 are projected “in-sync”. As another example, unlike some existing systems where all of the sub-frames go through the same optics and the shifts between sub-frames are all simple translational shifts, in one embodiment, the sub-frames 110 are projected through the different optics of the multiple individual projectors 112. In one form of the single color projector embodiments, the signal processing model that is used to generate optimal sub-frames 110 takes into account relative geometric distortion among the component sub-frames 110, and is robust to minor calibration errors and noise.
It can be difficult to accurately align projectors into a desired configuration. In one form of the single color projector embodiments, sub-frame generator 108 determines and generates optimal sub-frames 110 for whatever projector configuration is present, even if the alignment is not optimal.
Algorithms that seek to enhance resolution by offsetting multiple projection elements have been previously proposed. These methods assume simple shift offsets between projectors, use frequency domain analyses, and rely on heuristic methods to compute component sub-frames. In contrast, one embodiment described herein utilizes an optimal real-time sub-frame generation algorithm that explicitly accounts for arbitrary relative geometric distortion (not limited to homographies) between the component projectors 112, including distortions that occur due to a display surface 116 that is non-planar or has surface non-uniformities. One form of the single color projector embodiments generates sub-frames 110 based on a geometric relationship between a hypothetical high-resolution reference projector 118 at any arbitrary location and each of the actual low-resolution projectors 112, which may also be positioned at any arbitrary location.
One form of the single color projector embodiments provides an image display system 10B with multiple overlapped low-resolution projectors 112, with each projector 112 projecting a different colorant to compose a full color high-resolution image 114 on display surface 116 with minimal color artifacts due to the overlapped projection. By imposing a color-prior model via a Bayesian approach as is done in one embodiment, the generated solution for determining sub-frame values minimizes color aliasing artifacts and is robust to small modeling errors.
Using multiple off-the-shelf projectors 112 in image display system 10B allows for high resolution. However, if the projectors 112 include a color wheel, which is common in existing projectors, image display system 10B may suffer from light loss, sequential color artifacts, poor color fidelity, reduced bit depth, and a significant tradeoff in bit depth to add new colors. One embodiment eliminates the need for a color wheel and uses in its place a different color filter for each projector 112.
Image display system 10B may be very efficient from a processing perspective since, in one embodiment, each projector 112 only processes one color plane. For example, each projector 112 reads and renders only one-fourth (for RGBY) of the full color data in one embodiment.
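A small sketch of the per-plane data handling (the four-primary RGBY layout and the array representation are assumptions for illustration): each projector's processing path receives only its own color plane, i.e. one-fourth of the frame data.

```python
import numpy as np

def split_color_planes(frame):
    """Split an H x W x C frame (e.g., C = 4 for RGBY) into C single-color
    planes, one per projector, so each projector reads only 1/C of the data."""
    return [np.ascontiguousarray(frame[:, :, c]) for c in range(frame.shape[2])]
```

For a four-plane frame, each plane holds one-fourth of the frame's bytes, which matches the one-fourth figure cited above.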
In one embodiment, image display system 10B is configured to project images that have a three-dimensional (3D) appearance. In 3D image display systems, two images, each with a different polarization, are simultaneously projected by two different projectors. One image corresponds to the left eye, and the other image corresponds to the right eye. Conventional 3D image display systems typically suffer from a lack of brightness. In contrast, with one embodiment, a first plurality of the projectors 112 may be used to produce any desired brightness for the first image (e.g., left eye image), and a second plurality of the projectors 112 may be used to produce any desired brightness for the second image (e.g., right eye image). In another embodiment, image display system 10B may be combined or used with other display systems or display techniques, such as tiled displays.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.
U.S. Patent Documents
Number | Name | Date | Kind
---|---|---|---|
4373784 | Nonomura et al. | Feb 1983 | A |
4662746 | Hornbeck et al. | May 1987 | A |
4747146 | Nishikawa et al. | May 1988 | A |
4807031 | Broughton et al. | Feb 1989 | A |
4811003 | Strathman et al. | Mar 1989 | A |
4956619 | Hornbeck | Sep 1990 | A |
5061049 | Hornbeck | Oct 1991 | A |
5083857 | Hornbeck | Jan 1992 | A |
5146356 | Carlson | Sep 1992 | A |
5309241 | Hoagland | May 1994 | A |
5317409 | Macocs | May 1994 | A |
5386253 | Fielding | Jan 1995 | A |
5402184 | O'Grady et al. | Mar 1995 | A |
5490009 | Venkateswar et al. | Feb 1996 | A |
5557353 | Stahl | Sep 1996 | A |
5689283 | Shirochi | Nov 1997 | A |
5751379 | Markandey et al. | May 1998 | A |
5842762 | Clarke | Dec 1998 | A |
5870136 | Fuchs et al. | Feb 1999 | A |
5897191 | Clarke | Apr 1999 | A |
5912773 | Barnett et al. | Jun 1999 | A |
5920368 | Eriksson | Jul 1999 | A |
5953148 | Moseley et al. | Sep 1999 | A |
5978518 | Oliyide et al. | Nov 1999 | A |
6025951 | Swart et al. | Feb 2000 | A |
6067143 | Tomita | May 2000 | A |
6104375 | Lam | Aug 2000 | A |
6118584 | Van Berkel et al. | Sep 2000 | A |
6141039 | Poetsch | Oct 2000 | A |
6184969 | Fergason | Feb 2001 | B1 |
6219017 | Shimada et al. | Apr 2001 | B1 |
6239783 | Hill et al. | May 2001 | B1 |
6243055 | Fergason | Jun 2001 | B1 |
6313888 | Tabata | Nov 2001 | B1 |
6317171 | Dewald | Nov 2001 | B1 |
6384816 | Tabata | May 2002 | B1 |
6390050 | Feikus | May 2002 | B2 |
6393145 | Betrisey et al. | May 2002 | B2 |
6522356 | Watanabe | Feb 2003 | B1 |
6545685 | Dorbie | Apr 2003 | B1 |
6570623 | Li et al. | May 2003 | B1 |
6657603 | Demetrescu et al. | Dec 2003 | B1 |
6823455 | Macy et al. | Nov 2004 | B1 |
6877857 | Perlin | Apr 2005 | B2 |
6963319 | Pate et al. | Nov 2005 | B2 |
7114071 | Chmounk et al. | Sep 2006 | B1 |
20030020809 | Gibbon et al. | Jan 2003 | A1 |
20030076325 | Thrasher | Apr 2003 | A1 |
20030090597 | Katoh et al. | May 2003 | A1 |
20030107712 | Perlin | Jun 2003 | A1 |
20040136528 | Muratani | Jul 2004 | A1 |
20040239885 | Jaynes et al. | Dec 2004 | A1 |
20050287449 | Matthys et al. | Dec 2005 | A1 |
20060012598 | Tsao | Jan 2006 | A1 |
20060029252 | So | Feb 2006 | A1 |
20060187299 | Miyazawa | Aug 2006 | A1 |
20070153025 | Mitchell et al. | Jul 2007 | A1 |
20070165024 | Tsao | Jul 2007 | A1 |
20070285351 | Willis | Dec 2007 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---|
1001306 | May 2000 | EP |
WO 2004109380 | Dec 2004 | WO |
Other Publications
Entry
---|
Raskar et al., “The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays”, Computer Graphics Proceedings, 1998, pp. 1-10. |
Cotting et al., “Adaptive Instant Displays: Continuously Calibrated Projections Using Per-Pixel Light Control”, Eurographics 2005, vol. 24, No. 3. |
Grundhofer et al., “Coded Projection and Illumination for Television Studios”, Eurographics 2007, vol. 26, No. 3. |
Cotting et al., “Embedding Imperceptible Patterns into Projected Images for Simultaneous Acquisition and Display”, IEEE Computer Society, 0-7695-2191-6/04 (2004). |
Elliott et al., “Color Subpixel Rendering Projectors and Flat Panel Displays”, SMPTE Advanced Motion Imaging Conference, Feb. 27-Mar. 1, 2003, pp. 1-4. |
Chen, Diana C. “Display resolution enhancement with optical scanners.” Applied Optics 40, No. 5 (2001): 636-643. |
Yasuda, et al., “FLC wobbling for high-resolution projectors,” Journal of the Society for Information Display 5, No. 3 (1997) 299-305. |
Kelly, D. H., “Motion and Vision—II. Stabilized Spatio-Temporal Threshold Surface”, Journal of the Optical Society of America, vol. 69, No. 10, Oct. 1979. |
Tokita et al., “P-108: FLC Resolution-Enhancing Device for Projection Displays”, SID 02 Digest, 2002, pp. 638-641. |
Jaynes et al., “Super-Resolution Composition in Multi-Projector Displays”, IEEE Int'l Workshop on Projector-Camera Systems, vol. 8, Oct. 2003. |
Chen, et al., “Visual Resolution Limits for Color Matrix Displays”, 1992, Displays, vol. 13, No. 4, pp. 221-226. |
Prior Publication Data
Number | Date | Country
---|---|---|
20080267516 A1 | Oct 2008 | US |