Owners, creators, and distributors of visual and audio works are generally interested in preventing the works from being reproduced without authorization. These works are often stored in a digital format which may be relatively easy to copy. Digital Rights Management (DRM) or other encryption technology may be used to prevent users from reproducing digital content. DRM technology, however, generally does not alter the underlying plaintext digital content. Accordingly, if the DRM technology is thwarted, users may be able to reproduce the digital content. It would be desirable to be able to prevent users from being able to reproduce digital content.
According to one embodiment, a rendering engine is provided that includes a first component configured to render warped content that is generated remotely from the rendering engine by applying a warping transformation to stored content according to warping information, and a second component configured to inversely warp the rendered warped content according to inverse warping information that corresponds to the warping information to form a reproduction of the stored content. The second component is configured to inversely warp the rendered warped content subsequent to or contemporaneous with the warped content being rendered by the first component.
In the following Detailed Description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. In this regard, directional terminology, such as “top,” “bottom,” “front,” “back,” etc., may be used with reference to the orientation of the Figure(s) being described. Because components of embodiments of the present invention can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. The following Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
As described herein, a system and method for providing security to visual and/or audio works is provided. The system and method contemplate warping the content of a visual and/or audio work with a defined distortion pattern and providing only the warped content to a rendering engine with an inverse warping component. Inverse warping information may also be provided to the rendering engine to configure the inverse warping component in one or more embodiments. The inverse warping component inversely warps the warped content as part of the rendering process to reproduce the original content without visual or acoustic distortion from the defined distortion pattern. If a rendering engine without an inverse warping component attempts to render the warped content, the defined distortion pattern is present in the reproduction.
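Purely by way of illustration, the following Python sketch models this round trip with a simple amplitude-style warp; the array sizes, parameter values, and function names are illustrative assumptions, not part of any embodiment. It shows that only a rendering path that applies the corresponding inverse warp recovers the stored content exactly.

```python
import numpy as np

# Hypothetical round-trip sketch: values and names are illustrative only.
rng = np.random.default_rng(0)
stored_content = rng.random(8)                   # stands in for stored content 12

warping_info = 1.0 + 0.5 * np.sin(np.arange(8))  # defined distortion pattern (warping information 16)
warped_content = stored_content * warping_info   # processing system 10 generates warped content 20

# A rendering engine WITHOUT an inverse warping component reproduces the
# warped content directly, so the defined distortion pattern remains.
distorted_reproduction = warped_content

# A rendering engine WITH an inverse warping component removes the pattern
# using inverse warping information 23 that corresponds to warping information 16.
inverse_warping_info = 1.0 / warping_info
unwarped_reproduction = warped_content * inverse_warping_info

assert np.allclose(unwarped_reproduction, stored_content)
```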
Referring to
Processing system 10 also receives warping information 16 as indicated by an arrow 18. Warping information 16 is configured to be usable by processing system 10 to warp stored content 12 with spatial, visual, temporal, or amplitude distortion to generate warped content 20. Warping information 16 corresponds to an inverse warping component 27 (shown in
Stored content 12 and warping information 16 may be received or accessed by processing system 10 from any suitable storage device or devices (not shown). The storage devices may be portable or non-portable and may be directly connected to processing system 10, connected to processing system 10 through any number of intermediate devices (not shown), or may be remotely located from processing system 10 across one or more local, regional, or global communication networks such as the Internet (not shown).
Processing system 10 generates warped content 20 from stored content 12 using warping information 16 as indicated by an arrow 21. Processing system 10 applies a warping transformation to stored content 12 according to warping information 16. Processing system 10 generates warped content 20 such that warped content 20 may be used by rendering engine 22 to reproduce stored content 12 without distortion only by using inverse warping component 27. As described in additional detail below with reference to
Processing system 10 uses warping information 16 to visually and/or acoustically warp stored content 12 to generate warped content 20. As a result, warped content 20 includes a defined visual and/or acoustic distortion pattern when reproduced using a rendering engine without an inverse warping component that corresponds to warping information 16. The defined distortion pattern results in a degraded, lower-quality reproduction in which a viewer can see any visual distortion and a listener can hear any acoustic distortion. Warping information 16 specifies one or more warping parameters (also referred to as degrees of freedom) that may be used by processing system 10 to include the defined distortion pattern in warped content 20. The warping parameters cause the defined distortion pattern to occur spatially and/or temporally in the reproduction.
For visual stored content 12, processing system 10 may warp stored content 12 by configuring warped content 20 using warping information 16 such that the display of warped content 20, without inverse warping, appears with spatial distortions (e.g., stretched, compressed, or otherwise deformed displayed images). Processing system 10 may also warp visual stored content 12 by configuring warped content 20 using warping information 16 such that the display of warped content 20, without inverse warping, appears with color or light amplitude distortions (e.g., overly bright and/or overly dark regions in displayed image).
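As a minimal illustrative sketch of spatial warping (the row-shift distortion and its parameter values are assumptions chosen for simplicity, not a disclosed warping transformation), each image row below is displaced by a row-dependent offset, and the inverse warp applies the negated offsets:

```python
import numpy as np

# Hypothetical spatial warp: each row is circularly shifted by a row-dependent
# offset. The shift amounts play the role of warping information 16.
def warp_rows(image, shifts):
    return np.stack([np.roll(row, int(s)) for row, s in zip(image, shifts)])

image = np.arange(16, dtype=float).reshape(4, 4)  # stands in for visual stored content 12
shifts = np.array([0, 1, 2, 1])                   # warping parameters (degrees of freedom)

warped = warp_rows(image, shifts)                 # displayed as-is, the rows appear deformed
unwarped = warp_rows(warped, -shifts)             # the inverse warp restores the original

assert np.array_equal(unwarped, image)
```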
For audio stored content 12, processing system 10 may warp stored content 12 by configuring warped content 20 using warping information 16 such that the generation of audio from warped content 20, without inverse warping, includes temporal distortion (e.g., compressed, expanded, or otherwise time altered audio). Processing system 10 may also warp audio stored content 12 by configuring warped content 20 using warping information 16 such that the generation of audio from warped content 20, without inverse warping, includes sound amplitude distortion (e.g., overly loud or soft periods in the audio).
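The following sketch illustrates temporal and sound-amplitude warping of sampled audio under assumed segment boundaries and gain values (all hypothetical); reversing a segment stands in for time alteration, and a segment gain stands in for an overly loud period:

```python
import numpy as np

# Hypothetical temporal and sound-amplitude warp of sampled audio. Segment
# boundaries and the gain value are illustrative assumptions.
def warp_audio(samples, rev_seg, gain_seg, gain):
    out = samples.copy()
    a, b = rev_seg
    out[a:b] = out[a:b][::-1]   # temporal distortion: one segment time-altered
    c, d = gain_seg
    out[c:d] *= gain            # amplitude distortion: an overly loud period
    return out

def inverse_warp_audio(samples, rev_seg, gain_seg, gain):
    out = samples.copy()
    c, d = gain_seg
    out[c:d] /= gain            # undo the amplitude distortion
    a, b = rev_seg
    out[a:b] = out[a:b][::-1]   # undo the temporal distortion
    return out

audio = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))  # stands in for audio stored content 12
rev_seg, gain_seg, gain = (16, 32), (40, 56), 1.8  # parameters from warping information 16

warped = warp_audio(audio, rev_seg, gain_seg, gain)
assert np.allclose(inverse_warp_audio(warped, rev_seg, gain_seg, gain), audio)
```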
As noted above, stored content 12 may be all or a part of a visual or audio work. Processing system 10 may generate warped content 20 using different warping parameters from warping information 16 for different parts of stored content 12 (e.g., a first warping parameter for a first portion of stored content 12 (e.g., the first half of a movie) and a second warping parameter for a second portion of stored content 12 (e.g., the second half of a movie)). Processing system 10 may also generate warped content 20 using different warping information 16 for different stored content 12 (e.g., first warping information 16 for first stored content 12 (e.g., a first movie) and second warping information 16 for second stored content 12 (e.g., a second movie)).
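A minimal sketch of such per-portion warping, assuming a hypothetical schedule that pairs content ranges with gain-style warping parameters:

```python
import numpy as np

# Hypothetical per-portion warping: a schedule pairs sample ranges with
# gain-style warping parameters (boundaries and gains are assumptions).
def warp_in_parts(samples, schedule):
    out = samples.copy()
    for (start, stop), gain in schedule:
        out[start:stop] *= gain
    return out

content = np.ones(100)                  # stands in for stored content 12
schedule = [((0, 50), 1.5),             # first warping parameter, first portion
            ((50, 100), 0.7)]           # second warping parameter, second portion

warped = warp_in_parts(content, schedule)
restored = warp_in_parts(warped, [(seg, 1.0 / g) for seg, g in schedule])
assert np.allclose(restored, content)
```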
In one embodiment, processing system 10 receives inverse warping information 23 as indicated by an arrow 25A and generates warping information 16 from inverse warping information 23 prior to generating warped content 20. Inverse warping information 23 may directly indicate the configuration of inverse warping component 27 or may indirectly indicate the configuration of inverse warping component 27 using a model or serial number of rendering engine 22 or inverse warping component 27. Processing system 10 generates warping information 16 using the configuration described by inverse warping information 23 in this embodiment. In one embodiment, an owner or user of rendering engine 22 may provide inverse warping information 23 to processing system 10 to describe a configuration of inverse warping component 27. In another embodiment, a manufacturer of rendering engine 22 or inverse warping component 27 provides inverse warping information 23 to processing system 10 to describe a configuration of inverse warping component 27.
Inverse warping information 23 may be accessed by processing system 10 from any suitable storage device or devices (not shown). The storage devices may be portable or non-portable and may be directly connected to processing system 10, connected to processing system 10 through any number of intermediate devices (not shown), or may be remotely located from processing system 10 across one or more local, regional, or global communication networks such as the Internet (not shown).
In another embodiment, processing system 10 generates inverse warping information 23 from warping information 16 as indicated by an arrow 25B and provides inverse warping information 23 to rendering engine 22. Because warping information 16 defines the warping parameters used to generate warped content 20, processing system 10 may also generate inverse warping information 23 to indicate the configuration of inverse warping component 27 in rendering engine 22 that will allow warped content 20 to be reproduced without distortion. As described in additional detail below with reference to
Warped content 20 and, optionally, inverse warping information 23 are provided to rendering engine 22 (shown in
Processing system 10 may include any suitable combination of hardware and software components. For example, processing system 10 may include one or more software components configured to be executed by the processing system 10. Any software components may be stored in any suitable portable or non-portable media that is accessible to processing system 10 either from within processing system 10 or from a storage device connected directly or indirectly (e.g., across a network) to processing system 10.
Rendering engine 22 receives warped content 20 from any suitable storage device or devices (not shown) as indicated by an arrow 26. The storage devices may be portable or non-portable and may be directly connected to rendering engine 22, connected to rendering engine 22 through any number of intermediate devices (not shown), or may be remotely located from rendering engine 22 across one or more local, regional, or global communication networks such as the Internet (not shown).
Rendering engine 22 includes a rendering component 24 and inverse warping component 27. Rendering component 24 renders warped content 20 into rendered warped content, and inverse warping component 27 inversely warps the rendered warped content to allow rendering engine 22 to form unwarped reproduction 30 of stored content 12 as indicated by an arrow 28. Inverse warping component 27 performs the inverse warping subsequent to or contemporaneous with rendering component 24 rendering warped content 20.
Where warped content 20 includes visual information, rendering component 24 renders warped content 20 into rendered warped content that is suitable for display, and inverse warping component 27 inversely warps the rendered warped content so that rendering engine 22 displays unwarped reproduction 30 onto a display surface (not shown in
As noted above, rendering engine 22 does not receive or otherwise access stored content 12. In addition, rendering engine 22 does not recreate or attempt to recreate stored content 12 as part of the process of producing unwarped reproduction 30 from warped content 20. Accordingly, unwarped stored content 12 is not able to be accessed or copied from rendering engine 22.
The generation and use of warped content 20 results in a form of analog cryptographic protection where the actual content of stored content 12 is encrypted in warped content 20 and is decrypted using inverse warping component 27 to produce unwarped reproduction 30. Accordingly, even if other forms of security, such as digital rights management, that are applied to warped content 20 are compromised, warped content 20 may not be reproduced without distortion without using inverse warping component 27.
With the embodiment of rendering engine 22A shown in
In one embodiment, inverse warping component 27A also includes a control unit 46 and receives inverse warping information 23A. Control unit 46 configures non-uniform display surface 42 as specified by inverse warping information 23A in this embodiment. To do so, control unit 46 causes any number of retractable sticks 44 to be adjusted. Each retractable stick 44 connects to a point or region of display surface 42 and causes the point or region to be moved relative to the display system. By independently adjusting each retractable stick 44, control unit 46 causes display surface 42 to form an overall shape that inversely warps the rendered warped content from the display system to display unwarped reproduction 30A. Control unit 46 may dynamically reconfigure non-uniform display surface 42 at any time by adjusting retractable sticks 44 according to different inverse warping information 23A. Retractable sticks 44 may be replaced with any other suitable mechanical devices for adjusting display surface 42 in other embodiments.
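As a hypothetical sketch only, the following models control unit 46 driving retractable sticks 44 from inverse warping information 23A supplied as a grid of target surface heights; the grid representation, travel limits, and class interface are all assumptions:

```python
import numpy as np

# Hypothetical control sketch: inverse warping information 23A is assumed to
# arrive as a grid of target surface heights, one per retractable stick 44.
class ControlUnit:
    def __init__(self, grid_shape, max_travel=10.0):
        self.extensions = np.zeros(grid_shape)   # current stick positions
        self.max_travel = max_travel             # assumed mechanical travel limit

    def configure(self, inverse_warping_info):
        # Independently adjust each stick so display surface 42 forms the
        # overall shape that inversely warps the rendered warped content.
        target = np.asarray(inverse_warping_info, dtype=float)
        self.extensions = target.clip(0.0, self.max_travel)

control_unit = ControlUnit((4, 4))
heights = 2.0 + np.fromfunction(lambda r, c: np.sin(r + c), (4, 4))
control_unit.configure(heights)   # dynamic reconfiguration at any time
```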
In another embodiment, non-uniform display surface 42 is statically configured. In this embodiment, inverse warping component 27A does not include control unit 46 and does not receive inverse warping information 23A. Inverse warping information 23A is inherently contained in inverse warping component 27A in this embodiment. Inverse warping information 23A that specifies the static configuration of non-uniform display surface 42 may be provided to processing system 10 (shown in
With the embodiment of rendering engine 22B shown in
Inverse warping information (not shown) that specifies the configuration of the inverse warping lens may be provided to processing system 10 (shown in
In the embodiments of
With the embodiment of rendering engine 22C shown in
In one embodiment, inverse warping component 27C also includes a control unit 72 and receives inverse warping information 23B. Control unit 72 configures the color, reflective, or refractive properties of various points or regions of display surface 70 as specified by inverse warping information 23B in this embodiment. Control unit 72 may dynamically reconfigure display surface 70 at any time by adjusting the reflective or refractive properties of display surface 70 according to different inverse warping information 23B.
In another embodiment, the reflective or refractive properties of display surface 70 are statically configured. In this embodiment, inverse warping component 27C does not include control unit 72 and does not receive inverse warping information 23B. Inverse warping information 23B is inherently contained in inverse warping component 27C in this embodiment. Inverse warping information 23B that specifies the static configuration of display surface 70 may be provided to processing system 10 (shown in
With the embodiment of rendering engine 22D shown in
Inverse warping information (not shown) that specifies the configuration of the inverse warping light may be provided to processing system 10 (shown in
With the embodiment of rendering engine 22E shown in
In one embodiment, the inverse warping unit receives inverse warping information 23C. The inverse warping unit inversely warps the audio signal as specified by inverse warping information 23C in this embodiment.
In another embodiment, the inverse warping unit may be statically formed as part of speakers or headphones 99. Inverse warping information 23C is inherently contained in the inverse warping unit in this embodiment. Inverse warping information 23C that specifies the static configuration of the inverse warping unit may be provided to processing system 10 (shown in
In the embodiment of
A reproduction of warped content 20E-1 illustrates temporal warping of stored content 12B. Warped content 20E-1 includes defined temporal distortion patterns between times t1 and t2 and between times t3 and t4 when compared to stored content 12B. The temporal distortion between times t1 and t2 is formed by compressing stored content 12B, and the temporal distortion between times t3 and t4 is formed by expanding stored content 12B. To produce unwarped reproduction 30E from warped content 20E-1 as shown in
A reproduction of warped content 20E-2 illustrates sound amplitude warping of stored content 12B. Warped content 20E-2 includes defined sound amplitude distortion patterns between times t1 and t2 and between times t3 and t4 when compared to stored content 12B. The sound amplitude distortion between times t1 and t2 is formed by enhancing the amplitudes of stored content 12B, and the sound amplitude distortion between times t3 and t4 is formed by reducing the amplitudes of stored content 12B. To produce unwarped reproduction 30E from warped content 20E-2 as shown in
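A small numerical sketch of the sound-amplitude pattern just described, with assumed sample indices standing in for times t1 through t4:

```python
import numpy as np

# Hypothetical sketch of warped content 20E-2: sample indices standing in for
# times t1 through t4 are assumptions chosen for illustration.
t1, t2, t3, t4 = 10, 20, 30, 40
gains = np.ones(50)
gains[t1:t2] = 2.0   # enhanced amplitudes between t1 and t2
gains[t3:t4] = 0.5   # reduced amplitudes between t3 and t4

stored = np.cos(np.linspace(0.0, 4.0 * np.pi, 50))  # stands in for stored content 12B
warped_20e2 = stored * gains

# The inverse warping unit applies the reciprocal gains to form
# unwarped reproduction 30E.
assert np.allclose(warped_20e2 / gains, stored)
```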
Unwarped reproduction 30E reproduces stored content 12B as shown in
Depending on the embodiment, one or more components of rendering engine 22F form an inverse warping component of rendering engine 22F.
In one embodiment of rendering engine 22F, display surface 116 includes inverse warping component 27A (shown in
In another embodiment of rendering engine 22F, each projector 112 includes an inverse warping lens 27B (shown in
In a further embodiment of rendering engine 22F, display surface 116 includes inverse warping component 27C (shown in
In yet another embodiment of rendering engine 22F, rendering engine 22F includes inverse warping component 27D (shown in
Referring to
Image frame buffer 104 includes memory for storing warped content 20F for one or more image frames 106. Thus, image frame buffer 104 constitutes a database of one or more image frames 106. Image frame buffers 113 also include memory for storing sub-frames 110. Examples of image frame buffers 104 and 113 include non-volatile memory (e.g., a hard disk drive or other persistent storage device) and may include volatile memory (e.g., random access memory (RAM)).
Sub-frame generator 108 receives and processes image frames 106 to define a plurality of image sub-frames 110. Sub-frame generator 108 generates sub-frames 110 based on image data in image frames 106. In one embodiment, sub-frame generator 108 generates image sub-frames 110 with a resolution that matches the resolution of projectors 112, which is less than the resolution of image frames 106. Sub-frames 110 each include a plurality of columns and a plurality of rows of individual pixels representing a subset of an image frame 106. Sub-frame generator 108 may generate sub-frames 110 that fully or partially overlap in any suitable tiled and/or superimposed arrangement on display surface 116.
Projectors 112 receive image sub-frames 110 from sub-frame generator 108 and, in one embodiment, simultaneously project the image sub-frames 110 onto display surface 116 at overlapping and spatially offset positions to produce unwarped reproduction 30F. In one embodiment, rendering engine 22F is configured to give the appearance to the human eye of high-resolution unwarped reproductions 30F by displaying overlapping and spatially shifted lower-resolution sub-frames 110 from multiple projectors 112. In one embodiment, the projection of overlapping and spatially shifted sub-frames 110 gives the appearance of enhanced resolution (i.e., higher resolution than the sub-frames 110 themselves).
Sub-frame generator 108 determines appropriate values for the sub-frames 110 so that the combined image produced from sub-frames 110 prior to being inversely warped is close in appearance to how the high-resolution image (e.g., image frame 106) from which the sub-frames 110 were derived would appear if displayed directly.
It will be understood by a person of ordinary skill in the art that functions performed by sub-frame generator 108 may be implemented in hardware, software, firmware, or any combination thereof. The implementation may be via a microprocessor, programmable logic device, or state machine. Components of the embodiments described herein may reside in software on one or more computer-readable mediums. The term computer-readable medium as used herein is defined to include any kind of memory, volatile or non-volatile, such as floppy disks, hard disks, CD-ROMs, flash memory, read-only memory, and random access memory.
Also shown in
In one embodiment, rendering engine 22F includes at least one camera 122 and a calibration unit 124, which are used to automatically determine a geometric mapping between each projector 112 and the reference projector 118, as described in further detail below with reference to
In one embodiment, rendering engine 22F includes hardware, software, firmware, or a combination of these. In one embodiment, one or more components of rendering engine 22F are included in a computer, computer server, or other microprocessor-based system capable of performing a sequence of logic operations. In addition, processing can be distributed throughout the system with individual portions being implemented in separate system components, such as in a networked or multiple computing unit environment.
Sub-frame 110(1) is spatially offset from sub-frame 110(2) by a predetermined distance. Similarly, sub-frame 110(3) is spatially offset from sub-frame 110(4) by a predetermined distance. In one illustrative embodiment, vertical distance 204 and horizontal distance 206 are each approximately one-half of one pixel.
The display of sub-frames 110(2), 110(3), and 110(4) is spatially shifted relative to the display of sub-frame 110(1) by vertical distance 204, horizontal distance 206, or a combination of vertical distance 204 and horizontal distance 206. As such, pixels 202 of sub-frames 110(1), 110(2), 110(3), and 110(4) at least partially overlap, thereby producing the appearance of higher resolution pixels. Sub-frames 110(1), 110(2), 110(3), and 110(4) may be superimposed on one another (i.e., fully or substantially fully overlap), may be tiled (i.e., partially overlap at or near the edges), or may be a combination of superimposed and tiled. The overlapped sub-frames 110(1), 110(2), 110(3), and 110(4) also produce a brighter overall image than any of sub-frames 110(1), 110(2), 110(3), or 110(4) alone.
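The half-pixel offsets can be pictured on a grid of twice the resolution, where half of a low-resolution pixel corresponds to one high-resolution position. In the following sketch (array sizes and the uniform sub-frame content are illustrative assumptions), each of four identical sub-frames lands on its own interleaved set of high-resolution positions:

```python
import numpy as np

# Hypothetical sketch: on a grid of twice the resolution, a half-pixel shift
# moves a sub-frame onto its own interleaved set of high-resolution positions.
def place(subframe, dy, dx, hi_shape):
    hi = np.zeros(hi_shape)
    hi[dy::2, dx::2] = subframe   # half a low-res pixel == one high-res position
    return hi

sub = np.ones((4, 4))             # a low-resolution sub-frame 110 (uniform for simplicity)
composite = (place(sub, 0, 0, (8, 8)) +   # sub-frame 110(1)
             place(sub, 0, 1, (8, 8)) +   # shifted by horizontal distance 206
             place(sub, 1, 0, (8, 8)) +   # shifted by vertical distance 204
             place(sub, 1, 1, (8, 8)))    # shifted by both distances

# Every high-resolution position receives light, and the overlap is brighter
# than any single sub-frame alone.
assert (composite > 0).all()
```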
In other embodiments, other numbers of projectors 112 are used in rendering engine 22F and other numbers of sub-frames 110 are generated for each image frame 106.
In other embodiments, sub-frames 110(1), 110(2), 110(3), and 110(4) may be displayed at other spatial offsets relative to one another and the spatial offsets may vary over time.
In one embodiment, sub-frames 110 have a lower resolution than image frames 106. Thus, sub-frames 110 are also referred to herein as low-resolution images or sub-frames 110, and image frames 106 are also referred to herein as high-resolution images or frames 106. The terms low resolution and high resolution are used herein in a comparative fashion, and are not limited to any particular minimum or maximum number of pixels.
In one embodiment, rendering engine 22F produces a superimposed projected output that takes advantage of natural pixel mis-registration to provide an unwarped reproduction 30F with a higher resolution than the individual sub-frames 110. In one embodiment, image formation due to multiple overlapped projectors 112 is modeled using a signal processing model. Optimal sub-frames 110 for each of the component projectors 112 are estimated by sub-frame generator 108 based on the model, such that the resulting image predicted by the signal processing model is as close as possible to the desired high-resolution image to be projected. In one embodiment described in additional detail with reference to
In one embodiment, sub-frame generator 108 is configured to generate sub-frames 110 by maximizing the probability that a simulated high-resolution image, which is a function of the sub-frame values, is the same as a given desired high-resolution image. If the generated sub-frames 110 are optimal, the simulated high-resolution image will be as close as possible to the desired high-resolution image. The generation of optimal sub-frames 110 based on a simulated high-resolution image and a desired high-resolution image is described in further detail below with reference to the embodiments of
One form of the embodiment of
$$Z_k = H_k D^T Y_k \qquad \text{(Equation I)}$$
The low-resolution sub-frame pixel data (Yk) is expanded with the up-sampling matrix (D^T) so that the sub-frames 110 (Yk) can be represented on a high-resolution grid. The interpolating filter (Hk) fills in the missing pixel data produced by up-sampling. In the embodiment shown in
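A one-dimensional sketch of Equation I under assumed operators (zero-insertion up-sampling for D^T and a linear interpolating kernel for Hk; both are simplifications, not the disclosed filters):

```python
import numpy as np

# One-dimensional sketch of Equation I under assumed operators: zero-insertion
# up-sampling for D^T and a linear interpolating kernel for Hk.
def upsample(y, factor=2):                     # D^T: zero insertion
    z = np.zeros(len(y) * factor)
    z[::factor] = y
    return z

def interpolate(z):                            # Hk: fills in the missing pixels
    return np.convolve(z, [0.5, 1.0, 0.5], mode="same")

y_k = np.array([1.0, 2.0, 3.0])                # low-resolution sub-frame pixel data (Yk)
z_k = interpolate(upsample(y_k))               # Zk on the high-resolution grid
```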
In one embodiment, the geometric mapping (Fk) is a floating-point mapping, but the destinations in the mapping are on an integer grid in image 304. Thus, it is possible for multiple pixels in image 302 to be mapped to the same pixel location in image 304, resulting in missing pixels in image 304. To avoid this situation, in one embodiment, during the forward mapping (Fk), the inverse mapping (Fk−1) is also utilized as indicated at 305 in
In another embodiment, the forward geometric mapping or warp (Fk) is implemented directly, and the inverse mapping (Fk−1) is not used. In one form of this embodiment, a scatter operation is performed to eliminate missing pixels. That is, when a pixel in image 302 is mapped to a floating point location in image 304, some of the image data for the pixel is essentially scattered to multiple pixels neighboring the floating point location in image 304. Thus, each pixel in image 304 may receive contributions from multiple pixels in image 302, and each pixel in image 304 is normalized based on the number of contributions it receives.
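A one-dimensional sketch of such a scatter operation, with assumed source values and floating-point destination coordinates; each source pixel is split between its two neighboring destinations, and each destination is normalized by the weight it received:

```python
import numpy as np

# One-dimensional sketch of the scatter operation: each source pixel is split
# between the two destinations neighboring its floating-point location, and
# each destination is normalized by the total weight it received.
def forward_scatter(src, dest_coords, dest_len):
    acc = np.zeros(dest_len)
    weight = np.zeros(dest_len)
    for value, x in zip(src, dest_coords):
        lo = int(np.floor(x))
        frac = x - lo
        for idx, w in ((lo, 1.0 - frac), (lo + 1, frac)):
            if 0 <= idx < dest_len and w > 0:
                acc[idx] += w * value
                weight[idx] += w
    out = np.zeros(dest_len)
    covered = weight > 0
    out[covered] = acc[covered] / weight[covered]   # normalization step
    return out

mapped = forward_scatter(np.array([1.0, 2.0, 3.0]),
                         dest_coords=np.array([0.25, 1.6, 3.0]), dest_len=5)
```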
A superposition/summation of such warped images 304 from all of the component projectors 112 forms a hypothetical or simulated high-resolution image 306 (X-hat) in the reference projector frame buffer 120, as represented in the following Equation II:

$$\hat{X} = \sum_{k} F_k Z_k \qquad \text{(Equation II)}$$
If the simulated high-resolution image 306 (X-hat) in the reference projector frame buffer 120 is identical to a given (desired) high-resolution image 308 (X), the system of component low-resolution projectors 112 would be equivalent to a hypothetical high-resolution projector placed at the same location as the reference projector 118 and sharing its optical path. In one embodiment, the desired high-resolution images 308 are the high-resolution image frames 106 (
In one embodiment, the deviation of the simulated high-resolution image 306 (X-hat) from the desired high-resolution image 308 (X) is modeled as shown in the following Equation III:
$$X = \hat{X} + \eta \qquad \text{(Equation III)}$$
As shown in Equation III, the desired high-resolution image 308 (X) is defined as the simulated high-resolution image 306 (X-hat) plus η, which in one embodiment represents zero mean white Gaussian noise.
The solution for the optimal sub-frame data (Yk*) for the sub-frames 110 is formulated as the optimization given in the following Equation IV:

$$Y_k^{*} = \operatorname*{arg\,max}_{Y_k} \; P(\hat{X} \mid X) \qquad \text{(Equation IV)}$$
Thus, as indicated by Equation IV, the goal of the optimization is to determine the sub-frame values (Yk) that maximize the probability of X-hat given X. Given a desired high-resolution image 308 (X) to be projected, sub-frame generator 108 (
Using Bayes rule, the probability P(X-hat|X) in Equation IV can be written as shown in the following Equation V:

$$P(\hat{X} \mid X) = \frac{P(X \mid \hat{X})\, P(\hat{X})}{P(X)} \qquad \text{(Equation V)}$$
The term P(X) in Equation V is a known constant. If X-hat is given, then, referring to Equation III, X depends only on the noise term, η, which is Gaussian. Thus, the term P(X|X-hat) in Equation V will have a Gaussian form as shown in the following Equation VI:

$$P(X \mid \hat{X}) = \frac{1}{C}\, e^{-\frac{\lVert X - \hat{X} \rVert^2}{2\sigma^2}} \qquad \text{(Equation VI)}$$

where C is a normalization constant and σ is the standard deviation of the noise term η.
To provide a solution that is robust to minor calibration errors and noise, a “smoothness” requirement is imposed on X-hat. In other words, it is assumed that good simulated images 306 have certain properties. The smoothness requirement according to one embodiment is expressed in terms of a desired Gaussian prior probability distribution for X-hat given by the following Equation VII:

$$P(\hat{X}) = \frac{1}{Z(\beta)}\, e^{-\beta^2 \lVert \nabla \hat{X} \rVert^2} \qquad \text{(Equation VII)}$$

where Z(β) is a normalization function and β is a smoothness constant.
In another embodiment, the smoothness requirement is based on a prior Laplacian model, and is expressed in terms of a probability distribution for X-hat given by the following Equation VIII:

$$P(\hat{X}) = \frac{1}{Z(\beta)}\, e^{-\beta \lVert \nabla \hat{X} \rVert} \qquad \text{(Equation VIII)}$$
The following discussion assumes that the probability distribution given in Equation VII, rather than Equation VIII, is being used. As will be understood by persons of ordinary skill in the art, a similar procedure would be followed if Equation VIII were used. Inserting the probability distributions from Equations VI and VII into Equation V, and inserting the result into Equation IV, results in a maximization problem involving the product of two probability distributions (note that the probability P(X) is a known constant and goes away in the calculation). By taking the negative logarithm, the exponents go away, the product of the two probability distributions becomes a sum of two probability distributions, and the maximization problem given in Equation IV is transformed into a function minimization problem, as shown in the following Equation IX:

$$Y_k^{*} = \operatorname*{arg\,min}_{Y_k} \left\{ \lVert X - \hat{X} \rVert^2 + \beta^2 \lVert \nabla \hat{X} \rVert^2 \right\} \qquad \text{(Equation IX)}$$
The function minimization problem given in Equation IX is solved by substituting the definition of X-hat from Equation II into Equation IX and taking the derivative with respect to Yk, which results in an iterative algorithm given by the following Equation X:
$$Y_k^{(n+1)} = Y_k^{(n)} - \Theta \left\{ D H_k^T F_k^T \left[ \left( \hat{X}^{(n)} - X \right) + \beta^2 \nabla^2 \hat{X}^{(n)} \right] \right\} \qquad \text{(Equation X)}$$

where n is the iteration index and Θ is a momentum parameter indicating the fraction of error to be incorporated at each iteration.
Equation X may be intuitively understood as an iterative process of computing an error in the reference projector 118 coordinate system and projecting it back onto the sub-frame data. In one embodiment, sub-frame generator 108 (
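The structure of the iteration can be sketched in one dimension with a single projector, an identity geometric mapping, and assumed filters and step size (all simplifications; the smoothness term is implemented as the gradient of the Equation IX penalty so the iteration descends that objective):

```python
import numpy as np

# One-dimensional sketch of the Equation X iteration with a single projector
# and an identity mapping Fk. Filters, step size, and beta^2 are assumptions.
H = lambda z: np.convolve(z, [0.5, 1.0, 0.5], mode="same")          # interpolating filter Hk
D = lambda z: z[::2]                                                # down-sampling D
DT = lambda y: np.kron(y, [1.0, 0.0])                               # up-sampling D^T
smooth = lambda x: np.convolve(x, [-1.0, 2.0, -1.0], mode="same")   # gradient of the smoothness penalty

X = np.sin(np.linspace(0.0, np.pi, 16))   # desired high-resolution image (X)
Y = D(X).copy()                           # initial guess Yk(0), cf. Equation XII
theta, beta2 = 0.1, 0.01                  # assumed step size and smoothness weight

initial_mse = np.mean((H(DT(Y)) - X) ** 2)
for _ in range(50):
    X_hat = H(DT(Y))                              # simulated image (Equations I and II)
    err = (X_hat - X) + beta2 * smooth(X_hat)     # error in reference projector coordinates
    Y = Y - theta * D(H(err))                     # project the error back onto the sub-frame data

assert np.mean((H(DT(Y)) - X) ** 2) < initial_mse
```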
To begin the iterative algorithm defined in Equation X, an initial guess, Yk(0), for the sub-frames 110 is determined. In one embodiment, the initial guess for the sub-frames 110 is determined by texture mapping the desired high-resolution frame 308 onto the sub-frames 110. In one embodiment, the initial guess is determined from the following Equation XI:
$$Y_k^{(0)} = D B_k F_k^T X \qquad \text{(Equation XI)}$$
Thus, as indicated by Equation XI, the initial guess (Yk(0)) is determined by performing a geometric transformation (Fk^T) on the desired high-resolution frame 308 (X), and filtering (Bk) and down-sampling (D) the result. The particular combination of neighboring pixels from the desired high-resolution frame 308 that are used in generating the initial guess (Yk(0)) will depend on the selected filter kernel for the interpolation filter (Bk).
In another embodiment, the initial guess, Yk(0), for the sub-frames 110 is determined from the following Equation XII
$$Y_k^{(0)} = D F_k^T X \qquad \text{(Equation XII)}$$
Equation XII is the same as Equation XI, except that the interpolation filter (Bk) is not used.
Several techniques are available to determine the geometric mapping (Fk) between each projector 112 and the reference projector 118, including manually establishing the mappings, or using camera 122 and calibration unit 124 (
$$F_2 = T_2 T_1^{-1} \qquad \text{(Equation XIII)}$$
In one embodiment, the geometric mappings (Fk) are determined once by calibration unit 124, and provided to sub-frame generator 108. In another embodiment, calibration unit 124 continually determines (e.g., once per frame 106) the geometric mappings (Fk), and continually provides updated values for the mappings to sub-frame generator 108.
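A sketch of Equation XIII with assumed 3x3 homographies; the matrix values are illustrative, and the direction convention for T1 and T2 (relating the reference projector 118 and a projector 112 to the camera 122 frame) follows the text:

```python
import numpy as np

# Sketch of Equation XIII with assumed 3x3 homographies (values illustrative).
T1 = np.array([[1.0, 0.0, 5.0],
               [0.0, 1.0, 3.0],
               [0.0, 0.0, 1.0]])
T2 = np.array([[1.1, 0.1, 2.0],
               [0.0, 0.9, 1.0],
               [0.0, 0.0, 1.0]])

F2 = T2 @ np.linalg.inv(T1)        # Equation XIII: F2 = T2 * T1^-1

def map_point(F, x, y):
    u, v, w = F @ np.array([x, y, 1.0])
    return u / w, v / w            # homogeneous-coordinate normalization

print(map_point(F2, 10.0, 10.0))
```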
One form of the multiple color projector embodiments provides a rendering engine 22F with multiple overlapped low-resolution projectors 112 coupled with an efficient real-time (e.g., video rates) image processing algorithm for generating sub-frames 110. Multiple low-resolution, low-cost projectors 112 may be used to produce high resolution images at high lumen levels but at lower cost than existing high-resolution projection systems, such as a single, high-resolution, high-output projector. One form of the embodiments provides a scalable rendering engine 22F that can provide virtually any desired resolution and brightness by adding any desired number of component projectors 112 to rendering engine 22F.
In some existing display systems, multiple low-resolution images are displayed with temporal and sub-pixel spatial offsets to enhance resolution. There are some important differences between these existing systems and the multiple color projector embodiments. For example, in one embodiment, there is no need for circuitry to offset the projected sub-frames 110 temporally. In one embodiment, the sub-frames 110 from the component projectors 112 are projected “in-sync”. As another example, unlike some existing systems where all of the sub-frames go through the same optics and the shifts between sub-frames are all simple translational shifts, in one embodiment, the sub-frames 110 are projected through the different optics of the multiple individual projectors 112. In one form of the multiple color projector embodiments, the signal processing model that is used to generate optimal sub-frames 110 takes into account relative geometric distortion among the component sub-frames 110, and is robust to minor calibration errors and noise.
It can be difficult to accurately align projectors into a desired configuration. In one form of the multiple color projector embodiments, regardless of what the particular projector configuration is, even if it is not an optimal alignment, sub-frame generator 108 determines and generates optimal sub-frames 110 for that particular configuration.
Algorithms that seek to enhance resolution by offsetting multiple projection elements have been previously proposed. These methods assume simple shift offsets between projectors, use frequency domain analyses, and rely on heuristic methods to compute component sub-frames. In contrast, one form of the multiple color projector embodiments utilizes an optimal real-time sub-frame generation algorithm that explicitly accounts for arbitrary relative geometric distortion (not limited to homographies) between the component projectors 112, including distortions that occur due to a display surface 116 that is non-planar or has surface non-uniformities. One form of the multiple color projector embodiments generates sub-frames 110 based on a geometric relationship between a hypothetical high-resolution reference projector 118 at any arbitrary location and each of the actual low-resolution projectors 112, which may also be positioned at any arbitrary location.
In one embodiment, rendering engine 22F is configured to project images that have a three-dimensional (3D) appearance. In 3D image display systems, two images, each with a different polarization, are simultaneously projected by two different projectors. One image corresponds to the left eye, and the other image corresponds to the right eye. Conventional 3D image display systems typically suffer from a lack of brightness. In contrast, with one embodiment described herein, a first plurality of the projectors 112 may be used to produce any desired brightness for the first image (e.g., left eye image), and a second plurality of the projectors 112 may be used to produce any desired brightness for the second image (e.g., right eye image). In another embodiment, rendering engine 22F may be combined or used with other display systems or display techniques, such as tiled displays.
Naïve overlapped projection of different colored sub-frames 110 by different projectors 112 can lead to significant color artifacts at the edges due to misregistration among the colors. In the embodiments of
$$Z_{ik} = H_i D_i^T Y_{ik} \qquad \text{(Equation XIV)}$$
where Yik is the kth low-resolution sub-frame 110 in the ith color plane.
The low-resolution sub-frame pixel data (Yik) is expanded with the up-sampling matrix (Di^T) so that the sub-frames 110 (Yik) can be represented on a high-resolution grid. The interpolating filter (Hi) fills in the missing pixel data produced by up-sampling. In the embodiment shown in
In one embodiment, the geometric mapping (Fik) is a floating-point mapping, but the destinations in the mapping are on an integer grid in image 404. Thus, it is possible for multiple pixels in image 402 to be mapped to the same pixel location in image 404, resulting in missing pixels in image 404. To avoid this situation, in one embodiment, during the forward mapping (Fik), the inverse mapping (Fik−1) is also utilized as indicated at 405 in
In another embodiment, the forward geometric mapping or warp (Fik) is implemented directly, and the inverse mapping (Fik−1) is not used. In one form of this embodiment, a scatter operation is performed to eliminate missing pixels. That is, when a pixel in image 402 is mapped to a floating point location in image 404, some of the image data for the pixel is essentially scattered to multiple pixels neighboring the floating point location in image 404. Thus, each pixel in image 404 may receive contributions from multiple pixels in image 402, and each pixel in image 404 is normalized based on the number of contributions it receives.
A superposition/summation of such warped images 404 from all of the component projectors 112 in a given color plane forms a hypothetical or simulated high-resolution image (X-hati) for that color plane in the reference projector frame buffer 120, as represented in the following Equation XV:

$$\hat{X}_i = \sum_{k} F_{ik} Z_{ik} \qquad \text{(Equation XV)}$$
A hypothetical or simulated image 406 (X-hat) is represented by the following Equation XVI:
$$\hat{X} = \begin{bmatrix} \hat{X}_1 & \hat{X}_2 & \cdots & \hat{X}_N \end{bmatrix}^T \qquad \text{(Equation XVI)}$$

where N is the number of color planes.
If the simulated high-resolution image 406 (X-hat) in the reference projector frame buffer 120 is identical to a given (desired) high-resolution image 408 (X), the system of component low-resolution projectors 112 would be equivalent to a hypothetical high-resolution projector placed at the same location as the reference projector 118 and sharing its optical path. In one embodiment, the desired high-resolution images 408 are the high-resolution image frames 106 (
In one embodiment, the deviation of the simulated high-resolution image 406 (X-hat) from the desired high-resolution image 408 (X) is modeled as shown in the following Equation XVII:
$$X = \hat{X} + \eta \qquad \text{(Equation XVII)}$$
As shown in Equation XVII, the desired high-resolution image 408 (X) is defined as the simulated high-resolution image 406 (X-hat) plus η, which in one embodiment represents zero mean white Gaussian noise.
The solution for the optimal sub-frame data (Yik*) for the sub-frames 110 is formulated as the optimization given in the following Equation XVIII:

$$Y_{ik}^{*} = \operatorname*{arg\,max}_{Y_{ik}} \; P(\hat{X} \mid X) \qquad \text{(Equation XVIII)}$$
Thus, as indicated by Equation XVIII, the goal of the optimization is to determine the sub-frame values (Yik) that maximize the probability of X-hat given X. Given a desired high-resolution image 408 (X) to be projected, sub-frame generator 108 (
Using Bayes rule, the probability P(X-hat|X) in Equation XVIII can be written as shown in the following Equation XIX:

$$P(\hat{X} \mid X) = \frac{P(X \mid \hat{X})\, P(\hat{X})}{P(X)} \qquad \text{(Equation XIX)}$$
The term P(X) in Equation XIX is a known constant. If X-hat is given, then, referring to Equation XVII, X depends only on the noise term, η, which is Gaussian. Thus, the term P(X|X-hat) in Equation XIX will have a Gaussian form as shown in the following Equation XX:

$$P(X \mid \hat{X}) = \frac{1}{C}\, e^{-\frac{\lVert X - \hat{X} \rVert^2}{2\sigma^2}} \qquad \text{(Equation XX)}$$
To provide a solution that is robust to minor calibration errors and noise, a “smoothness” requirement is imposed on X-hat. In other words, it is assumed that good simulated images 406 have certain properties. For example, for most good color images, the luminance and chrominance derivatives are related by a certain value. In one embodiment, a smoothness requirement is imposed on the luminance and chrominance of the X-hat image based on a “Hel-Or” color prior model, which is a conventional color model known to those of ordinary skill in the art. The smoothness requirement according to one embodiment is expressed in terms of a desired probability distribution for X-hat given by the following Equation XXI:
In another embodiment, the smoothness requirement is based on a prior Laplacian model, and is expressed in terms of a probability distribution for X-hat given by the following Equation XXII:
The following discussion assumes that the probability distribution given in Equation XXI, rather than Equation XXII, is being used. As will be understood by persons of ordinary skill in the art, a similar procedure would be followed if Equation XXII were used. Inserting the probability distributions from Equations XX and XXI into Equation XIX, and inserting the result into Equation XVIII, results in a maximization problem involving the product of two probability distributions (note that the probability P(X) is a known constant and goes away in the calculation). By taking the negative logarithm, the exponents go away, the product of the two probability distributions becomes a sum of two probability distributions, and the maximization problem given in Equation XVIII is transformed into a function minimization problem, as shown in the following Equation XXIII:
The function minimization problem given in Equation XXIII is solved by substituting the definition of X-hati from Equation XV into Equation XXIII and taking the derivative with respect to Yik, which results in an iterative algorithm given by the following Equation XXIV:
Equation XXIV may be intuitively understood as an iterative process of computing an error in the reference projector 118 coordinate system and projecting it back onto the sub-frame data. In one embodiment, sub-frame generator 108 (
To begin the iterative algorithm defined in Equation XXIV, an initial guess, Yik(0), for the sub-frames 110 is determined. In one embodiment, the initial guess for the sub-frames 110 is determined by texture mapping the desired high-resolution frame 408 onto the sub-frames 110. In one embodiment, the initial guess is determined from the following Equation XXV:
$$Y_{ik}^{(0)} = D_i B_i F_{ik}^T X_i \qquad \text{(Equation XXV)}$$
Thus, as indicated by Equation XXV, the initial guess (Yik(0)) is determined by performing a geometric transformation (Fik^T) on the ith color plane of the desired high-resolution frame 408 (Xi), and filtering (Bi) and down-sampling (Di) the result. The particular combination of neighboring pixels from the desired high-resolution frame 408 that are used in generating the initial guess (Yik(0)) will depend on the selected filter kernel for the interpolation filter (Bi).
In another embodiment, the initial guess, Yik(0), for the sub-frames 110 is determined from the following Equation XXVI:

$$Y_{ik}^{(0)} = D_i F_{ik}^T X_i \qquad \text{(Equation XXVI)}$$
Equation XXVI is the same as Equation XXV, except that the interpolation filter (Bi) is not used.
Several techniques are available to determine the geometric mapping (Fik) between each projector 112 and the reference projector 118, including manually establishing the mappings, or using camera 122 and calibration unit 124 (
$$F_2 = T_2 T_1^{-1} \qquad \text{(Equation XXVII)}$$
In one embodiment, the geometric mappings (Fik) are determined once by calibration unit 124, and provided to sub-frame generator 108. In another embodiment, calibration unit 124 continually determines (e.g., once per frame 106) the geometric mappings (Fik), and continually provides updated values for the mappings to sub-frame generator 108.
One form of the single color projector embodiments provides a rendering engine 22F with multiple overlapped low-resolution projectors 112 coupled with an efficient real-time (e.g., video rates) image processing algorithm for generating sub-frames 110. In one embodiment, multiple low-resolution, low-cost projectors 112 are used to produce high resolution images at high lumen levels, but at lower cost than existing high-resolution projection systems, such as a single, high-resolution, high-output projector. One embodiment provides a scalable rendering engine 22F that can provide virtually any desired resolution, brightness, and color, by adding any desired number of component projectors 112 to rendering engine 22F.
In some existing display systems, multiple low-resolution images are displayed with temporal and sub-pixel spatial offsets to enhance resolution. There are some important differences between these existing systems and the single color projector embodiments. For example, in one embodiment, there is no need for circuitry to offset the projected sub-frames 110 temporally. In one embodiment, the sub-frames 110 from the component projectors 112 are projected “in-sync”. As another example, unlike some existing systems where all of the sub-frames go through the same optics and the shifts between sub-frames are all simple translational shifts, in one embodiment, the sub-frames 110 are projected through the different optics of the multiple individual projectors 112. In one form of the single color projector embodiments, the signal processing model that is used to generate optimal sub-frames 110 takes into account relative geometric distortion among the component sub-frames 110, and is robust to minor calibration errors and noise.
It can be difficult to accurately align projectors into a desired configuration. In one embodiment of the single color projector embodiments, regardless of what the particular projector configuration is, even if it is not an optimal alignment, sub-frame generator 108 determines and generates optimal sub-frames 110 for that particular configuration.
Algorithms that seek to enhance resolution by offsetting multiple projection elements have been previously proposed. These methods assume simple shift offsets between projectors, use frequency domain analyses, and rely on heuristic methods to compute component sub-frames. In contrast, one embodiment described herein utilizes an optimal real-time sub-frame generation algorithm that explicitly accounts for arbitrary relative geometric distortion (not limited to homographies) between the component projectors 112, including distortions that occur due to a display surface 116 that is non-planar or has surface non-uniformities. One form of the single color projector embodiments generates sub-frames 110 based on a geometric relationship between a hypothetical high-resolution reference projector 118 at any arbitrary location and each of the actual low-resolution projectors 112, which may also be positioned at any arbitrary location.
One form of the single color projector embodiments provides a rendering engine 22F with multiple overlapped low-resolution projectors 112, with each projector 112 projecting a different colorant to compose a full color high-resolution unwarped reproduction 30F on display surface 116 with minimal color artifacts due to the overlapped projection. By imposing a color-prior model via a Bayesian approach as is done in one embodiment, the generated solution for determining sub-frame values minimizes color aliasing artifacts and is robust to small modeling errors.
Using multiple off-the-shelf projectors 112 in rendering engine 22F allows for high resolution. However, if the projectors 112 include a color wheel, which is common in existing projectors, rendering engine 22F may suffer from light loss, sequential color artifacts, poor color fidelity, reduced bit-depth, and a significant tradeoff in bit depth to add new colors. One embodiment eliminates the need for a color wheel and uses, in its place, a different color filter for each projector 112 as shown in
Rendering engine 22F is also very efficient from a processing perspective since, in one embodiment, each projector 112 only processes one color plane. For example, each projector 112 reads and renders only one-fourth (for RGBY) of the full color data in one embodiment.
In one embodiment, rendering engine 22F is configured to project images that have a three-dimensional (3D) appearance. In 3D image display systems, two images, each with a different polarization, are simultaneously projected by two different projectors. One image corresponds to the left eye, and the other image corresponds to the right eye. Conventional 3D image display systems typically suffer from a lack of brightness. In contrast, with one embodiment, a first plurality of the projectors 112 may be used to produce any desired brightness for the first image (e.g., left eye image), and a second plurality of the projectors 112 may be used to produce any desired brightness for the second image (e.g., right eye image). In another embodiment, rendering engine 22F may be combined or used with other display systems or display techniques, such as tiled displays.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.