1. Field of the Invention
The present invention generally relates to an image projector, and more particularly to an architecture for generating view-dependent compensated images for a non-Lambertian surface.
2. Description of Related Art
Using an image projector in a mobile phone or digital camera greatly overcomes the screen size limitation of the handheld device and allows the image to be conveniently projected onto a bigger area on any nearby surface, such as a wall. Ideally, we would like the handheld projector to be able to project a clear image regardless of the physical characteristics of the projection surface. In practice, however, the projection surface available in the surroundings is often far from ideal and causes distortions to the projected image. As a result, compensation must be applied to the image before projection to counteract the non-ideal characteristics of the projection surface.
One fundamental assumption of most compensation techniques is that the camera is placed where the viewer is supposed to be. This assumption is easily violated, since a projector-camera (procam) device can be placed at an angle with respect to the viewer. This nullifies the compensation calculated under the assumption that the viewer and the camera are aligned or collocated. Thus, for non-Lambertian screens one should design a more general compensation algorithm that takes the viewing direction into consideration.
As a result, a need has arisen to propose a novel scheme of generating view-dependent compensated images for a non-Lambertian surface.
In view of the foregoing, it is an object of the embodiment of the present invention to provide a method of generating view-dependent compensated images for a non-Lambertian surface with multifold advantages. First, the embodiment provides a simple scheme that does not require additional projectors or cameras to reconstruct the reflection property of the surface—only one camera and one projector suffice. Second, the embodiment predicts the calibration images for different viewing angles from those captured at a single viewing angle, which greatly extends the capability of a procam system. Third, the embodiment introduces a feedback to re-estimate the specular light iteratively, which avoids over-compensation.
According to one embodiment, a procam system comprised of a projector and a camera is provided. A uniform image is projected on a reflective screen, resulting in a first captured image. The distribution of specular highlight is predicted according to the first captured image, thereby obtaining model parameters. Calibration images are estimated according to the model parameters and a viewing angle. A compensated image is generated according to the calibration images at the viewing angle.
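The overall flow described above can be sketched end to end as follows. All function names, the placeholder capture step, and the toy angle-dependent specular term are illustrative assumptions for exposition, not the claimed implementation.

```python
import numpy as np

def capture(projected):
    """Stand-in for the projector-plus-camera loop (identity placeholder)."""
    return projected

def predict_specular_params(captured):
    """Fit model parameters (e.g., ks and gamma); placeholder values here."""
    return {"ks": 0.5, "gamma": 8.0}

def estimate_calibration_images(params, theta):
    """Produce a calibration image shifted by a toy angle-dependent
    specular term (illustrative, not the claimed model)."""
    base = np.full((4, 4), 0.5)
    return base + params["ks"] * np.cos(np.radians(theta)) ** params["gamma"]

def compensate(target, calibration):
    """Subtract the predicted screen response so the viewed image
    approaches the target; clip to the displayable range."""
    return np.clip(target - (calibration - calibration.mean()), 0.0, 1.0)

# Pipeline: project uniform image -> predict specular -> estimate
# calibration images at the viewing angle -> generate compensated image.
uniform = np.full((4, 4), 0.5)
captured = capture(uniform)
params = predict_specular_params(captured)
cal = estimate_calibration_images(params, theta=60.0)
compensated = compensate(np.full((4, 4), 0.8), cal)
```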
As shown in
While most of the light is evenly scattered, a portion of the light rays reflects directly as if the surface of the reflective screen 23 were a mirror. This mirror-like reflection of light is commonly known as specular highlight. The architecture 100, therefore, utilizes a unit for predicting the distribution of the specular highlight. The embodiment adopts the Phong model, as disclosed by B. T. Phong, "Illumination for computer generated pictures," Communications of the ACM, vol. 18, no. 6, pp. 311-317, 1975, the disclosure of which is incorporated herein by reference.
Specifically, the specular highlight predicting unit 12 includes a luminance normalization sub-unit 121A for normalizing the (first) captured image LU,90, resulting in a (first) normalized captured image L̂U,90, with values ranging from 0 to 1, denoting the spatial variation of the luminance. In addition to the specular highlight, luminance variation is also caused by vignetting, introduced by imperfections of the lens, which often result in luminance reduction at the periphery of a photo. Therefore, the vignetting factor needs to be estimated and excluded before reconstructing the Phong model.
Specifically, the vignetting effect may be calculated, in unit 11B, by projecting the same uniform image U onto an ideal projection screen (not shown), which is assumed to be highly, if not perfectly, diffusive, resulting in a (second) captured image Q. The (second) captured image Q is then normalized, by a luminance normalization sub-unit 121B, to obtain a (second) normalized captured image Q̂, which represents the luminance variation caused by pure vignetting. Subsequently, in a specular highlight extraction sub-unit 122, the specular highlight S90 may be extracted by
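The normalization and extraction steps of sub-units 121A, 121B and 122 might be sketched as follows. Since the extraction formula itself is not reproduced above, subtracting the vignetting-only image from the screen capture is an assumption, and the pixel values are synthetic.

```python
import numpy as np

def normalize(img):
    """Map a captured luminance image into [0, 1] (sub-units 121A/121B)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min())

# L_U_90: capture of the uniform image on the reflective screen.
# Q:      capture of the same uniform image on a near-ideal diffuse screen.
L_U_90 = np.array([[0.4, 0.9], [0.5, 0.6]])
Q      = np.array([[0.4, 0.5], [0.5, 0.6]])

# Assumed extraction: remove the pure-vignetting variation and keep the
# non-negative residual as the specular highlight S90.
S_90 = np.clip(normalize(L_U_90) - normalize(Q), 0.0, None)
```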
Afterwards, in Phong model parameters fitting sub-unit 123, Phong model parameters ks and γ may be obtained, for example, by linear regression, for describing the specular highlight Is as
Is = ks(R̂·V̂)^γ is
where ks is a specular reflection constant, R̂ is the direction of a perfectly reflected light, V̂ is the direction toward the viewer 24, γ is a shininess constant for the screen material, and is is the intensity of the light source.
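As the text notes, ks and γ may be obtained by linear regression. One common way to do so, sketched below with synthetic samples, is ordinary least squares after log-linearizing the Phong term; variable names are illustrative.

```python
import numpy as np

# Fit ks and gamma of the Phong specular term I_s = ks * (R.V)**gamma * i_s
# by linear regression in log space. The samples below are synthetic.
i_s = 1.0                                    # light-source intensity (assumed known)
cos_rv = np.linspace(0.5, 1.0, 20)           # sampled values of R.V
ks_true, gamma_true = 0.4, 10.0
I_s = ks_true * cos_rv ** gamma_true * i_s   # observed specular intensities

# log I_s = log(ks * i_s) + gamma * log(R.V)  ->  ordinary least squares
A = np.column_stack([np.ones_like(cos_rv), np.log(cos_rv)])
coef, *_ = np.linalg.lstsq(A, np.log(I_s), rcond=None)
ks_fit = np.exp(coef[0]) / i_s
gamma_fit = coef[1]
```

On noise-free data the regression recovers the generating parameters; with real captures the fit is only approximate.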
After obtaining the parameters (e.g., ks and γ) of the model for predicting the distribution of the specular highlight, the process of the architecture 100 proceeds to a unit 13 for estimating calibration images. Specifically, in a specular highlight reconstruction sub-unit 131, a specular highlight Sθ at an arbitrary viewing angle θ may be reconstructed or predicted based on the model parameters and the viewing angle θ, obtained, for example, by a sensor 14. Subsequently, in an image color estimation sub-unit 132, the luminance difference Dθ between the specular highlights seen at 90° and at θ is first generated (that is, Dθ = sL(Sθ − S90), where sL is a scaling factor), and plural calibration images at θ are estimated by adding the luminance difference Dθ to the calibration images at 90°. That is,
LM,θ = LM,90 + sL(Sθ − S90), M ∈ {R, G, B, U, S}

where sL is the scaling factor defined above.
In the embodiment, the calibration images LM,θ comprise four uniform-colored images (red, green, blue and gray) and one color ramp image consisting of pixels from gray-level 0 to gray-level 255.
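The estimation of the calibration images at θ can be illustrated with synthetic data; the images, specular maps, and scaling factor below are placeholders.

```python
import numpy as np

# Shift each 90-degree calibration image by the scaled difference of the
# specular maps predicted at the two angles. All values are synthetic.
scale = 1.0                                    # scaling factor from the text
S_90    = np.array([[0.0, 0.2], [0.1, 0.0]])   # specular highlight seen at 90 deg
S_theta = np.array([[0.0, 0.5], [0.3, 0.0]])   # predicted at viewing angle theta

# Five calibration images: red, green, blue, gray (U) and a ramp (S),
# matching the set described in the embodiment.
cal_90 = {M: np.full((2, 2), v) for M, v in
          zip("RGBU", (0.8, 0.6, 0.4, 0.5))}
cal_90["S"] = np.array([[0.0, 85 / 255], [170 / 255, 1.0]])  # gray-level ramp

cal_theta = {M: img + scale * (S_theta - S_90) for M, img in cal_90.items()}
```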
Still referring to
The luminance of the projected light determines the chroma and the intensity of the specular highlight. In particular, when a compensated image is projected, the specular highlight slightly differs from that estimated in the initial condition, under which the calibration image is projected. This often leads to over-compensation. Accordingly, as shown in
where Ū and Īc are the average luminance of U and Ic, respectively. The feedback loop may be iterated, for example, three times.
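One possible realization of this feedback is sketched below. Since the exact update formula is not reproduced above, rescaling the light-source intensity is by the ratio of average luminances Īc/Ū is an assumed rule, and the numbers are synthetic.

```python
import numpy as np

# Illustrative feedback loop (three iterations, as in the text). The
# update i_s *= mean(Ic) / mean(U) is an assumption for exposition.
U = np.full((2, 2), 0.5)              # uniform calibration image
ks, gamma, cos_rv = 0.4, 8.0, 0.95    # Phong parameters (synthetic)
i_s = 1.0                             # initial light-source intensity

for _ in range(3):
    S = ks * cos_rv ** gamma * i_s    # specular predicted under current i_s
    Ic = np.clip(U - S, 0.0, 1.0)     # compensated image for this estimate
    i_s *= Ic.mean() / U.mean()       # re-estimate for the dimmer projection
```

Each pass lowers the predicted specular term, shrinking the correction and thereby avoiding the over-compensation described above.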
Although specific embodiments have been illustrated and described, it will be appreciated by those skilled in the art that various modifications may be made without departing from the scope of the present invention, which is intended to be limited solely by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/697,697, filed on Sep. 6, 2012 and entitled “Compensating Specular Highlights for Non-Lambertian Projection Surfaces,” the entire contents of which are incorporated herein by reference.