The present invention generally relates to photometric compensation, and more particularly to a photometric compensation method and system for see-through devices.
As a tool for augmented reality, see-through smart glasses enable a user to receive additional information about the surrounding real world in the form of images projected from an embedded projector. The user sees both the projected image and the real-world scene. Fun and interactive user experiences can be created because the augmented visual information is digitally manipulable.
However, the small projectors of most smart glasses have much lower power than traditional projectors. Because the projected image is blended with the scene, photometric distortion can easily occur when the projector irradiance is only comparable to, or weaker than, the irradiance of the scene light incident on the retina of the user. Such photometric distortion is a major image-quality issue for smart glasses.
Although it is the scene light that introduces the photometric distortion, eliminating that distortion requires determining the properties of the scene light, the projector, and the reflectance of the smart glasses. This can be accomplished with a camera and a set of calibration patterns: the projector projects images for augmentation or calibration into the user's eye, and the camera captures images of the scene.
However, this approach requires a new round of photometric calibration whenever the scene in the field of view of the smart glasses changes or whenever the user moves, which may disrupt user interaction. Another issue is efficiency: projecting and processing the calibration patterns takes time, typically from a few seconds to tens of seconds, which is unacceptable for real-time applications. A need has thus arisen to propose a novel scheme that overcomes the disadvantages of the conventional approach.
In view of the foregoing, it is an object of the embodiments of the present invention to provide a photometric compensation method and system for see-through devices. In one embodiment, an algorithm capable of photometric compensation based on the distorted image is proposed. It requires photometric calibration only once; each subsequent compensation operation is based on the distorted image captured at that time instant. Real-time photometric compensation is thus achieved without re-calibration.
According to one embodiment, a photometric model is provided in which a total response is the sum of a response to a device light from the see-through device and a response to a scene light from a scene. A calibration stage is performed in a transformed domain, which is only related to characteristics of a projector and an image capturing device of the see-through device. A compensation stage is performed to obtain a response for an original image as it would be captured in a dark room, thereby determining a response for a compensated image according to the response for the original image and the response to the scene light. The compensated image is then generated according to the response for the compensated image.
In the embodiment, a photometric model is first provided. Conventional photometric models assume that the scene light either remains constant or is negligible compared to the device light. In the photometric model of the embodiment, however, both the device light and the scene light are considered. The photometric model of the embodiment may be expressed in vector form as
$T(I,S) = C(I) + C(S) = M\,G(I) + C(S)$  (1)
where $T(I,S)$ is the total camera response, $C(I)$ is the camera response to the device light, $C(S)$ is the camera response to the scene light, $M$ describes the channel mismatch between the projector 11 and the camera 12, and $G(\cdot)$ is the gamma function of the projector 11.
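For illustration only, the following is a minimal NumPy sketch of this forward model; the mixing matrix, the power-law gamma, and all names are hypothetical placeholders rather than calibrated values from the embodiment.

```python
import numpy as np

# Hypothetical 3x3 channel-mismatch matrix M between projector and camera;
# in practice M (or its decoupled form) is estimated during calibration.
M = np.array([[0.90, 0.05, 0.02],
              [0.04, 0.88, 0.06],
              [0.01, 0.07, 0.85]])

def gamma(I, g=2.2):
    """Projector gamma function G(.), applied per channel.

    A simple power law is assumed here; the actual G(.) is device-specific.
    """
    return np.power(np.clip(I, 0.0, 1.0), g)

def total_response(I, C_S):
    """Photometric model (1): T(I, S) = M G(I) + C(S).

    I   : projector input image, shape (H, W, 3), values in [0, 1]
    C_S : camera response to the scene light, same shape as I
    """
    device = gamma(I) @ M.T   # C(I) = M G(I), applied per pixel
    return device + C_S       # total camera response T(I, S)
```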
A calibration stage is performed (in step 23 by a calibration device 13) in a dark room to block the scene light, so that the camera response to the device light can be obtained directly. Under this configuration, (1) becomes
$T(I,S) = C(I) = M\,G(I)$.  (2)
It is generally difficult to solve for $M$ and $G(\cdot)$ directly because the unknowns are coupled. According to one aspect of the embodiment, the calibration stage is performed in a transformed domain by a channel decoupling unit 131, such that (2) can be expressed as
$T(I,S) = M\,G(I) = \tilde{M}\,V(I)$  (3)
where $\tilde{M}$ is a decoupling transformation that is only related to the characteristics of the projector 11 and the camera 12, and $V(\cdot)$ is a scaled gamma function.
Note that the problem of determining $M$ and $G(\cdot)$ is thereby converted into that of determining $\tilde{M}$ and $V(\cdot)$. The calibration therefore has to be computed only once, regardless of how the scene or the image dynamically changes. To speed up the calibration process, a look-up table for $V(\cdot)$ may be constructed.
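As a sketch of how such a look-up table might be constructed, suppose $\tilde{M}$ has already been estimated (e.g., following Grossberg et al., cited below) and a ramp of uniform gray levels is projected in the dark room; the function and variable names below are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def build_v_lut(M_tilde, levels, responses):
    """Tabulate the scaled gamma function V(.) per channel.

    M_tilde   : estimated 3x3 decoupling transformation
    levels    : projected input levels, shape (N,), e.g. np.linspace(0, 1, N)
    responses : dark-room camera responses to uniform images at each level,
                shape (N, 3), one mean RGB response per projected level
    Returns an (N, 3) table with V_X(level) for each channel X.
    """
    M_inv = np.linalg.inv(M_tilde)
    # Decouple the channels: by (3)-(4), M_tilde^{-1} T = V(I).
    return responses @ M_inv.T
```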
To be more specific, each channel $X$ of the decoupled camera response $\tilde{T}(I,S)$ can be written as
$\tilde{T}_X(I,S) = \tilde{C}_X(I_X) = M_{XX}\,G_X(I_X) \equiv V_X(I_X)$  (4)
where $X \in \{R, G, B\}$ and $V_X(\cdot)$ is defined as the scaled gamma function of channel $X$.
Accordingly, obtaining $\tilde{M}$ and $V(\cdot)$ is equivalent to obtaining $M$ and $G(\cdot)$. Details of solving for $\tilde{M}$ may be found in “Making One Object Look Like Another: Controlling Appearance Using a Projector-Camera System” by M. D. Grossberg et al., Proc. IEEE CVPR 2004, vol. 1, pp. 452-459, 2004, the disclosure of which is incorporated herein by reference.
Subsequently, a photometric compensation stage is performed (by a compensation device 14). Specifically, the total camera response for an original image $I_O$ is
$T(I_O,S) = C(I_O) + C(S)$.  (5)
The total camera response for a compensated image $I_C$ is
$T(I_C,S) = C(I_C) + C(S)$.  (6)
In the photometric compensation, it is desired that the total camera response $T(I_C,S)$ for the compensated image be equal to the camera response $C(I_O)$ for the original image in the dark room, that is,
$T(I_C,S) = C(I_C) + C(S) = C(I_O)$.  (7)
To obtain $C(I_C)$, we need to know $C(I_O)$ and $C(S)$. $C(I_O)$ is obtained (in step 24 by a luminance generating unit 141) by
$C(I_O) = \tilde{M}\,V(I_O)$.  (8)
On the other hand, $C(S)$ can be obtained (in step 25 by a scene generating unit 142) from (5) as $C(S) = T(I_O,S) - C(I_O)$, since the distorted image $T(I_O,S)$ is captured by the camera and $C(I_O)$ is known. Therefore, according to (7), the camera response $C(I_C)$ for the compensated image can be determined from $C(I_O)$ and $C(S)$ as $C(I_C) = C(I_O) - C(S)$ (in step 26 by a compensation determination unit 143).
Once $C(I_C)$ is obtained, it is transformed back into the decoupled domain, where $\tilde{C}(I_C) = \tilde{M}^{-1} C(I_C) = V(I_C)$ by (4), and $I_C$ is obtained (in step 27 by a compensated image generating unit 144) by inverting the scaled gamma function:

$I_C = V^{-1}(\tilde{M}^{-1}\,C(I_C))$.  (9)
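Putting steps 24 through 27 together, one compensation pass might be sketched as follows; `M_tilde`, `v_lut`, and `levels` are assumed to come from the one-time calibration above, and all names are illustrative assumptions rather than the embodiment's own implementation.

```python
import numpy as np

def compensate(I_O, T_captured, M_tilde, v_lut, levels):
    """One photometric compensation pass for a single frame.

    I_O        : original (desired) image, shape (H, W, 3)
    T_captured : distorted image T(I_O, S) captured by the camera
    M_tilde    : decoupling transformation from the one-time calibration
    v_lut      : (N, 3) look-up table for V(.), monotone per channel
    levels     : (N,) increasing input levels corresponding to v_lut rows
    """
    M_inv = np.linalg.inv(M_tilde)

    # Step 24: camera response to the original image, C(I_O) = M_tilde V(I_O), per (8).
    V_IO = np.stack([np.interp(I_O[..., c], levels, v_lut[:, c])
                     for c in range(3)], axis=-1)
    C_IO = V_IO @ M_tilde.T

    # Step 25: scene response from (5): C(S) = T(I_O, S) - C(I_O).
    C_S = T_captured - C_IO

    # Step 26: response for the compensated image from (7): C(I_C) = C(I_O) - C(S).
    C_IC = C_IO - C_S

    # Step 27: decouple and invert V(.) per (9) to recover I_C.
    V_IC = C_IC @ M_inv.T  # C_tilde(I_C) = V(I_C)
    I_C = np.stack([np.interp(V_IC[..., c], v_lut[:, c], levels)
                    for c in range(3)], axis=-1)
    return np.clip(I_C, levels[0], levels[-1])
```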
According to the embodiment, a method capable of compensating the photometric distortion for see-through smart glasses is proposed. Since only the distorted image is used in the photometric compensation process, the method does not require re-calibration and hence does not interrupt user interaction. Accordingly, the method is able to achieve real-time performance for most augmented reality applications using smart glasses. The method works well when the scene light is comparable to the device light in intensity. When the scene light is much weaker, photometric distortion is negligible. On the other hand, when the scene light is much stronger than the device light, it is difficult to restore the image by photometric compensation; in this case, one may either place a “sunglasses” filter in front of the glasses to reduce the scene light or adopt a projector with higher power for the smart glasses.
Although specific embodiments have been illustrated and described, it will be appreciated by those skilled in the art that various modifications may be made without departing from the scope of the present invention, which is intended to be limited solely by the appended claims.