This non-provisional application claims priority to Taiwan patent application No. 109116519, filed on May 19, 2020, which is incorporated herein by reference in its entirety.
The invention relates to image processing, and specifically, to projection methods of a projection system.
A projector is an optical device that projects an image onto a projection surface. In practice, the image on the projection surface may be distorted because the projector is tilted or because the projection surface is uneven or inclined. Conventionally, the projector may perform a keystone correction by manual positioning and visual observation to achieve an optimally corrected projection. When the projection surface is uneven or curved, however, the conventional method cannot overcome the image distortion. Further, if the projection screen is too large and several projectors are needed for projection, a projection method is also required to correct the distortion arising from the joint projection.
According to one embodiment of the invention, a projection method for use in a projection system is provided. The projection system includes a projector, a camera and a processor, wherein the projector and the camera are disposed separately. The projection method includes the projector projecting a projection image onto a projection surface, the camera capturing a display image on the projection surface, the processor generating, according to a plurality of feature points in the projection image and a plurality of corresponding feature points in the display image, a transformation matrix of the plurality of feature points and the plurality of corresponding feature points, the processor pre-warping a set of projection image data according to the transformation matrix to generate a set of pre-warped image data, and the projector projecting a pre-warped image onto the projection surface according to the set of pre-warped image data.
According to another embodiment of the invention, a projection method for use in a projection system is disclosed. The projection system includes a projector, a depth sensor, an inertia measurement unit and a processor. The depth sensor and the inertia measurement unit are fixed at the projector. The projection method includes the inertia measurement unit performing a three-axis acceleration measurement to generate an orientation of the projector, the depth sensor detecting a plurality of coordinates of a plurality of points on a projection surface with respect to a reference point, the processor performing a keystone correction according to at least the plurality of coordinates of the plurality of points on the projection surface to generate a corrected projection region, the processor generating a set of data corresponding to a 3D-to-2D coordinate projective transformation according to at least the orientation of the projector, the corrected projection region and the plurality of coordinates, and the projector projecting a pre-warped image onto the projection surface according to the set of data corresponding to the 3D-to-2D coordinate projective transformation.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Step S202: the projector 10 projects the projection image onto the projection surface 16;
Step S204: the camera 12 captures the display image on the projection surface 16;
Step S206: the processor 14 generates, according to the plurality of feature points in the projection image and the plurality of corresponding feature points in the display image, a transformation matrix of the plurality of feature points and the plurality of corresponding feature points;
Step S208: the processor 14 pre-warps the set of projection image data according to the transformation matrix to generate the set of pre-warped image data;
Step S210: the projector 10 projects the pre-warped image onto the projection surface 16 according to the set of pre-warped image data.
Steps S202 to S210 are explained in detail below.
In Step S208, the set of projection image data corresponds to the display image 42. Since the projection surface 16 is not a flat surface, the set of projection image data must be pre-warped by the transformation matrix to generate the set of pre-warped image data corresponding to the pre-warped image 40, thereby forming the display image 42 on the uneven projection surface 16 without distortion.
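As an illustration of Steps S206 and S208, the sketch below estimates a single planar transformation matrix from matched feature points with OpenCV and pre-warps the projection image with its inverse. The correspondences and file names are placeholders, and an uneven surface as in this embodiment would in practice call for a denser, piecewise mapping rather than one homography.

```python
import cv2
import numpy as np

# Matched feature points: (u, v) locations in the projection image and the
# corresponding locations observed by the camera in the display image.
# These four correspondences are illustrative placeholders.
src_pts = np.float32([[0, 0], [1919, 0], [1919, 1079], [0, 1079]])
dst_pts = np.float32([[35, 20], [1880, 45], [1895, 1050], [20, 1060]])

# Step S206 (sketch): estimate the transformation matrix between the
# feature points and the corresponding feature points.
H, _ = cv2.findHomography(src_pts, dst_pts)

# Step S208 (sketch): pre-warp the projection image data with the inverse
# transformation, so that the projected result appears undistorted.
image = cv2.imread("projection_image.png")          # hypothetical input
pre_warped = cv2.warpPerspective(image, np.linalg.inv(H),
                                 (image.shape[1], image.shape[0]))
cv2.imwrite("pre_warped_image.png", pre_warped)     # hypothetical output
```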
The inertia measurement unit 104 may be an accelerometer, a gyroscope, or another rotation angle sensing device. The inertia measurement unit 104 may perform a three-axis acceleration measurement to generate an orientation of the projector 10. The orientation of the projector 10 includes the three-dimensional rotation angles of the projector 10, which may be expressed as quaternions, by Rodrigues' rotation formula, or as Euler angles. The depth sensor 102 may be a camera, a three-dimensional time-of-flight (3D ToF) sensor, or another device that can detect multi-point distances on an object, so as to detect a configuration of the projection surface 16. The processor 14 may correct the distortion resulting from the tilt angle of the projector 10 according to the orientation of the projector 10, and may perform a keystone correction according to the configuration of the projection surface 16 to correct the distortion resulting from that configuration, enabling the digital light processing device 100 to generate a pre-warped image for the projector 10 to form a display image on the projection surface 16 that is perceived by the human eye as rectangular and undistorted.
The projection system S5 may employ a projection method 600 to correct image distortion.
Step S602: the inertia measurement unit 104 performs the three-axis acceleration measurement to generate the orientation of the projector 10;
Step S604: the depth sensor 102 detects a plurality of coordinates of a plurality of points on a projection surface 16 with respect to a reference point;
Step S606: the processor 14 performs a keystone correction according to at least the plurality of coordinates of the plurality of points on the projection surface 16 to generate a corrected projection region;
Step S608: the processor 14 generates a set of image data according to at least the orientation of the projector 10, the corrected projection region and the plurality of coordinates of the plurality of points on the projection surface 16;
Step S610: the projector 10 projects the pre-warped image onto the projection surface 16 according to the set of image data.
The projection method 600 may be described by a pinhole camera model, as expressed by Equation (1):

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{bmatrix}\begin{bmatrix}r_{11}&r_{12}&r_{13}&t_1\\r_{21}&r_{22}&r_{23}&t_2\\r_{31}&r_{32}&r_{33}&t_3\end{bmatrix}\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix}\qquad\text{Equation (1)}$$

where s is a normalized scale factor;
(u, v) are the two-dimensional coordinates in the image plane 70;
(X, Y, Z) are the three-dimensional coordinates on the projection surface 16;
the 3×3 matrix containing $f_x$, $f_y$, $c_x$ and $c_y$ is referred to as the intrinsic parameter matrix;
the 3×4 matrix containing $r_{11}$ to $r_{33}$ and $t_1$ to $t_3$ is referred to as the extrinsic parameter matrix, including a rotational transformation matrix ($r_{11}$ to $r_{33}$) and a translational transformation matrix ($t_1$ to $t_3$);
fx is the focal length in the x-axis direction;
fy is the focal length in the y-axis direction;
cx is the x coordinate of the principal point;
cy is the y coordinate of the principal point;
r11 to r33 are the rotational transformation vectors; and
t1 to t3 are the translational transformation vectors.
According to Equation (1), the pre-warped image point p(u, v) on the image plane 70 may be generated from the intrinsic parameter matrix, the extrinsic parameter matrix, and the ideal focal point P(X, Y, Z) on the projection surface 16. The intrinsic parameter matrix contains a set of fixed internal projector parameters. For a given focal length of the projector 10, the intrinsic parameter matrix is fixed. The extrinsic parameter matrix may be generated from the orientation of the projector 10, and the ideal focal point P(X, Y, Z) on the projection surface 16 may be generated from the configuration of the projection surface 16.
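As an illustration of Equation (1), the sketch below maps an ideal focal point P(X, Y, Z) to a pre-warped image point p(u, v); the numeric intrinsic parameters are placeholders rather than values from the disclosure.

```python
import numpy as np

def project_point(K, R, t, P):
    """Map a 3D point P = (X, Y, Z) on the projection surface to a
    pre-warped image point p = (u, v) on the image plane per Equation (1)."""
    p_h = K @ (R @ P + t)     # homogeneous image coordinates s*[u, v, 1]
    return p_h[:2] / p_h[2]   # divide out the scale factor s

# Hypothetical intrinsic parameters of the projector (in pixels).
K = np.array([[1500.0,    0.0, 960.0],
              [   0.0, 1500.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)     # no rotation: projector squarely facing the surface
t = np.zeros(3)   # reference point set at the focal point Fc

u, v = project_point(K, R, t, np.array([0.2, -0.1, 2.0]))
```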
In Step S602, the inertia measurement unit 104 performs the three-axis acceleration measurement to generate the orientation of the projector 10. In this embodiment, the orientation of the projector 10 may be expressed by the Euler angles θx, θy, θz, or may be expressed in other ways.
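A minimal sketch of Step S602 follows, assuming a static projector and a right-handed axis convention with the z axis roughly opposing gravity: the tilt angles θx and θy follow from the direction of gravity in the accelerometer frame, while θz (rotation about the gravity axis) is unobservable from acceleration alone and would need a gyroscope or magnetometer.

```python
import math

def tilt_from_acceleration(ax, ay, az):
    """Estimate Euler tilt angles (radians) from a static three-axis
    acceleration measurement, i.e. from the direction of gravity."""
    theta_x = math.atan2(ay, az)                  # roll about the x axis
    theta_y = math.atan2(-ax, math.hypot(ay, az)) # pitch about the y axis
    theta_z = 0.0  # yaw is unobservable from gravity alone; a gyroscope
                   # or magnetometer would be needed to recover it
    return theta_x, theta_y, theta_z
```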
In Step S604, the depth sensor 102 detects a plurality of three-dimensional coordinates of a plurality of points on the projection surface 16 with respect to the reference point to obtain the configuration of the projection surface 16. The configuration of the projection surface 16 may be defined by the plurality of three-dimensional coordinates of the plurality of points on the projection surface 16. The reference point may be set at the depth sensor 102, at the focal point Fc of the projection lens of the projector 10, or between the depth sensor 102 and the focal point Fc. Since the projection surface 16 may be uneven, the projection region of the projector 10 on the projection surface 16 may be affected by the configuration of the projection surface 16 and may be non-rectangular in shape. Therefore, in Step S606, the processor 14 performs the keystone correction according to the configuration of the projection surface 16, so as to generate a corrected projection region on the projection surface 16. Specifically, the processor 14 may determine the projection region of the projector 10 on the projection surface 16 according to the plurality of coordinates of the plurality of points on the projection surface 16 and the horizontal and vertical viewing angles of the projector 10, and determine a rectangular region within the projection region as the corrected projection region. The rotation angle of the rectangular region with respect to the horizontal line may be 0 degrees. The corrected projection region may be defined by three-dimensional spatial coordinates on the projection surface 16. In some embodiments, the corrected projection region may be the largest rectangular region within the projection region. In other embodiments, the corrected projection region may be the largest rectangular region with a predetermined aspect ratio within the projection region. For example, the predetermined aspect ratio of the rectangular region may be 4:3, 16:9, or another ratio.
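The text does not prescribe an algorithm for locating this largest rectangular region; the following grid-search sketch is one possibility, assuming the projection region has already been reduced to a 2D polygon (for example, by projecting the surface points onto a viewing plane) and accepting accuracy limited by the grid resolution.

```python
import numpy as np
from matplotlib.path import Path

def largest_rect(polygon, aspect=16 / 9, grid=200):
    """Sketch of Step S606: approximate the largest axis-aligned rectangle
    with a given aspect ratio inside a 2D polygon that models the
    (possibly non-rectangular) projection region."""
    poly = np.asarray(polygon, dtype=float)
    xmin, ymin = poly.min(axis=0)
    xmax, ymax = poly.max(axis=0)
    dx = (xmax - xmin) / grid
    dy = (ymax - ymin) / grid
    xs = xmin + (np.arange(grid) + 0.5) * dx      # cell-center samples
    ys = ymin + (np.arange(grid) + 0.5) * dy
    X, Y = np.meshgrid(xs, ys)
    inside = Path(poly).contains_points(
        np.column_stack([X.ravel(), Y.ravel()])).reshape(grid, grid)
    # Summed-area table: O(1) test of whether a window is fully inside.
    sat = np.zeros((grid + 1, grid + 1), dtype=np.int64)
    sat[1:, 1:] = inside.cumsum(0).cumsum(1)
    for h in range(grid, 0, -1):                  # try the tallest first
        w = int(round(h * aspect * dy / dx))      # enforce aspect ratio
        if not 0 < w <= grid:
            continue
        win = sat[h:, w:] - sat[:-h, w:] - sat[h:, :-w] + sat[:-h, :-w]
        hits = np.argwhere(win == w * h)          # fully covered windows
        if hits.size:
            i, j = hits[0]                        # top-left cell index
            return (xmin + j * dx, ymin + i * dy, w * dx, h * dy)
    return None                                   # no such rectangle fits
```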
In Step S608, the processor 14 generates the extrinsic parameter matrix according to the orientation of the projector 10. The processor 14 may generate the rotational transformation matrix of the extrinsic parameter matrix according to the Euler angles θx, θy, θz. The rotational transformation matrix includes a set of three-axis rotational transformation vectors r11 to r33, as expressed by Equation (2):
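Equation (2) is reconstructed here under the common assumption of a Z-Y-X (yaw-pitch-roll) composition $R = R_z(\theta_z)\,R_y(\theta_y)\,R_x(\theta_x)$; other Euler conventions merely reorder the factors:

$$\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix}=\begin{bmatrix}\cos\theta_z&-\sin\theta_z&0\\\sin\theta_z&\cos\theta_z&0\\0&0&1\end{bmatrix}\begin{bmatrix}\cos\theta_y&0&\sin\theta_y\\0&1&0\\-\sin\theta_y&0&\cos\theta_y\end{bmatrix}\begin{bmatrix}1&0&0\\0&\cos\theta_x&-\sin\theta_x\\0&\sin\theta_x&\cos\theta_x\end{bmatrix}\qquad\text{Equation (2)}$$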
The processor 14 may generate the translational transformation vectors t1 to t3 according to the location of the depth sensor 102 with respect to the reference point. When the reference point of the world coordinate system is set at the focal point Fc of the projector 10, or between the focal point Fc of the projector 10 and the depth sensor 102, the translational transformation vectors t1 to t3 have fixed values, resulting in a fixed translational transformation matrix of the extrinsic parameter matrix. When the reference point of the world coordinate system is set at the depth sensor 102, the translational transformation vectors t1 to t3 are all 0, and the transformation between the ideal focal point P(X, Y, Z) on the projection surface 16 and the pre-warped image point p(u, v) on the image plane 70 may be expressed by Equation (3):
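Equation (3) follows directly from Equation (1) with $t_1 = t_2 = t_3 = 0$:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{bmatrix}\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix}\begin{bmatrix}X\\Y\\Z\end{bmatrix}\qquad\text{Equation (3)}$$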
In this case, the extrinsic parameter matrix contains only the set of three-axis rotational transformation vectors r11 to r33.
In Step S608, the processor 14 further generates the ideal focal point P(X, Y, Z) on the projection surface 16 according to the coordinates of the corrected projection region 84 and the plurality of coordinates of the plurality of points on the projection surface 16. In some embodiments, the processor 14 may fit the projection image data to the plurality of coordinates of the plurality of points on the projection surface 16 in the corrected projection region 84 to obtain a plurality of ideal focal points. The processor 14 then substitutes the intrinsic parameter matrix, the extrinsic parameter matrix and the plurality of ideal focal points into Equation (1) or Equation (3) to obtain a set of image data of the plurality of pre-warped image points in the pre-warped image on the image plane 70. The set of image data represents the correspondence for transforming the three-dimensional spatial data into two-dimensional image coordinates.
Finally, in Step S610, the projector 10 projects the pre-warped image onto the projection surface 16 according to the set of image data, so as to form the corrected projection image on the projection surface 16 that is perceived by the human eye as rectangular and undistorted.
In some embodiments, in Step S604, the configuration of the projection surface 16 may be detected by a binocular vision method. When the binocular vision method is used, the depth sensor 102 may be a camera. The camera may have a high resolution and may be suitable for detecting a projection surface 16 having a complicated configuration, such as a curved projection surface 16. The binocular vision method simulates how a scene is processed by the human eyes. Specifically, the binocular vision method includes observing the same feature point on the projection surface 16 from two locations, obtaining from each location a two-dimensional image of the same feature point, and then performing a matching operation according to the image data of the respective two-dimensional images to reconstruct the three-dimensional coordinates of the object. The three-dimensional coordinates contain the depth information of the object, thereby generating the configuration of the projection surface 16. The projection system S5 employs the projector 10 and the depth sensor 102 as the two image capture devices in the binocular vision method to acquire the two-dimensional images of the same feature point from two locations. The projector 10 projects the first projection image onto the projection surface 16, the camera receives the image reflected from the projection surface 16, and the processor 14 generates the plurality of coordinates of the plurality of points on the projection surface 16 with respect to the reference point according to the first projection image and the reflected image, so as to define the configuration of the projection surface 16. The first projection image may include a plurality of calibration spots or other correction patterns.
where r11a to r33a are the rotational transformation vectors of the digital light processing device 100, t1a to t3a are the translational transformation vectors of the digital light processing device 100, r11b to r33b are the rotational transformation vectors of the image sensor, and t1b to t3b are the translational transformation vectors of the image sensor. According to Equation (1), the pinhole camera model Equation (6) and Equation (7) of the digital light processing device 100 and the image sensor can be obtained respectively as follows:
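Equations (4) to (7) are reconstructed here under two assumptions: that (ua, va) and (ub, vb) denote normalized image coordinates (intrinsic parameters already applied), and that Equations (4) and (5) define the extrinsic parameter matrices [Ra|ta] and [Rb|tb] of the digital light processing device 100 and the image sensor, respectively. On those assumptions, Equations (6) and (7) consistent with the derivation of Equations (8) and (9) take the form:

$$s_a\begin{bmatrix}u_a\\v_a\\1\end{bmatrix}=\begin{bmatrix}r_{11a}&r_{12a}&r_{13a}&t_{1a}\\r_{21a}&r_{22a}&r_{23a}&t_{2a}\\r_{31a}&r_{32a}&r_{33a}&t_{3a}\end{bmatrix}\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix}\qquad\text{Equation (6)}$$

$$s_b\begin{bmatrix}u_b\\v_b\\1\end{bmatrix}=\begin{bmatrix}r_{11b}&r_{12b}&r_{13b}&t_{1b}\\r_{21b}&r_{22b}&r_{23b}&t_{2b}\\r_{31b}&r_{32b}&r_{33b}&t_{3b}\end{bmatrix}\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix}\qquad\text{Equation (7)}$$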
Substitute Equation (4) into Equation (6) to obtain Equation (8):
$$\begin{aligned}
r_{11a}X + r_{12a}Y + r_{13a}Z + t_{1a} - r_{31a}u_aX - r_{32a}u_aY - r_{33a}u_aZ &= t_{3a}u_a\\
r_{21a}X + r_{22a}Y + r_{23a}Z + t_{2a} - r_{31a}v_aX - r_{32a}v_aY - r_{33a}v_aZ &= t_{3a}v_a
\end{aligned}\qquad\text{Equation (8)}$$
Substitute Equation (5) into Equation (7) to obtain Equation (9):
$$\begin{aligned}
r_{11b}X + r_{12b}Y + r_{13b}Z + t_{1b} - r_{31b}u_bX - r_{32b}u_bY - r_{33b}u_bZ &= t_{3b}u_b\\
r_{21b}X + r_{22b}Y + r_{23b}Z + t_{2b} - r_{31b}v_bX - r_{32b}v_bY - r_{33b}v_bZ &= t_{3b}v_b
\end{aligned}\qquad\text{Equation (9)}$$
Geometrically, Equation (8) and Equation (9) represent the line from the focal point Oa to the feature point P and the line from the focal point Ob to the feature point P, respectively, and the intersection of the two lines is the solution of the three-dimensional coordinates (X, Y, Z) of the feature point P. The processor 14 may generate a plurality of three-dimensional coordinates of the plurality of feature points on the projection surface 16 according to the plurality of projection points on the image plane of the digital light processing device 100 and the plurality of corresponding projection points on the image plane of the image sensor, thereby defining the configuration of the projection surface 16.
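As a sketch of this triangulation, the four linear equations of Equations (8) and (9) may be stacked into a single system and solved for (X, Y, Z) in the least-squares sense, since under measurement noise the two lines rarely intersect exactly; the function below assumes normalized image coordinates as in the reconstruction above.

```python
import numpy as np

def triangulate(Ra, ta, ua, va, Rb, tb, ub, vb):
    """Solve Equations (8) and (9) jointly for the 3D feature point P.
    Ra, Rb: 3x3 rotation matrices; ta, tb: translation vectors;
    (ua, va), (ub, vb): normalized image coordinates of the same point."""
    rows, rhs = [], []
    for R, t, u, v in ((Ra, ta, ua, va), (Rb, tb, ub, vb)):
        rows.append(R[0] - u * R[2]); rhs.append(t[2] * u - t[0])
        rows.append(R[1] - v * R[2]); rhs.append(t[2] * v - t[1])
    # Least-squares intersection of the two viewing rays.
    P, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return P   # (X, Y, Z) with respect to the reference point
```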
In some other embodiments, in Step S604, the configuration of the projection surface 16 may be detected using a time-of-flight ranging method. When the time-of-flight ranging method is used, the depth sensor 102 may be a three-dimensional time-of-flight (3D ToF) sensor. Compared to the camera, the 3D ToF sensor may have a lower resolution and a faster detection speed, and may be suitable for detecting a projection surface 16 with a simple configuration, such as a flat projection surface 16. The time-of-flight ranging method may include obtaining the distances between feature points of an object in a specific field of view (FoV) and the 3D ToF sensor, and forming a plane from any three points, so as to derive the configuration of the projection surface 16. When the time-of-flight ranging method is used, the 3D ToF sensor transmits a transmission signal to the projection surface 16 and receives a reflection signal reflected by the projection surface 16 in response to the transmission signal, and the processor 14 generates the plurality of coordinates of the plurality of points on the projection surface 16 with respect to the reference point according to the time difference between the transmission signal and the reflection signal, thereby defining the configuration of the projection surface 16.
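A minimal sketch of this computation, assuming the sensor reports a round-trip time difference along a known unit measurement direction per sample: the range follows from the speed of light, and a plane through any three recovered points gives a local estimate of the surface configuration.

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)

def tof_point(delta_t, direction):
    """Recover a 3D point from a round-trip time difference and the unit
    direction of the measurement ray (reference point at the sensor)."""
    distance = C * delta_t / 2.0            # one-way range
    return distance * np.asarray(direction)

def plane_from_points(p1, p2, p3):
    """Form a plane from three points: returns (unit normal n, offset d)
    such that n . x = d for every point x on the plane."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, float(n @ p1)
```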
The projection system S5 and the projection method 600 employ a depth sensor and an inertia measurement unit fixed at the projector to generate the orientation of the projector and to detect the configuration of the projection surface. The distortion owing to the tilted projector is corrected according to the orientation of the projector, and the keystone correction is performed according to the configuration of the projection surface to correct the distortion owing to that configuration. The image is thereby pre-warped, and the pre-warped image is projected onto the projection surface to form a corrected projection image that is rectangular and free of distortion.
The projection system S10 is different from the projection system S5 in that the processor 14 may perform the keystone correction according to the configuration of the first projection surface 16a and the configuration of the second projection surface 16b to generate a first corrected projection region and a second corrected projection region. In some embodiments, a distance between the first projector 10a and the second projector 10b may be measured in advance, and the processor 14 may perform the keystone correction according to the distance between the first projector 10a and the second projector 10b, the configuration of the first projection surface 16a and the configuration of the second projection surface 16b to generate the first corrected projection region and the second corrected projection region. For image correction of the first projector 10a, the processor 14 may generate a first pre-warped image according to the orientation of the first projector 10a and the plurality of coordinates of the plurality of points on the first projection surface 16a with respect to the first reference point, for the first projector 10a to project the first pre-warped image onto the first projection surface 16a to form a first corrected projection image that is free of distortion. Similarly, for image correction of the second projector 10b, the processor 14 may generate a second pre-warped image according to the orientation of the second projector 10b and the plurality of coordinates of the plurality of points on the second projection surface 16b with respect to the second reference point, for the second projector 10b to project the second pre-warped image onto the second projection surface 16b to form a second corrected projection image that is free of distortion. In some embodiments, the first projector 10a and the second projector 10b may project the first pre-warped image and the second pre-warped image onto the first corrected projection region and the second corrected projection region respectively and perform an image blending process, so as to project the first pre-warped image and the second pre-warped image onto the uneven projection surface 16 to display a rectangular and distortion-free corrected projection image. The image blending process may be a gradient blending process.
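The gradient blending process is not detailed in the text; the sketch below shows one common realization, a linear alpha ramp across an assumed horizontal overlap between the two pre-warped images, so that the weights of the two projectors sum to one everywhere in the overlap.

```python
import numpy as np

def gradient_blend_masks(width, overlap):
    """Build per-projector weight ramps for a horizontal overlap of
    `overlap` pixels between two side-by-side pre-warped images.
    Projector 10a fades out where projector 10b fades in."""
    ramp = np.linspace(1.0, 0.0, overlap)
    wa = np.ones(width); wa[width - overlap:] = ramp    # projector 10a
    wb = np.ones(width); wb[:overlap] = ramp[::-1]      # projector 10b
    return wa, wb   # wa + wb == 1 over aligned overlap columns

# Applying a mask column-wise to an H x W x 3 image:
# blended_a = image_a * wa[None, :, None]
```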
While the embodiment uses two projectors for projection, the projection system S10 can also use more than two projectors to co-project on the projection surface 16 in a similar manner to produce a rectangular and distortion-free corrected projection image.
Step S1102: the first inertia measurement unit 104a performs a three-axis acceleration measurement to generate the orientation of the first projector 10a, and the second inertia measurement unit 104b performs a three-axis acceleration measurement to generate the orientation of the second projector 10b;
Step S1104: the first depth sensor 102a detects the plurality of coordinates of the plurality of points on the first projection surface 16a with respect to the first reference point, and the second depth sensor 102b detects the plurality of coordinates of the plurality of points on the second projection surface 16b with respect to the second reference point;
Step S1106: the processor 14 performs the keystone correction according to at least the plurality of coordinates of the plurality of points on the first projection surface 16a with respect to the first reference point and the plurality of coordinates of the plurality of points on the second projection surface 16b with respect to the second reference point to generate the first corrected projection region and the second corrected projection region;
Step S1108: the processor 14 generates the first set of image data according to at least the orientation of the first projector 10a, the first corrected projection region and the plurality of coordinates of the plurality of points on the first projection surface 16a with respect to the first reference point, and generates the second set of image data according to at least the orientation of the second projector 10b, the second corrected projection region and the plurality of coordinates of the plurality of points on the second projection surface 16b with respect to the second reference point;
Step S1110: the first projector 10a projects the first pre-warped image onto the first projection surface 16a according to the first set of image data, and the second projector 10b projects the second pre-warped image onto the second projection surface 16b according to the second set of image data.
The description of Steps S1102 to S1110 may be found in the preceding paragraphs and will not be repeated here. The projection method 1100 is suitable for the multi-projection system S10. The projection method 1100 employs the inertia measurement units fixed at the respective projectors to correct the distortions resulting from the tilts of the projectors, and employs the corresponding depth sensors to detect the configurations of the corresponding projection surfaces and perform the keystone correction, so as to correct the distortion due to the configurations of the corresponding projection surfaces. The images are then pre-warped, so that the corresponding pre-warped images are projected onto the corresponding projection surfaces to form rectangular and distortion-free projection images.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.