This application claims the priority benefit of Taiwan application serial no. 112126884, filed on Jul. 19, 2023. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
The disclosure relates to a camera device, and in particular to a stereoscopic camera device.
The principle of stereoscopic vision experienced by the human eyes is that there is a spacing between the left and right eyes, so the left and right eyes see slightly different images. Based on this principle, stereoscopic camera devices on the market use two cameras, with optical axes parallel to each other, to capture an object, and then synthesize the captured images into a stereoscopic image.
However, the stereoscopic images captured by the stereoscopic camera devices currently on the market have the following issues. When the stereoscopic image is presented (for example, on a 3D display), the vergence of both eyes does not match the accommodation of both eyes; that is, there is a visual vergence-accommodation conflict (VAC), thus causing visual fatigue and physical discomfort. In addition, the capturing manner of such a stereoscopic camera device cannot present the stereoscopic image behind the screen and is not suitable for close-range capturing.
The disclosure provides a stereoscopic camera device, which can effectively reduce visual vergence-accommodation conflict of a stereoscopic image and provide optimal stereoscopic visual effects.
An embodiment of the disclosure provides a stereoscopic camera device, which includes a first camera, a second camera, and a controller. The first camera is configured to capture toward an object to obtain a first image of the object. The second camera is configured to capture toward the object to obtain a second image of the object. The controller is electrically connected to the first camera and the second camera. The controller synthesizes the first image and the second image to generate a stereoscopic image of the object. The controller controls the first camera and the second camera so that toe-in angles of an optical axis of the first camera and an optical axis of the second camera are greater than 0. Alternatively, the controller image processes the first image and the second image, so that the processed first image and the processed second image are equivalent to images captured when the toe-in angles of the optical axis of the first camera and the optical axis of the second camera are greater than 0.
Based on the above, the first camera and the second camera in the stereoscopic camera device can be controlled, so that the toe-in angles of the optical axis of the first camera and the optical axis of the second camera are greater than 0. Alternatively, the controller image processes the first image and the second image, so that the processed first image and the processed second image are equivalent to the images captured when the toe-in angles of the optical axis of the first camera and the optical axis of the second camera are greater than 0. Therefore, the stereoscopic camera device can present the stereoscopic image in front of or behind a 3D screen, and the stereoscopic camera device can be suitable for close-range capturing.
In the embodiment, the first camera 100 and the second camera 200 may be complementary metal-oxide semiconductor (CMOS) cameras or charge coupled device (CCD) cameras, but the disclosure is not limited thereto. The first camera 100 is configured to capture toward an object O to obtain a first image I1 of the object O. The second camera 200 is configured to capture toward the object O to obtain a second image I2 of the object O.
In the embodiment, the controller 300 includes, for example, a microcontroller unit (MCU), a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a programmable controller, a programmable logic device (PLD), other similar devices, or a combination of the devices, but the disclosure is not limited thereto. In addition, in an embodiment, each function of the controller 300 may be implemented as multiple program codes. The program codes are stored in a memory and the controller 300 executes the program codes. Alternatively, in an embodiment, each function of the controller 300 may be implemented as one or more circuits. The disclosure is not limited to using software or hardware to implement each function of the controller 300.
In the embodiment, the controller 300 is electrically connected to the first camera 100 and the second camera 200. The controller 300 synthesizes the first image I1 and the second image I2 to generate a stereoscopic image of the object O. The controller 300 controls the first camera 100 and the second camera 200 so that toe-in angles θ1 and θ2 of an optical axis A1 of the first camera 100 and an optical axis A2 of the second camera 200 are greater than 0.
In a preferred embodiment, a difference value between included angles α1 and α1′ formed by first straight lines L1 and L1′ and a second straight line L2 and an included angle α2 formed by a third straight line L3 and the second straight line L2 falls within ±0.5 degrees, wherein straight lines formed between the imaging positions D1 and D1′ of the stereoscopic image SI of the object O and a position of a preset left eye LE or a preset right eye are the first straight lines L1 and L1′, a straight line perpendicular to a connection direction between the position of the preset left eye LE and the position of the preset right eye and extending toward a direction of the stereoscopic image SI is the second straight line L2, and a straight line formed between a center of sight CT of the preset left eye LE and the preset right eye on the display surface DP and the position of the preset left eye LE or the preset right eye is the third straight line L3.
In detail, the first camera 100 and the second camera 200 respectively capture toward the object O at positions C1 and C2, and the intersection position (that is, a focus position) P2 of the optical axis A1 of the first camera 100 and the optical axis A2 of the second camera 200 defines the display surface DP, as shown in
In addition, when capturing, if the object O is located between the display surface DP and the first camera 100 or the second camera 200, the stereoscopic image SI is located between the display surface DP and the viewer. On the contrary, if the display surface DP is located between the object O and the first camera 100 or the second camera 200, the display surface DP is located between the stereoscopic image SI and the viewer. For example, when the display surface DP is located between the object O and the first camera 100 or the second camera 200, the imaging position of the stereoscopic image SI is at D1; and when the object O is located between the display surface DP and the first camera 100 or the second camera 200, the imaging position of the stereoscopic image SI is at D1′.
Based on the above, in the embodiment, since the generated image is required to have a 3D effect visually, either the condition that the distances d and d′ between the imaging positions D1 and D1′ of the stereoscopic image SI of the object O and the display surface DP are greater than 0 or the condition that the distance D between the position P1 of the object O and the display surface DP is greater than 0 must be satisfied. However, in a preferred embodiment, when the difference value between the included angles α1 and α1′ and the included angle α2 further falls within ±0.5 degrees, the stereoscopic camera device 10 can effectively reduce the visual vergence-accommodation conflict of the stereoscopic image SI and provide optimal stereoscopic effects.
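The ±0.5-degree condition above can be checked numerically. The following sketch assumes the imaging position lies on the central axis between the two eyes; the interpupillary distance, viewing distance, and image offset values are illustrative assumptions, not values prescribed by the disclosure.

```python
import math

def vergence_angle_deg(ipd_mm, forward_dist_mm):
    # Angle between the line from one eye to an on-axis point and the
    # forward direction (the second straight line L2).
    return math.degrees(math.atan2(ipd_mm / 2, forward_dist_mm))

def vac_condition_met(ipd_mm, vd_mm, image_offset_mm, tol_deg=0.5):
    # alpha2: eye -> center of sight CT on the display surface (third line L3)
    alpha2 = vergence_angle_deg(ipd_mm, vd_mm)
    # alpha1: eye -> imaging position D1/D1' of the stereoscopic image SI
    alpha1 = vergence_angle_deg(ipd_mm, vd_mm - image_offset_mm)
    return abs(alpha1 - alpha2) <= tol_deg

# An image 40 mm in front of a screen viewed at 600 mm with a 65 mm IPD
print(vac_condition_met(65, 600, 40))
```

With these numbers the vergence difference is about 0.22 degrees, so the condition holds; pushing the image much farther from the screen (for example, 150 mm) violates it.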
For example, Table 1 above shows values of the stereoscopic image SI respectively at the imaging position D1′ (the difference value between the included angle α1′ and the included angle α2 is +0.5 degrees) and the imaging position D1 (the difference value between the included angle α1 and the included angle α2 is −0.5 degrees) under different viewing distances (that is, distances between the human eyes and the display surface DP) VD, wherein the unit of each value is mm.
where (xt − xt′) is a parallax between the first image I1 and the second image I2, xt is a distance between the first image I1 at an imaging point IP1 of the first camera 100 and the optical axis A1 of the first camera 100, xt′ is a distance between the second image I2 at an imaging point IP2 of the second camera 200 and the optical axis A2 of the second camera 200, f is a focal length of the first camera 100 and the second camera 200, B′ is a length of the base line B, Z is a shortest distance between the object O and the base line B, and Zcon is a shortest distance between the display surface DP and the base line B.
In the embodiment, the controller 300 further determines a capturable range of the object O according to the shortest distance Zcon between the display surface DP and the base line B, the preset viewing distance VD, and the base line function.
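Although the base line function itself is not reproduced above, the surrounding definitions are consistent with the standard converged-stereo relation (xt − xt′) ≈ f·B′·(1/Z − 1/Zcon). The sketch below uses that assumed relation, together with a hypothetical parallax comfort limit, to derive a capturable range; the relation and all numeric values are illustrative assumptions, not the disclosure's exact function.

```python
def parallax_mm(f_mm, baseline_mm, z_mm, zcon_mm):
    # Assumed base line relation: (xt - xt') ~ f * B' * (1/Z - 1/Zcon);
    # the parallax vanishes when the object lies on the display surface.
    return f_mm * baseline_mm * (1.0 / z_mm - 1.0 / zcon_mm)

def capturable_range(f_mm, baseline_mm, zcon_mm, max_parallax_mm):
    # Nearest/farthest object distances whose parallax magnitude stays
    # within max_parallax_mm (a hypothetical comfort limit).
    k = f_mm * baseline_mm
    z_near = 1.0 / (max_parallax_mm / k + 1.0 / zcon_mm)
    inv_far = 1.0 / zcon_mm - max_parallax_mm / k
    z_far = 1.0 / inv_far if inv_far > 0 else float("inf")
    return z_near, z_far

# f = 4 mm, B' = 60 mm, Zcon = 600 mm, 0.3 mm parallax limit
near, far = capturable_range(4.0, 60.0, 600.0, 0.3)
print(round(near), round(far))  # roughly 343 to 2400 mm
```

Objects between these two distances keep the parallax, and hence the imaging positions D1 and D1′, within the assumed limit around the display surface DP.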
For example, in
In the embodiment, the first actuator 400 is connected to the first camera 100 and is electrically connected to the controller 300. The first actuator 400 is configured to change the angle of the optical axis A1 of the first camera 100 and the position C1 of the first camera 100. The second actuator 500 is connected to the second camera 200 and is electrically connected to the controller 300. The second actuator 500 is configured to change the angle of the optical axis A2 of the second camera 200 and the position C2 of the second camera 200.
In the embodiment, the controller 300 controls the first actuator 400 and the second actuator 500 according to the capturing object distance (that is, the shortest distance Z), the preset viewing distance VD, and a preset interpupillary distance (for example, a distance of a straight line IPD formed between the left eye LE and the right eye RE shown in
Based on the above, in an embodiment of the disclosure, since the stereoscopic camera device 10A is provided with the first actuator 400 and the second actuator 500, the stereoscopic camera device 10A may change the angle of the optical axis A1 of the first camera 100, the position C1 of the first camera 100, the angle of the optical axis A2 of the second camera 200, and the position C2 of the second camera 200 by the first actuator 400 and the second actuator 500. Therefore, the stereoscopic camera device 10A may change a spacing between and the angles of the first camera 100 and the second camera 200 in coordination with the capturing distance. The remaining advantages of the stereoscopic camera device 10A are similar to those of the stereoscopic camera device 10 and will not be described again here.
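As a concrete illustration, one hypothetical control rule the controller 300 could apply is to scale the camera spacing from the preset interpupillary distance by the ratio of the capturing object distance to the preset viewing distance, and then toe both optical axes in so that they converge at the object distance. This rule and the numeric values are assumptions for illustration only, not the mapping prescribed by the disclosure.

```python
import math

def toe_in_angle_deg(baseline_mm, zcon_mm):
    # Each camera is rotated inward so the optical axes intersect on the
    # display surface at distance Zcon from the base line.
    return math.degrees(math.atan2(baseline_mm / 2, zcon_mm))

def camera_geometry(z_obj_mm, vd_mm, ipd_mm):
    # Hypothetical rule: spacing = IPD * (capturing distance / viewing
    # distance); converge the axes on the object distance.
    baseline = ipd_mm * (z_obj_mm / vd_mm)
    return baseline, toe_in_angle_deg(baseline, z_obj_mm)

# 300 mm capturing distance, 600 mm preset viewing distance, 65 mm IPD
b, theta = camera_geometry(300.0, 600.0, 65.0)
print(round(b, 1), round(theta, 2))  # 32.5 mm spacing, ~3.1 degree toe-in
```

The controller 300 would then command the first actuator 400 and the second actuator 500 to realize the computed spacing and toe-in angles θ1 and θ2.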
In the embodiment, the distance sensor 600 is electrically connected to the controller 300. The distance sensor 600 is configured to sense the capturing object distance (that is, the shortest distance Z) between the base line B and the object O.
Based on the above, in an embodiment of the disclosure, since the stereoscopic camera device 10B is provided with the distance sensor 600, the stereoscopic camera device 10B may sense the capturing object distance between the base line B and the object O by the distance sensor 600. Therefore, the stereoscopic camera device 10B may change the toe-in angle θ1 of the first camera 100, the toe-in angle θ2 of the second camera 200, or the distance between the first camera 100 and the second camera 200 according to the capturing object distance, the preset viewing distance VD, and the preset interpupillary distance. The remaining advantages of the stereoscopic camera device 10B are similar to those of the stereoscopic camera device 10A and will not be described again here.
In an embodiment, as the size of the display for presenting the stereoscopic image SI becomes larger, the stereoscopic camera devices 10, 10A, and 10B are suitable for a larger capturing distance (that is, the shortest distance Zcon). For example, a region R1 is for viewing the stereoscopic image SI using a small-size screen (for example, a mobile phone, a tablet, etc.) and is suitable for setting the capturing distance to 300±100 mm. A region R2 is for viewing the stereoscopic image SI using, for example, a 15.6-inch screen (for example, a screen of a laptop) and is suitable for setting the capturing distance to 600±200 mm. A region R3 is for viewing the stereoscopic image SI using, for example, a large-size screen such as a television and is suitable for setting the capturing distance to 900±300 mm.
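The three regions can be expressed as a simple lookup. The region keys below are illustrative names; the ranges are the 300±100, 600±200, and 900±300 mm values from the embodiment.

```python
# Capturing-distance (center, tolerance) in mm, keyed by display region.
CAPTURE_DISTANCE_MM = {
    "R1_small_screen": (300, 100),   # phone / tablet
    "R2_15.6_inch": (600, 200),      # laptop screen
    "R3_large_screen": (900, 300),   # television
}

def in_range(region, z_mm):
    # True if the capturing distance z_mm suits the given display region.
    center, tol = CAPTURE_DISTANCE_MM[region]
    return abs(z_mm - center) <= tol

print(in_range("R2_15.6_inch", 650))  # True
```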
A figure illustrates a processed first image and a processed second image according to an embodiment of the disclosure.
In the embodiment, an optical axis A1′ of the first camera 100 and an optical axis A2′ of the second camera 200 are parallel to each other and perpendicular to the display surface DP.
In the embodiment, the processed first image PI1 is an image obtained after cropping a part of the first image I1 away from a camera system center ST, and the processed second image PI2 is an image obtained after cropping a part of the second image I2 away from the camera system center ST. The camera system center ST is a center of the straight line formed between the first camera 100 and the second camera 200. For example, in
In the embodiment, the controller 300 executes the following conversion on the first image I1 and the second image I2 to generate the processed first image PI1 and the processed second image PI2:
where xt is the distance between the first image I1 at the imaging point IP1 of the first camera 100 and the optical axis A1 of the first camera 100 when the toe-in angle θ1 of the optical axis A1 of the first camera 100 is greater than 0, xt′ is the distance between the second image I2 at the imaging point IP2 of the second camera 200 and the optical axis A2 of the second camera 200 when the toe-in angle θ2 of the optical axis A2 of the second camera 200 is greater than 0, xs is a distance between the first image I1 at an imaging point IP1′ of the first camera 100 and the optical axis A1′ of the first camera 100 when the toe-in angle of the optical axis A1′ of the first camera 100 is equal to 0, and xs′ is a distance between the second image I2 at an imaging point IP2′ of the second camera 200 and the optical axis A2′ of the second camera 200 when the toe-in angle of the optical axis A2′ of the second camera 200 is equal to 0.
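The conversion itself is not reproduced above, but under a pinhole model a pure rotation of a camera by its toe-in angle maps image coordinates as xs = f·tan(arctan(xt/f) + θ). The sketch below uses that assumed mapping with illustrative values; it makes no claim about the disclosure's exact formula or sign convention.

```python
import math

def toe_in_to_parallel(x_toed_mm, f_mm, toe_in_deg):
    # Assumed pinhole rotation model: a ray imaged at x_toed in the toed-in
    # camera makes angle atan(x_toed / f) with the toed-in axis A1; adding
    # the toe-in angle gives its angle to the parallel axis A1', hence
    # xs = f * tan(atan(xt / f) + theta).
    theta = math.radians(toe_in_deg)
    return f_mm * math.tan(math.atan2(x_toed_mm, f_mm) + theta)

# A point on the toed-in optical axis (xt = 0), f = 4 mm, theta = 3 degrees
print(round(toe_in_to_parallel(0.0, 4.0, 3.0), 3))
```

With a zero toe-in angle the mapping reduces to the identity, which matches the parallel-axis case where xs equals xt.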
In other words, as shown in
In summary, in an embodiment of the disclosure, the stereoscopic camera device includes the first camera, the second camera, and the controller. The first camera and the second camera may be controlled, so that the toe-in angles of the optical axis of the first camera and the optical axis of the second camera are greater than 0. Alternatively, the controller image processes the first image and the second image, so that the processed first image and the processed second image are equivalent to the images captured when the toe-in angles of the optical axis of the first camera and the optical axis of the second camera are greater than 0. Therefore, the stereoscopic camera device can present the stereoscopic image in front of or behind the 3D screen, and the stereoscopic camera device can be suitable for close-range capturing.
In addition, in the embodiment of the disclosure, the capturing manner of the stereoscopic camera device is designed such that the difference value between the included angle formed by the first straight line and the second straight line and the included angle formed by the third straight line and the second straight line falls within ±0.5 degrees, which can effectively reduce the visual vergence-accommodation conflict of the stereoscopic image. Therefore, the stereoscopic camera device can provide optimal stereoscopic visual effects.
Number | Date | Country | Kind |
---|---|---|---|
112126884 | Jul 2023 | TW | national |