Field of Invention
The present application relates to a controlling method for an image capturing apparatus. More particularly, the present application relates to a controlling method for preventing a user from adopting an inappropriate gesture while capturing images.
Description of Related Art
A stereoscopic image is based on the principle of human binocular vision. One conventional way to establish a stereoscopic image is to utilize two cameras separated by a certain gap to capture two images, which correspond to the same object(s) in a scene from slightly different positions/angles. The X-dimensional and Y-dimensional information of the objects in the scene can be obtained from one image. For the Z-dimensional information, the two images are transferred to a processor, which calculates the Z-dimensional information (i.e., depth information) of the objects in the scene. The depth information is important and necessary for applications such as three-dimensional (3D) vision, object recognition, image processing, image motion detection, etc.
The digital images captured by one image capture device (e.g., a camera) are two-dimensional, taken from a single viewing angle. In order to obtain the depth information, two images taken from slightly different positions/angles are needed. As mentioned above, the two images can be captured by two cameras (a multi-view system) separated by a certain gap in a conventional solution. However, this solution, which involves one extra camera for obtaining the depth information, brings extra cost and extra weight.
On the other hand, users can simulate a multi-view system by taking serial shots with one single camera. Two (or more) images are sequentially captured while the user moves the camera horizontally. These captured images are processed to calculate the depth information. To optimize the effect of the stereo process, the user should hold the camera with a correct photo-taking gesture. More specifically, the user should rotate the camera along a circular trajectory whose center is located at the user. Unfortunately, users might simply rotate the camera on their palms without moving the camera horizontally. In this case, the camera is simply rotated at a fixed spot without displacement, which leads to imprecise disparity information for the following depth computation.
An aspect of the present disclosure is to provide a controlling method, which is suitable for an electronic apparatus comprising a first image-capturing unit and a second image-capturing unit. The controlling method includes steps of: obtaining a plurality of second images by the second image-capturing unit when the first image-capturing unit is operated to capture a plurality of first images for a stereo process; detecting an object in the second images; calculating a relative displacement of the object in the second images; and, determining whether the first images are captured by an inappropriate gesture according to the relative displacement calculated from the second images.
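As a non-limiting illustration, the steps above can be sketched as follows; the function names, the stand-in object detector, and the threshold value are hypothetical and not part of the claimed method.

```python
def is_inappropriate_gesture(second_images, detect_object, threshold):
    """Sketch of the claimed steps: detect an object (e.g., the user's
    face) in each second image, calculate its relative displacement,
    and compare it against a threshold to judge the capturing gesture."""
    # Detect the object's horizontal position in each second image.
    positions = [detect_object(image) for image in second_images]
    # Calculate the relative displacement of the object between images.
    relative_displacement = abs(positions[-1] - positions[0])
    # A large displacement indicates the inappropriate (pivoting) gesture.
    return relative_displacement > threshold
```

Here `detect_object` stands in for any object/face detector that returns a coordinate; in practice the threshold would be tuned for the particular device.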
Another aspect of the present disclosure is to provide an electronic apparatus, which includes a casing, a first image-capturing unit, a second image-capturing unit and a control module. The first image-capturing unit is disposed on a first side of the casing. The second image-capturing unit is disposed on a second side of the casing opposite to the first side. The control module is coupled with the first image-capturing unit and the second image-capturing unit. The second image-capturing unit is enabled by the control module to capture a plurality of second images when the first image-capturing unit is operated to capture a plurality of first images for a stereo process. The second images are utilized to determine whether the first images are captured by an inappropriate gesture.
The disclosure can be more fully understood by reading the following detailed description of the embodiments, with reference made to the accompanying drawings as follows.
In order to obtain depth information of objects, at least two images taken from slightly different positions/angles are needed. The displacement of a near object between the two images will be larger than the displacement of a far object between the two images, such that a stereo algorithm can establish the depth information according to the difference between the displacements.
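The inverse relation between an object's displacement (disparity) across the two images and its depth can be sketched with the standard pinhole stereo formula; the focal length and baseline values below are purely illustrative.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline):
    """Pinhole stereo model: depth = focal_length * baseline / disparity.
    A nearer object shows a larger disparity, hence a smaller depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline / disparity_px

# Illustrative values: 1000 px focal length, 60 mm baseline.
near_depth = depth_from_disparity(50, 1000, 60)  # large disparity -> near
far_depth = depth_from_disparity(5, 1000, 60)    # small disparity -> far
```

This is why a real displacement between the capture positions is essential: with no baseline, the disparity carries no depth signal.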
Users can stand at the same spot and sequentially capture two images for depth recovery by moving the electronic apparatus. In order to reduce the difficulty of depth estimation, the electronic apparatus for capturing the images should be held and moved with a proper gesture. In some embodiments, this disclosure provides a controlling method for prompting the user to move the electronic apparatus with the proper gesture.
Reference is made to
For example, some smart phones include two camera units. In general, one of them is used for photo-shooting, and the other is used for video-chatting (as a webcam), auxiliary shooting or other purposes. In the embodiment, the first image-capturing unit 221 can be a rear camera for general photo-shooting, and the second image-capturing unit 222 can be a front camera. In some embodiments, the first image-capturing unit 221 and the second image-capturing unit 222 are both built-in cameras of the electronic apparatus 200. In other embodiments, the first image-capturing unit 221 and/or the second image-capturing unit 222 can be a stand-alone camera which is attached onto the electronic apparatus 200.
In some embodiments, the electronic apparatus 200 can be a digital camera, a digital camcorder, a video camera, a phone with a built-in camera, a smart phone or any equivalent digital image capturing device.
As shown in
To acquire stereo information of a target, the first image-capturing unit 221 can capture a series of first images related to the target in sequence. The first images can be utilized in a stereo process (e.g., depth computation, stereo content acquisition, establishing a three-dimensional model). The first images must be captured from different positions (at different times while the electronic apparatus 200 is moving), and the stereo process is based on the disparity information between the first images.
According to the basic pinhole camera model, images captured under a rotation gesture shown in
As shown in
Ideally, the user shall take consecutive images (two or more) along a circular trajectory with a fixed radius. The trajectory shall be approximately a partial circle centered at the user's body, as shown in
On the other hand, when the user captures images with an inappropriate gesture (e.g., rotating the electronic apparatus 200 on the palm at a fixed position), the electronic apparatus 200 is rotated along the second pattern PAT2 as shown in
Thus, in order to avoid an inappropriate gesture, the controlling method 100 is utilized to detect whether the electronic apparatus 200 is operated with a proper image-capturing gesture (as shown in
When the first image-capturing unit 221 is operated to capture a plurality of first images for the stereo process, the controlling method 100 executes step S103 for obtaining a plurality of second images by the second image-capturing unit 222. For example, each time the first image-capturing unit 221 is operated to capture one of the first images, the second image-capturing unit 222 is simultaneously triggered to capture one of the second images in the background. In the embodiment, the second images captured by the second image-capturing unit 222 (e.g., the front camera on the electronic apparatus 200) will provide important clues to identify the image-capturing gesture.
Reference is made to
As shown in
Reference is also made to
As shown in
In the aforesaid embodiments, two second images captured by the second image-capturing unit 222 are explained for demonstration. However, the disclosure is not limited to capturing two first/second images during one stereo process. In other embodiments, two or more first/second images can be captured in sequence in order to perform the stereo process (e.g., depth computation or stereo content acquisition).
Therefore, the relationship between the images captured by the front camera can be utilized to determine whether the user holds and moves the electronic apparatus 200 with a proper gesture.
As shown in
Afterward, the controlling method 100 executes step S105 for calculating a relative displacement of the object in the second images by the displacement calculation unit 232 of the control module 230.
In the example shown in
In the example shown in
As shown in
In addition, the controlling method 100 is executed to selectively prompt a notification for re-capturing the first images according to the relative displacement when the first images are captured by the inappropriate gesture.
When the relative displacement exceeds the threshold value (e.g., the relative displacement is D2 shown in the
In the embodiment, holding and moving the first image-capturing unit 221 along the second pattern PAT2 is regarded as the inappropriate gesture, because this gesture (referring to
When the relative displacement is less than the threshold value (e.g., the relative displacement is D1 shown in the
In the embodiment, holding and moving the first image-capturing unit 221 along the first pattern PAT1 is regarded as the proper gesture, because this gesture (referring to
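The two outcomes above can be sketched as a single decision; the displacement values standing in for D1/D2 and the threshold below are illustrative only, not values from the disclosure.

```python
def gesture_decision(relative_displacement, threshold):
    """Exceeding the threshold indicates the inappropriate (pivoting)
    gesture, so the method prompts re-capture; otherwise the proper
    gesture is assumed and the stereo process proceeds."""
    if relative_displacement > threshold:
        return "prompt notification to re-capture the first images"
    return "proceed with the stereo process"

# Illustrative values: D1 small (proper gesture), D2 large (inappropriate).
D1, D2, THRESHOLD = 8, 45, 20
```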
By calculating the facial features or the center of the largest face in the view of the second image-capturing unit 222 (e.g., the front camera), the controlling method 100 and the electronic apparatus 200 can easily identify whether the image-capturing gesture is proper. Accordingly, the controlling method 100 and the electronic apparatus 200 can inform the user to re-capture the first images to optimize the preciseness of the stereo process.
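Selecting the largest face and computing its center can be sketched as follows; the bounding-box format (x, y, width, height) is an assumption, and the face detector producing the boxes is not shown.

```python
def center_of_largest_face(face_boxes):
    """face_boxes: list of (x, y, w, h) bounding boxes from a face
    detector. Returns the center of the face with the largest area."""
    x, y, w, h = max(face_boxes, key=lambda box: box[2] * box[3])
    return (x + w / 2, y + h / 2)
```

Tracking this center across the second images yields the relative displacement used in the gesture determination.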
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present application without departing from the scope or spirit of the application. In view of the foregoing, it is intended that the present application cover modifications and variations of this application provided they fall within the scope of the following claims.
This application claims the priority benefit of U.S. Provisional Application Ser. No. 61/807,341, filed Apr. 2, 2013, the full disclosure of which is incorporated herein by reference.
U.S. Patent Documents
Number | Name | Date | Kind |
---|---|---|---|
5444791 | Kamada | Aug 1995 | A |
5594469 | Freeman | Jan 1997 | A |
5617312 | Iura | Apr 1997 | A |
5808678 | Sakaegi | Sep 1998 | A |
5991428 | Taniguchi | Nov 1999 | A |
6191773 | Maruno | Feb 2001 | B1 |
6498628 | Iwamura | Dec 2002 | B2 |
6501515 | Iwamura | Dec 2002 | B1 |
7639233 | Marks | Dec 2009 | B2 |
7828659 | Wada | Nov 2010 | B2 |
8035624 | Bell | Oct 2011 | B2 |
8115877 | Blatchley | Feb 2012 | B2 |
8144121 | Kitaura | Mar 2012 | B2 |
8199108 | Bell | Jun 2012 | B2 |
8203601 | Kida | Jun 2012 | B2 |
8228305 | Pryor | Jul 2012 | B2 |
8230367 | Bell | Jul 2012 | B2 |
8253746 | Geisner | Aug 2012 | B2 |
8300042 | Bell | Oct 2012 | B2 |
8340432 | Mathe | Dec 2012 | B2 |
8368819 | Lee | Feb 2013 | B2 |
8379101 | Mathe | Feb 2013 | B2 |
8418085 | Snook | Apr 2013 | B2 |
20050036067 | Ryal | Feb 2005 | A1 |
20070279485 | Ohba | Dec 2007 | A1 |
20080088588 | Kitaura | Apr 2008 | A1 |
20130050427 | Chou | Feb 2013 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
201015203 | Apr 2010 | TW |
201310970 | Mar 2013 | TW |
Other Publications
Entry |
---|
Bastian et al., “Interactive Modelling for AR Applications,” IEEE International Symposium on Mixed and Augmented Reality 2010, Science and Technology Proceedings, Oct. 13-16, Seoul, Korea. |
Corresponding Taiwanese Office Action in which these art references were cited. |
Number | Date | Country | |
---|---|---|---|
20140293012 A1 | Oct 2014 | US |
Number | Date | Country | |
---|---|---|---|
61807341 | Apr 2013 | US |