1. Technical Field
Embodiments of the present disclosure relate to cameras and methods for adjusting camera focus, and particularly to a camera and an auto-focusing method of the camera.
2. Description of Related Art
It is known in the art that many cameras employ a focusing system which automatically focuses the camera. While current cameras are designed with automatic focusing systems, a user must try to set up the camera within the allowable distance range of the focusing system included in the camera. The user must also rely on his or her own visual feedback across different images of a scene in order to achieve an optimal image of the scene. These automatic focusing systems have met with a modicum of success, but they still do not provide feedback that allows the user to properly focus the camera automatically. Therefore, there is room for improvement within the prior art.
The present disclosure, including the accompanying drawings, is illustrated by way of example and not by way of limitation. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
In the present disclosure, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language. In one embodiment, the programming language may be Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as in an EPROM. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of non-transitory computer-readable medium or other storage system. Some non-limiting examples of a non-transitory computer-readable medium include CDs, DVDs, flash memory, and hard disk drives.
The depth-sensing lens 11 is a time-of-flight (TOF) camera lens with a 3D image-capturing function for capturing stereoscopic images of a scene, and can sense a depth between an object of the scene and the camera 1.
The auto-focusing apparatus 12 is a motor that drives the depth-sensing lens 11 to adjust a focusing position of the camera 1 to an optimal position for capturing an image of the scene. The display screen 13 is a light-emitting diode (LED) screen that may be touched with fingers or a stylus, and is used to display the captured image of the scene. In one embodiment, the storage device 14 may be an internal storage system, such as an SD card, a random access memory (RAM) for temporary storage of information, and/or a read-only memory (ROM) for permanent storage of information. The microprocessor 15 may be a microcontroller unit or a chipset that performs various functions of the camera 1.
In one embodiment, the auto-focusing system 10 includes a startup module 101, a focus selecting module 102, a focus calculating module 103, and an auto-focusing module 104. The modules may comprise computerized instructions in the form of one or more programs that are stored in the storage device 14 and executed by the microprocessor 15. A detailed description of each module is given in the following paragraphs.
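For illustration only, and not as part of the original disclosure, the four modules could be grouped in C (one of the languages named above) roughly as follows; every type and function name here is an assumption introduced for this sketch.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical pixel coordinates of the object the user selects on the
 * display screen 13. */
typedef struct {
    uint16_t x;   /* pixel column */
    uint16_t y;   /* pixel row    */
} object_point_t;

/* Hypothetical grouping of the auto-focusing system 10 as a table of
 * operations, one or two per module. */
typedef struct {
    bool  (*activate_setting_interface)(void);                  /* startup module 101           */
    bool  (*set_focus_mode)(int mode);                           /* startup module 101           */
    bool  (*select_focus)(object_point_t *out_point);            /* focus selecting module 102   */
    float (*sense_depth)(object_point_t point);                  /* focus selecting module 102   */
    float (*calc_optimal_focus)(float depth, float distance);    /* focus calculating module 103 */
    bool  (*drive_to_focus)(float optimal_focus);                /* auto-focusing module 104     */
} autofocus_system_t;
```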
In step S21, the startup module 101 activates a function setting interface of the camera 1, and sets a focus mode of the camera 1 through the function setting interface.
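As a minimal sketch of step S21, the focus mode set through the function setting interface could be stored as a simple setting; the enumeration and function below are hypothetical and are not taken from the disclosure.

```c
/* Hypothetical focus modes selectable through the function setting
 * interface of the camera 1. */
typedef enum {
    FOCUS_MODE_MANUAL = 0,
    FOCUS_MODE_AUTO   = 1   /* mode used by the auto-focusing system 10 */
} focus_mode_t;

static focus_mode_t current_focus_mode = FOCUS_MODE_MANUAL;

/* Called by the startup module 101 after the interface is activated. */
void set_focus_mode(focus_mode_t mode)
{
    current_focus_mode = mode;
}
```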
In step S22, the focus selecting module 102 drives the auto-focusing apparatus 12 to move the depth-sensing lens 11 of the camera 1 so that the lens aims at the scene, and shows images of the scene on the display screen 13 of the camera 1.
In step S23, the focus selecting module 102 determines a focusing position of the depth-sensing lens 11 when the user selects an object of the scene shown as an image on the display screen 13.
In step S24, the focus selecting module 102 senses a depth between the object of the scene and the camera 1 using the depth-sensing lens 11. In one example, the scene includes fruits such as an apple, an orange, and a banana, and the selected object is one of the fruits.
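As a hedged sketch of step S24, assuming the TOF lens exposes its measurements as a per-pixel depth map (an assumption, as are the frame size and the function name below), the depth of the selected object could be read at the pixel the user touched:

```c
#include <stdint.h>

#define FRAME_WIDTH  640
#define FRAME_HEIGHT 480

/* depth_map[y][x]: TOF-measured distance, in millimetres, between the
 * camera 1 and the point of the scene imaged at pixel (x, y). */
static uint16_t depth_map[FRAME_HEIGHT][FRAME_WIDTH];

/* Return the sensed depth (mm) at the pixel the user touched on the
 * display screen 13; 0 means no valid measurement. */
uint16_t sense_depth_at(uint16_t x, uint16_t y)
{
    if (x >= FRAME_WIDTH || y >= FRAME_HEIGHT)
        return 0;
    return depth_map[y][x];
}
```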
In step S25, the focus calculating module 103 determines a distance between the object of the scene and the focusing position of the depth-sensing lens 11. In one embodiment, the focus calculating module 103 can determine the distance according to the position of the object in the scene and the focusing position of the depth-sensing lens 11.
In step S26, the focus calculating module 103 calculates an optimal focus of the camera 1 according to the depth and the distance. The optimal focus of the camera 1 enables the camera 1 to capture a sharply focused image of the scene.
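The disclosure does not state the formula used by the focus calculating module 103, so the following is only a hedged sketch of steps S25 and S26: it assumes the depth from step S24 and the distance from step S25 can be combined into an object distance, and then applies the thin-lens equation with an assumed focal length. Every constant and name here is an assumption.

```c
#include <math.h>

#define FOCAL_LENGTH_MM 4.0   /* assumed focal length of the depth-sensing lens 11 */

/* depth_mm:    depth between the object and the camera 1 (step S24)
 * distance_mm: distance between the object and the current focusing
 *              position of the depth-sensing lens 11 (step S25)
 * returns:     a lens-to-sensor distance (mm) at which the object would be
 *              in focus, as one possible "optimal focus" value, or a
 *              negative value if the object is too close to focus.        */
double calc_optimal_focus(double depth_mm, double distance_mm)
{
    /* One plausible reading of "according to the depth and the distance":
     * treat the two values as perpendicular offsets of the object from the
     * lens and take their combined length as the object distance.         */
    double object_distance = hypot(depth_mm, distance_mm);

    /* Thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = f*d_o/(d_o - f). */
    if (object_distance <= FOCAL_LENGTH_MM)
        return -1.0;
    return (FOCAL_LENGTH_MM * object_distance) /
           (object_distance - FOCAL_LENGTH_MM);
}
```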
In step S27, the auto-focusing module 104 drives the auto-focusing apparatus 12 to move the depth-sensing lens 11 from its current focusing position to the optimal focus of the camera 1. In the embodiment, the auto-focusing apparatus 12 performs this adjustment automatically.
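As an illustrative sketch of step S27, assuming the auto-focusing apparatus 12 is driven in discrete motor steps (the step size and driver function are hypothetical), the lens movement could be computed as follows:

```c
#include <math.h>
#include <stdio.h>

#define MM_PER_MOTOR_STEP 0.01   /* assumed lens travel per motor step */

/* Stand-in for the real motor driver of the auto-focusing apparatus 12;
 * positive step counts move the depth-sensing lens 11 toward the scene. */
static void motor_move_steps(int steps)
{
    printf("auto-focusing apparatus 12: move %d steps\n", steps);
}

/* Move the lens 11 from its current focusing position to the optimal
 * focus calculated in step S26. */
void drive_lens_to_optimal_focus(double current_pos_mm, double optimal_mm)
{
    double delta_mm = optimal_mm - current_pos_mm;
    int steps = (int)lround(delta_mm / MM_PER_MOTOR_STEP);
    motor_move_steps(steps);
}
```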
In step S28, the auto-focusing module 104 controls the depth-sensing lens 11 to capture an image of the scene at the optimal focus of the camera 1 when the user presses a button of the camera 1. The image of the scene is an optimal image of the scene, which may include all of the fruits in the example above, such as the apple, the orange, and the banana.
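A minimal sketch of step S28, assuming hypothetical button and capture primitives provided by the camera firmware:

```c
#include <stdbool.h>

/* Stand-ins for the camera firmware's button and capture primitives;
 * both names are hypothetical. */
extern bool shutter_button_pressed(void);
extern void capture_image(void);

/* Once the lens sits at the optimal focus, wait for the user to press the
 * button of the camera 1, then capture the image of the scene. */
void wait_and_capture(void)
{
    while (!shutter_button_pressed())
        ;   /* idle until the button is pressed */
    capture_image();
}
```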
Although certain disclosed embodiments of the present disclosure have been specifically described, the present disclosure is not to be construed as being limited thereto. Various changes or modifications may be made to the present disclosure without departing from the scope and spirit of the present disclosure.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
101119719 A | Jun 2012 | TW | national
Publication Data

Number | Date | Country
---|---|---
20130322863 A1 | Dec 2013 | US