1. Technical Field
The present disclosure relates to apparatuses and methods for detecting movement of objects, and particularly to an apparatus and a method for detecting spatial (3-D) movement of an object, as opposed to linear (1-D) or planar (2-D) movement.
2. Description of Related Art
Object movement detection has been proposed for use in a variety of fields, such as the operation of three-dimensional (3-D) games and image capture. With the help of object movement detection, real-time operation of 3-D games and real-time image capture can be achieved.
A typical method for detecting movement of an object includes the steps of sending infrared rays toward the object, receiving the infrared rays reflected by the object, and determining the movement of the object based on the elapsed time between the sending and the receiving of the infrared rays and on the quantity of infrared rays received.
However, with the above-described method, only movement along one axis or two axes, i.e., only linear or planar movement, can be detected.
What is needed, therefore, is an apparatus and a method for detecting spatial movement of an object, which can overcome the above shortcomings.
Many aspects of the present apparatus and method can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present apparatus and method. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Embodiments of the present apparatus and method will now be described in detail below with reference to the drawings.
Referring to the drawings, an exemplary apparatus for detecting spatial movement of an object 20 includes a first camera module 110, a second camera module 120, a light source 130, and an image processor 140 configured to process image signals output from the first camera module 110 and the second camera module 120.
The first camera module 110 includes a first image sensor 111, a first lens 112, and a first filter 113 disposed between the first image sensor 111 and the first lens 112. The second camera module 120 includes a second image sensor 121, a second lens 122, and a second filter 123 disposed between the second image sensor 121 and the second lens 122. The first lens 112 defines an optical center O1 and an optical axis L1. The second lens 122 defines an optical center O2 and an optical axis L2. The first filter 113 and the second filter 123 are configured for substantially blocking transmission of light having a wavelength different from that of the light emitted from the light source 130, thus allowing only the light emitted from the light source 130 to pass therethrough. The first image sensor 111 and the second image sensor 121 can be CCD or CMOS image sensors. In the present embodiment, the first image sensor 111 and the second image sensor 121 are disposed at a same level, and the first and second image sensors 111, 121 are spaced from the respective first and second lenses 112, 122 by a same distance f. The distance f can be the focal length of each of the first and second lenses 112, 122.
The light source 130 is located between the first camera module 110 and the second camera module 120, and the first camera module 110 and the second camera module 120 are each spaced from the light source 130 by a same distance d. The light source 130 includes a rotatable laser diode 131, a reflecting mirror 132 facing the laser diode 131, and a light adjusting lens 133 disposed at an opposite side of the laser diode 131 relative to the reflecting mirror 132. The light adjusting lens 133 can be a converging lens or a collimating lens. The reflecting mirror 132 has a curved reflecting surface facing the laser diode 131. The laser diode 131 is disposed between the reflecting mirror 132 and the light adjusting lens 133, and can be driven to rotate by a motor (not shown). A laser beam emitted by the laser diode 131 can transmit directly through the light adjusting lens 133 and then project onto the object 20, or can be reflected by the reflecting mirror 132, transmit through the light adjusting lens 133, and finally project onto the object 20.
The laser beam forms a laser point on the object 20. The rotatable laser diode 131 allows the laser point to scan across the object 20, thus enlarging the illuminated area on the object 20. The first camera module 110 and the second camera module 120 can capture the laser point at the same time, and respectively form a first image signal and a second image signal associated with the object 20.
Referring to the drawings, the laser point on the object 20 is designated as a point D having coordinates (a, b, c) in a three-dimensional coordinate system whose origin is located at the light source 130, whose X-axis extends along the line connecting the optical centers O1, O2 of the first and second lenses 112, 122, and whose Z-axis is parallel to the optical axes L1, L2. The laser point D is imaged by the first camera module 110 at image coordinates (u1, v1) on the first image sensor 111, and by the second camera module 120 at image coordinates (u2, v2) on the second image sensor 121. Because the first and second image sensors 111, 121 are spaced from the respective first and second lenses 112, 122 by the distance f, and the optical centers O1, O2 are spaced from the light source 130 by the distance d on opposite sides thereof, the similar triangles formed by the point D, the optical centers O1, O2, and the image points relate the image coordinates (u1, v1) and (u2, v2) to the coordinates (a, b, c).
In this way, the coordinates (a, b, c) can be determined based on u1, u2, v1, v2, f and d, and thus the image processor 140 can obtain the coordinates (a, b, c) of the laser point D.
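For example, under the illustrative assumptions that the optical center O1 of the first lens 112 lies at (−d, 0, 0) and the optical center O2 of the second lens 122 lies at (d, 0, 0) in the coordinate system described above, and that the image coordinates (u1, v1) and (u2, v2) are measured relative to the respective optical axes L1 and L2, similar triangles give u1 = f(a + d)/c, u2 = f(a − d)/c, and v1 = v2 = f·b/c. Solving these relations gives:

c = 2fd/(u1 − u2);
a = d(u1 + u2)/(u1 − u2); and
b = 2d·v1/(u1 − u2).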
According to the above-described method, the image processor 140 can continuously determine coordinates of the laser point D. In this way, the spatial movement of the object 20 can be determined.
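By way of illustration only, the following sketch (in Python, using the illustrative relations given above; the function names, variable names, and numerical values are hypothetical and not part of this disclosure) shows how successive pairs of image coordinates could be converted into coordinates of the laser point D and into a spatial movement vector of the object 20:

    # Illustrative sketch only. Assumes the coordinate conventions described
    # above: origin at the light source 130, lens optical centers at x = -d
    # and x = +d, image sensors spaced from the lenses by the distance f.
    def triangulate(u1, v1, u2, v2, f, d):
        # Return coordinates (a, b, c) of the laser point D from one pair of
        # image coordinates captured at the same time by both camera modules.
        disparity = u1 - u2              # difference of horizontal image coordinates
        c = 2.0 * f * d / disparity      # depth along the optical axes
        a = d * (u1 + u2) / disparity    # position along the X-axis
        b = 2.0 * d * v1 / disparity     # position along the Y-axis
        return (a, b, c)

    def movement(previous_point, current_point):
        # Displacement of the laser point D between two successive captures.
        return tuple(curr - prev for prev, curr in zip(previous_point, current_point))

    # Hypothetical usage: two successive captures yield a 3-D movement vector.
    p0 = triangulate(12.0, 3.0, 8.0, 3.0, f=4.0, d=30.0)
    p1 = triangulate(13.0, 3.5, 9.0, 3.5, f=4.0, d=30.0)
    dx, dy, dz = movement(p0, p1)        # spatial (3-D) movement of the object 20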
It is understood that the laser diode 131 can be replaced by other light emitting diodes.
It is understood that the above-described embodiments are intended to illustrate rather than limit the disclosure. Variations may be made to the embodiments and methods without departing from the spirit of the disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the disclosure.
Number | Date | Country | Kind
200910303476.1 | Jun 2009 | CN | national