The subject matter relates to input devices, and more particularly, to a non-contact input device, a non-contact input method, and a display device capable of being controlled by non-contact input.
On many occasions, large display screens are needed to show files to users. However, a user may be spaced from the display screen and cannot directly touch it. Thus, the user cannot perform touch operations on the display screen to control the display screen to perform corresponding functions. Improvements in the art are desired.
Implementations of the present technology will now be described, by way of example only, with reference to the attached figures.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant features being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale, and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
In general, the word “module,” as used hereinafter, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware. It will be appreciated that modules may comprise connected logic modules, such as gates and flip-flops, and may comprise programmable modules, such as programmable gate arrays or processors. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of non-transitory computer-readable storage medium or other computer storage device. The term “comprising,” when utilized, means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in the so-described combination, group, series, and the like.
The touch device 20 comprises a transparent touch panel 21, an eye tracker 22, and a distance sensor 23. The eye tracker 22 and the distance sensor 23 are mounted on the touch panel 21. The touch panel 21 comprises an operation surface 210 (shown in
When the user wants to control the display screen 10 to perform a function at a certain input position on the display screen 10, the user can stare at that input position and place the touch device 20 in front of the user's eyes. That is, the user can stare at the input position through the touch device 20. Then, the user can perform a touch operation at a corresponding touch position on the operation surface 210. The touch position is an intersection point of the touch panel 21 and an imaginary line connecting the user's eyes and the input position. Equivalently, the input position is an intersection point of the display screen 10 and an extension of the imaginary line connecting the user's eyes and the touch position. The touch operation can be a clicking operation, a double-clicking operation, a sliding operation, a zoom-in operation, a zoom-out operation, or a character input operation.
When the user performs the touch operation on the touch panel 21, the touch panel 21 determines a type of the touch operation and the touch position of the touch operation on the touch panel 21.
When the user performs the touch operation on the touch panel 21, the eye tracker 22 detects the position of the user's eyes (hereinafter, “eye position”). The eye tracker 22 can be a camera. The camera can capture an image of the user, identify the user's face in the captured image, and identify the eye position in the identified user's face. In at least one exemplary embodiment, the camera further comprises a lens 221, an image sensor 222 positioned at an imaging plane of the lens 221, and an image processor (not shown) electrically connected to the image sensor 222. When the user is in front of the camera, the light reflected from the user can travel through the lens 221 and focus on the image sensor 222. Thus, the image is formed on the image sensor 222. The image processor obtains the image from the image sensor 222, identifies the user's face in the obtained image, and identifies the eye position in the identified user's face. The center of the lens 221 is substantially located on the operation surface 210, that is, the center of the lens 221 is substantially coplanar with the operation surface 210.
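The identification steps can be sketched as follows, assuming OpenCV's bundled Haar cascades stand in for the image processor's face and eye detection; all names here are illustrative, not the patent's implementation:

```python
import cv2

# Haar cascades shipped with OpenCV; the filing does not name an algorithm.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def find_eye_position(image):
    """Return the pixel coordinates of the midpoint between the detected
    eyes, or None if no face or eyes are found in the captured image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(roi)
        if len(eyes) == 0:
            continue
        # Average the eye-box centers, mapped back to full-image coordinates.
        cx = fx + sum(ex + ew / 2.0 for ex, ey, ew, eh in eyes) / len(eyes)
        cy = fy + sum(ey + eh / 2.0 for ex, ey, ew, eh in eyes) / len(eyes)
        return (cx, cy)
    return None
```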
The distance sensor 23 detects a first distance between the user's eyes and the touch panel 21 (that is, a distance between the user and the touch panel 21), and detects a second distance between the display screen 10 and the touch panel 21. In at least one exemplary embodiment, the distance sensor 23 can be an infrared sensor.
Referring to
At block 31, the obtaining module 311 obtains the eye position, the first distance, the second distance, the type of the touch operation, and the touch position of the touch operation on the touch panel 21.
At block 32, the input position determining module 312 determines a corresponding input position on the display screen 10 according to the obtained eye position, the obtained first distance, the obtained second distance, and the obtained touch position. In at least one exemplary embodiment, the input position determining module 312 establishes a three-dimensional (3D) coordinate system X-Y-Z by setting the eye position as an origin, determines a coordinate of the touch position in the 3D coordinate system X-Y-Z, and determines a coordinate of the input position in the 3D coordinate system X-Y-Z according to the obtained first distance, the obtained second distance, and the determined coordinate of the touch position.
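As a minimal sketch of this determination, assuming the display screen 10 is parallel to the touch panel 21 and the X-axis runs from the eyes (the origin) perpendicular to both, the input position follows from similar triangles along the eye-touch line; the function name is illustrative:

```python
def input_position(touch_xyz, d1, d2):
    """Project an eye-origin touch coordinate (x1', y1', z1') onto the
    display screen 10, which lies at depth d1 + d2 from the eyes.

    d1: first distance (user's eyes to touch panel 21)
    d2: second distance (touch panel 21 to display screen 10)
    """
    x1p, y1p, z1p = touch_xyz
    scale = (d1 + d2) / x1p  # similar triangles along the eye-touch line
    return (x1p * scale, y1p * scale, z1p * scale)
```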
Referring to
Wherein dy represents a Y-axis component of a distance between the user's eyes in the image formed on the image sensor 222 and a center of the image, and dz represents a Z-axis component of that distance. f represents a focal length of the lens 221 (usually set by the manufacturer). θx represents an angle between the X-axis and an imaginary line connecting the user's eyes and the lens 221, and θz represents an angle between the Z-axis and the imaginary line connecting the user's eyes and the lens 221.
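Under the standard pinhole model consistent with these definitions, an offset of the eyes' image from the image center corresponds to a ray angle of arctan(offset / f). The following is a minimal sketch under that assumption, not the filing's exact formulas; the function name is illustrative:

```python
import math

def eye_ray_angles(dy, dz, f):
    # Pinhole model: an offset d of the eyes' image from the image center
    # corresponds to a ray tilted arctan(d / f) away from the optical axis.
    theta_x = math.atan2(dy, f)  # tilt of the eye-lens line in the X-Y plane
    theta_z = math.atan2(dz, f)  # tilt of the eye-lens line in the X-Z plane
    return theta_x, theta_z
```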
Furthermore, the coordinate of the touch position T on the operation surface 210 is defined as (x1, y1, z1) with respect to the lens 221. The coordinate of the touch position T in the 3D coordinate system X-Y-Z is defined as (x1′, y1′, z1′). Then the coordinate (x1′, y1′, z1′) of the touch position T can be calculated as follows:
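One consistent reading of this conversion, assuming the X-axis runs from the user's eyes perpendicular to the touch panel 21 so that the lens 221 (coplanar with the operation surface 210) lies at depth equal to the first distance d1, is a translation of the lens-relative coordinate into the eye-origin frame. A sketch under those assumptions, with illustrative names:

```python
import math

def touch_in_eye_frame(x1, y1, z1, d1, theta_x, theta_z):
    # Lens center in the eye-origin frame: depth d1 along X, with lateral
    # offsets recovered from the ray angles measured by the camera.
    lens_y = d1 * math.tan(theta_x)
    lens_z = d1 * math.tan(theta_z)
    # (x1, y1, z1) is measured relative to the lens 221; translating it
    # yields the eye-origin coordinate (x1', y1', z1'). Because the touch
    # position lies on the operation surface 210, x1 is approximately zero.
    return (d1 + x1, lens_y + y1, lens_z + z1)
```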
Referring to
In other exemplary embodiments, the input position determining module 312 can instead establish the 3D coordinate system X-Y-Z by setting the position of the eye tracker 22 as the origin. In this case, the calculations of the coordinate of the touch position and the coordinate of the input position are similar to the calculation described above.
At block 33, the command generating module 313 generates a touch command corresponding to the obtained type of the touch operation.
At block 34, the transmitting control module 314 transmits the generated touch command and the determined input position to the display screen 10, thereby controlling the display screen 10 to perform the touch operation at the determined input position. In at least one exemplary embodiment, the transmitting control module 314 can transmit the generated touch command and the determined input position to the display screen 10 in a wireless manner (for example, BLUETOOTH or WI-FI).
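The filing names only the transport, not a message format. As a minimal sketch of the transmission over WI-FI, assuming the display screen 10 accepts one JSON message per TCP connection; the host, port, and field names are hypothetical:

```python
import json
import socket

def send_touch_command(command, position, host="192.168.1.50", port=9000):
    # Serialize the generated touch command and the determined input position,
    # then deliver them to the listener assumed to run on the display side.
    payload = json.dumps({"command": command, "position": position}).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload)

# For example: send_touch_command("double_click", {"x": 512, "y": 384})
```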
For example, when the user wants to double-click an icon on the display screen 10 to run a corresponding application program, the user can first find the icon on the display screen 10 by staring through the touch panel 21, and double-click the corresponding touch position on the touch panel 21 along the user's line of sight. The touch position is an intersection point of the touch panel 21 and an imaginary line connecting the user's eyes and the icon. Then, the device 30 can control the display screen 10 to run the application program corresponding to the icon.
With the above configuration, the user can first find the desired input position on the display screen 10 through the touch panel 21, and then perform a touch operation at the corresponding touch position on the touch panel 21 along the user's line of sight. The device 30 can synchronize the touch operation to the input position on the display screen 10 and control the display screen 10 to perform the touch operation at the input position.
Even though information and advantages of the present embodiments have been set forth in the foregoing description, together with details of the structures and functions of the present embodiments, the disclosure is illustrative only. Changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the present embodiments to the full extent indicated by the plain meaning of the terms in which the appended claims are expressed.
Number | Date | Country | Kind |
---|---|---|---
201710619146.8 | Jul 2017 | CN | national |