1. Field of the Invention
The present invention relates to the technical field of image processing and, more particularly, to an optical touch screen system and method for recognizing relative distance of objects.
2. Description of Related Art
Currently, touch screens (i.e., touch panels) are in widespread use, allowing an operation to be performed by directly touching the screen with an object or a finger instead of pressing a mechanical button. When a user touches a picture on the screen, a sensing feedback system implemented on the screen drives the corresponding functions based on pre-programmed code, and the screen presents colorful audiovisual effects, so that the user can fully control a human-machine interface.
There are several types of touch screens available in the market: resistive, capacitive, acoustic, and optical touch screens. An optical touch screen uses an optical sensor to receive reflected light and thereby determine the position of an object entering a touch area.
To improve positioning accuracy, a method has been proposed that uses two lighting devices 110 and two optical sensors 130 arranged in two pairs.
When the touching object is a single object, at least two optical sensors 130 are required for accurate positioning; when the touching object contains two objects, three optical sensors 130 are required. Similarly, three objects require four optical sensors 130, and so on. That is, the number of reference points increases with the number of objects, so the number of optical sensors 130 required also increases.
However, to save cost, a typical optical touch input device includes only two optical sensors 130, and in this case the accuracy of recognizing two or more objects is considerably reduced.
When the touching object contains two objects, an optical touch input device with two optical sensors 130 obtains two reflected images at each optical sensor 130, as shown in
An optical touch input device with two optical sensors 130 achieves an accuracy of 100% for a single touching object, 50% for two touching objects, 33.3% for three touching objects, 25% for four touching objects, and so on. That is, the accuracy decreases as the number of touching objects increases.
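The pattern above can be checked with a short sketch (the helper names are illustrative): with two sensors, n objects produce n×n candidate intersections, of which only n are real touches, giving an accuracy of 1/n.

```python
def candidate_points(n_objects):
    # Each of the two optical sensors sees n_objects reflected images;
    # pairing every image from one sensor with every image from the
    # other yields n_objects * n_objects candidate intersections.
    return n_objects * n_objects

def accuracy(n_objects):
    # Only n_objects of the candidates are real touches; the rest are
    # ghost points, so the recognition accuracy is n / n^2 = 1 / n.
    return n_objects / candidate_points(n_objects)

for n in (1, 2, 3, 4):
    print(f"{n} object(s): accuracy {accuracy(n):.1%}")
    # prints 100.0%, 50.0%, 33.3%, 25.0% in turn
```

This reproduces the figures cited in the paragraph above and makes clear why accuracy degrades as touches are added.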
In order to avoid mistakes caused by the ghost points, the conventional technique requires subsequent processing after the images are obtained. In this subsequent processing, known ghost-point recognition methods include:
(1) width determination, which uses the sensed image to decide the width of the light beam produced by the reflected light and uses the width ratio to decide the distance of the touching object; this method easily makes a wrong decision when the touching object has a uniform width or an overly large width error at different angles, for example when the width error of a finger at different angles exceeds 20%;
(2) brightness-level distribution and statistics, in which the distance of the object is determined by analyzing the ratio of grey scale to maximum brightness, since a grey-scale effect is produced when the touching object reflects light; here the error increases as the curvature of the object's surface increases; and
(3) tilting the optical sensor into an inclined top-view direction so that its image presents a solid (three-dimensional) effect from which the distance of the object can be decided; however, due to the inclination, interference may be caused by the reflective light source when the light is reflected from the surface.
Therefore, it is desirable to provide an improved optical touch screen system and method for recognizing relative distance of objects, so as to mitigate and/or obviate the aforementioned problems.
The object of the present invention is to provide an optical touch screen system, which can accurately filter out ghost points and effectively increase the accuracy of multi-point touching.
According to a feature of the invention, an optical touch screen system is provided, which includes a display screen, a first lighting and sensing module, a second lighting and sensing module, and a processor. The display screen displays visual prompts to solicit actions from a user. The first and the second lighting and sensing modules are mounted at two adjacent corners of the display screen to form first and second visual fields above the display screen, respectively, so as to form a touch area on the display screen, wherein the first and the second lighting and sensing modules detect an object entering the touch area and generate a first electrical position signal and a second electrical position signal, respectively. The processor is connected to the first and the second lighting and sensing modules for recognizing a position of the object based on the first electrical position signal and the second electrical position signal, thereby achieving a human-machine control. The first lighting and sensing module has a first lighting device mounted at a first mount height from the display screen to illuminate the surface of the display screen at an auxiliary angle of a first mount angle. The second lighting and sensing module has a second lighting device mounted at a second mount height from the display screen to illuminate the surface of the display screen at an auxiliary angle of a second mount angle.
According to another feature of the invention, a method for recognizing a relative distance of an object in an optical touch screen system is provided, which is used with a display screen to recognize a position where a user touches the display screen, wherein a first lighting and sensing module and a second lighting and sensing module are mounted on two adjacent corners of the display screen. The first lighting and sensing module has a first lighting device and a first sensing device, and the second lighting and sensing module has a second lighting device and a second sensing device. The first lighting device is mounted at a first mount height from the display screen and has an axis of a lighting plane forming a first mount angle θ1 with respect to the display screen. The second lighting device is mounted at a second mount height from the display screen and has an axis of a lighting plane forming a second mount angle θ2 with respect to the display screen. The method includes the steps of: (A) using the first and the second lighting devices to form first and second visual fields above the display screen, respectively, so as to form a touch area on the display screen by intersecting the first visual field with the second visual field; (B) using the first and the second sensing devices to generate first and second electrical position signals for an object entering the touch area; and (C) using a processor to calculate a position of the object based on the first and the second electrical position signals.
According to a further feature of the invention, an optical touch screen system is provided, which includes a display screen, a first lighting and sensing module, a second lighting and sensing module, and a processor. The display screen displays visual prompts to solicit actions from a user. The first and the second lighting and sensing modules are mounted at two adjacent corners of the display screen to form first and second visual fields above the display screen, respectively, so as to form a touch area on the display screen. A first electrical position signal and a second electrical position signal are generated when the first lighting and sensing module detects a first object entering the touch area, and a third electrical position signal and a fourth electrical position signal are generated when the second lighting and sensing module detects a second object entering the touch area. The processor is connected to the first and the second lighting and sensing modules for recognizing positions of the first and the second objects based on the first, the second, the third, and the fourth electrical position signals, so as to achieve a human-machine control. The first lighting and sensing module has a first lighting device mounted at a first mount height from the display screen to illuminate the surface of the display screen at an auxiliary angle of a first mount angle. The second lighting and sensing module has a second lighting device mounted at a second mount height from the display screen to illuminate the surface of the display screen at an auxiliary angle of a second mount angle.
Other objects, advantages, and novel features of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
The display screen 910 displays visual prompts to users for controlling the human-machine interface. In this embodiment, the display screen 910 is preferably an LCD. However, the operation principle of the optical touch screen system 900 according to the invention can be implemented on various screens without any adverse effect; thus, the display screen 910 can also be a CRT, LED, or plasma display screen.
The first lighting and sensing module 920 and the second lighting and sensing module 930 are mounted at two adjacent corners of the display screen 910 to thereby form a first visual field ξ1 and a second visual field ξ2 above the display screen 910, respectively. The first visual field ξ1 and the second visual field ξ2 intersect to form a touch area 950 above the display screen. The first lighting and sensing module 920 and the second lighting and sensing module 930 detect an object 960 entering the touch area 950, and generate a first electrical position signal and a second electrical position signal, respectively.
The first lighting device 921 and the second lighting device 931 are each preferably an LED light source, and their lighting paths are covered by the first mask 923 and the second mask 933, respectively, to generate directive light. The directive light of the first and the second lighting devices 921 and 931 directly illuminates the surface of the display screen 910 at an angle of depression (a first mount angle θ1 and a second mount angle θ2, respectively).
The first lens 927 and the second lens 937 are coupled to the plural rows of sensing units of the first sensing device 925 and the second sensing device 935, respectively, in order to pass light of a specific wavelength and thereby receive the reflected light from the object 960. An axis 1010 of the first lens 927 is parallel to the display screen 910, and an axis 1020 of the second lens 937 is parallel to the display screen 910.
The first sensing device 925 and the second sensing device 935 are each preferably a CMOS sensing device; alternatively, each can be a CCD sensing device. Each of the first sensing device 925 and the second sensing device 935 has plural rows of sensing units to sense the reflected light from the object 960 and generate a first sensing height H12 and a second sensing height H22, respectively. Since the first sensing device 925 and the second sensing device 935 generate the first sensing height H12 and the second sensing height H22, respectively, the resolution thereof can be, for example, 160×16, 160×32, or 640×32.
As shown in
The first mask 923 and the second mask 933 cover the lighting paths of the lighting devices, and the surface of the display screen 910 is directly illuminated by the lighting devices at an angle of depression (the first mount angle θ1 and the second mount angle θ2). When an object performs a touch-and-control action, the CMOS sensing devices 925, 935 receive the reflected light. Since the lighting devices illuminate at an angle (the first mount angle θ1 and the second mount angle θ2) relative to the display screen, the light reflected by the touching object is received by the CMOS sensing devices 925, 935 as an image in the form of a beam. The beam height increases as the touching object gets closer to the CMOS sensing devices 925, 935.
As shown in
where H11 indicates the first mount height, and d indicates the length of the touch area 950. In this embodiment, the length of the touch area 950 is taken as the farthest lighting distance of the first lighting device 921. In other embodiments, the diagonal length of the touch area 950 is used as the farthest lighting distance of the first lighting device 921; in this case, (d)2 in the above equation is replaced by (d)2+(w)2, where w indicates the width of the touch area 950.
Similarly, the second lighting device 931 is mounted at a second mount height H21 from the display screen 910 in order to illuminate the touch area 950. The second lighting device 931 has an axis of a lighting plane to form the second mount angle θ2 with respect to the display screen 910, where 0°≦θ2≦30°. The second mount angle θ2 can be expressed as:
where H21 indicates the second mount height, and d indicates the length of the touch area 950. In other embodiments, (d)2 in the above equation is replaced by (d)2+(w)2, where w indicates the width of the touch area 950.
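The relation H21 = d2*tan(θ2) given below implies θ = tan⁻¹(H/d); a minimal sketch of that angle computation, including the (d)2 to (d)2+(w)2 substitution for the diagonal case, follows. The numeric dimensions are illustrative assumptions, not values from the specification:

```python
import math

def mount_angle(mount_height, length, width=None):
    """Mount angle (degrees) of a lighting device at `mount_height` whose
    light reaches the farthest point of the touch area: the far edge at
    `length`, or the diagonal when `width` is given (the (d)^2 ->
    (d)^2 + (w)^2 substitution described above)."""
    farthest = math.hypot(length, width) if width is not None else length
    return math.degrees(math.atan2(mount_height, farthest))

# Illustrative numbers: a device 20 mm above a 400 mm x 300 mm touch area.
theta_edge = mount_angle(20.0, 400.0)         # far edge as farthest distance
theta_diag = mount_angle(20.0, 400.0, 300.0)  # 500 mm diagonal; smaller angle
```

Both results fall within the 0° to 30° range the description assigns to the mount angles; using the diagonal yields a slightly shallower angle, since the same height is spread over a longer reach.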
where H12 indicates the first sensing height, H11 indicates the first mount height of the first lighting device 921, and d1 indicates the distance between the first lighting device 921 and the intersection of the display screen with a light ray from the first lighting device 921. In this embodiment, d1 is equal to the length of the touch area 950; in other embodiments, d1 can be the diagonal length of the touch area 950. Here H11=d1*tan(θ1), and θ1 indicates the first mount angle.
Similarly, the distance D2 from the touching object 960 to the second lighting and sensing module 930 can be expressed as:
where H22 indicates the second sensing height, H21 indicates the second mount height of the second lighting device 931, and d2 indicates the distance between the second lighting device 931 and the intersection of the display screen with a light ray from the second lighting device 931, where H21=d2*tan(θ2) and θ2 indicates the second mount angle.
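One consistent reading of the geometry described above (the directive light descends linearly from the mount height at the module to the screen surface at distance d1 or d2, so the reflected-beam height shrinks linearly as the object moves away) gives the sketch below. The function name and the linear relation are illustrative assumptions, not the patent's claimed equations:

```python
def object_distance(sensing_height, mount_height, reach):
    """Distance from the touching object to a lighting and sensing module.

    Assumes the beam descends linearly from `mount_height` at the module
    to the screen surface at distance `reach`, so the reflected-beam
    height at an object D away is H = mount_height * (1 - D / reach).
    Solving for D gives the expression below (an illustrative reading of
    the geometry, not the specification's exact equation).
    """
    return reach * (1.0 - sensing_height / mount_height)

# A closer object intercepts the beam higher, so a larger sensing height
# maps to a smaller distance:
d_near = object_distance(15.0, 20.0, 400.0)  # high beam -> close object (100.0)
d_far = object_distance(5.0, 20.0, 400.0)    # low beam  -> far object  (300.0)
```

Note the two boundary cases agree with H11 = d1*tan(θ1): a sensing height equal to the mount height places the object at the module (D = 0), and a sensing height of zero places it at the far end of the reach.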
It is known from
Since the processor 940 receives the distance D1 from the object 960 to the first lighting and sensing module 920 and the distance D2 from the object 960 to the second lighting and sensing module 930, it is able to accurately calculate the position of the object 960 based on the distances D1 and D2. Therefore, the ghost points (C, D) in
To simplify the design of the first and the second lighting and sensing modules 920 and 930, the distances D1 and D2 need not be calculated inside the modules themselves. Instead, the first and the second lighting and sensing modules 920 and 930 output the first and the second sensing heights H12 and H22 as the first and the second electrical position signals, respectively.
The processor 940 is connected to the first and the second lighting and sensing modules 920 and 930 in order to generate the distances D1 and D2 for the object 960 according to the first and the second electrical position signals H12 and H22, so as to further generate the position of the object 960.
In this embodiment, the first mount height H11 and the second mount height H21 are determined by the mounting of the first lighting device 921 and the second lighting device 931. Once the first and the second lighting devices 921 and 931 are mounted, the first mount height H11 and the second mount height H21 are fixed. Accordingly, the first mount angle θ1 and the second mount angle θ2 are determined, and so are the distance d1 between the first lighting device 921 and the intersection of the display screen with a light ray from the first lighting device 921 and the distance d2 between the second lighting device 931 and the intersection of the display screen with a light ray from the second lighting device 931. Therefore, only the first sensing height H12 and the second sensing height H22 are needed to calculate the distance D1 from the touching object 960 to the first lighting and sensing module 920 and the distance D2 from the touching object 960 to the second lighting and sensing module 930.
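Given the two distances D1 and D2, the processor can recover the object's position by intersecting two circles centered on the modules. The specification does not spell out this computation, so the following is a hedged sketch under the assumption that the two modules sit at (0, 0) and (separation, 0) on the two adjacent corners:

```python
import math

def locate(d1, d2, separation):
    """Intersect two circles centered at the assumed module positions
    (0, 0) and (separation, 0) with radii d1 and d2; return the
    intersection lying inside the touch area (y >= 0)."""
    x = (separation ** 2 + d1 ** 2 - d2 ** 2) / (2.0 * separation)
    y_squared = d1 ** 2 - x ** 2
    if y_squared < 0:
        raise ValueError("inconsistent distances: the circles do not meet")
    return x, math.sqrt(y_squared)

# An object at (100, 200) with modules 300 units apart is recovered
# from its two distances alone:
d1 = math.hypot(100.0, 200.0)
d2 = math.hypot(300.0 - 100.0, 200.0)
x, y = locate(d1, d2, 300.0)  # -> approximately (100.0, 200.0)
```

Because the touch area lies entirely on one side of the line joining the two modules, the second circle intersection (y < 0) can be discarded, which is why a single coordinate pair suffices.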
First, in step (A), the first and the second lighting devices 921 and 931 are used to form first and second visual fields ξ1 and ξ2 above the display screen, respectively, so as to form a touch area 950 on the display screen.
Next, in step (B), the first and the second sensing devices 925 and 935 are used to generate first and second electrical position signals for an object 960 entering the touch area 950.
Finally, in step (C), a processor 940 is used to calculate a position of the object 960 based on the first and the second electrical position signals.
The control flowchart of an optical touch screen method as shown in
For the condition in
where H12 indicates a first sensing height generated by the first lighting and sensing module 920 for the first object, H11 indicates the first mount height of the first lighting device 921, d1 indicates the length of the touch area 950, H11=d1*tan(θ1), and θ1 indicates a first mount angle formed between an axis of a lighting plane of the first lighting device 921 and the display screen. The distance D12 from the first object to the second lighting and sensing module 930 can be expressed as:
where H22 indicates a second sensing height generated by the second lighting and sensing module 930 for the first object, H21 indicates the second mount height of the second lighting device 931, d2 indicates the length of the touch area 950, H21=d2*tan(θ2), and θ2 indicates a second mount angle formed between an axis of a lighting plane of the second lighting device 931 and the display screen.
The distance D21 from the second object to the first lighting and sensing module 920 and the distance D22 from the second object to the second lighting and sensing module 930 can be expressed in the same manner, with the sensing heights that the first and the second lighting and sensing modules 920 and 930 generate for the second object substituted for H12 and H22, respectively.
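With the four distances D11, D12, D21, and D22 available, the ghost points become separable: of the four candidate intersections produced by two sensors viewing two objects, only the two real touches have positions consistent with a measured distance pair. A hedged sketch (the module positions, candidate generation, and function names are illustrative, not from the specification):

```python
import math

MODULES = [(0.0, 0.0), (300.0, 0.0)]  # assumed positions of the two modules

def exclude_ghosts(candidates, measured_pairs, modules, tol=1e-6):
    """Keep only candidate points whose distances to the two modules match
    one of the measured (D_to_module1, D_to_module2) pairs within `tol`."""
    real = []
    for p in candidates:
        dists = tuple(math.dist(p, m) for m in modules)
        if any(max(abs(d - md) for d, md in zip(dists, pair)) <= tol
               for pair in measured_pairs):
            real.append(p)
    return real

touches = [(100.0, 200.0), (220.0, 120.0)]  # the two real touch points
# Ghost points: intersections of crossed sight rays (module 1's ray to one
# touch with module 2's ray to the other), computed for this example.
ghosts = [(900 / 7, 1800 / 7), (3300 / 17, 1800 / 17)]
measured = [tuple(math.dist(t, m) for m in MODULES) for t in touches]

print(exclude_ghosts(touches + ghosts, measured, MODULES))
# keeps only the two real touches
```

The crossed-ray intersections lie on valid sight lines from both sensors, which is why angle information alone cannot reject them; their distances to the modules, however, match no measured pair, so the distance test above filters them out.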
Existing optical touch techniques use two CMOS sensing devices to collect images for calculating the positions of two touching objects. Each CMOS sensing device obtains two vector results derived from the reflected light sources; combined, the two CMOS sensing devices yield four vector intersections, two of which are real and indicate the accurate coordinates of the objects, while the other two are ghost points. If the ghost points are incorrectly determined, the hand gesture may be incorrectly interpreted. Therefore, the invention changes the incident angles of the light sources and masks the undesired light so that the reflected images carry image-height information for a subsequent solid-image conversion, thereby excluding the ghost points and obtaining the accurate position of a touching object. Thus, the accuracy is effectively improved, even for multiple touching objects. In addition, no additional hardware, such as extra expensive CMOS sensing devices, is required, and the data processing can be implemented directly in firmware.
As compared with the prior art, the invention changes the incident angles of the light sources and masks the undesired light so that the reflected images carry image-height information when the light illuminates the objects, and further uses the first and the second sensing devices 925 and 935 to extract solid images containing the position information needed to exclude the ghost points in subsequent processing. Thus, the accuracy is effectively improved, even for multiple touching objects. In addition, no additional hardware, such as extra expensive CMOS sensing devices, is required, and the data processing can be implemented directly in the processor by firmware.
Although the present invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed.
Number | Date | Country | Kind |
---|---|---|---|
099112910 | Apr 2010 | TW | national |