The present invention relates to an optical touch system and method, and more particularly to an optical touch technology for controlling a plurality of light sources to emit light at different time points, and computing spatial coordinates of specific objects that are shown in captured images at the different time points.
With the constant progress of image display technology, the touch screen now advantageously allows a user to input data simply by touching the screen directly and, accordingly, has become a widely available display device in the market. The touch screen is also widely applied to various kinds of electronic products, such as point-of-sale terminals, tourist guide systems, automatic teller machines, and various industrial control systems. However, a touch screen employing the optical lens detection technique requires a relatively large space to satisfy the requirement for screen touch detection because of the relatively large size of its conventional control mechanism. In a conventional touch screen based on touch detection via optical lenses, at least two image capturing modules are mounted on the display panel, and a plurality of infrared light sources is mounted on an outer periphery of the display panel, so that the image capturing modules capture infrared images above the surface of the display panel. When an object touches the surface of the display panel, the object blocks the optical path via which the infrared rays emitted from the infrared light sources are projected to an optical reflector around the display panel. As a result, dark areas are produced on the optical reflector and captured by the image capturing modules. Then, by way of a triangulation algorithm, virtual rays projected from the image capturing modules to the touching object may be simulated, and the intersection of the virtual rays indicates the position being touched by the object. Therefore, the coordinates of the touch position may be computed in the above-described manner.
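The triangulation described above may be sketched as follows. This is an illustrative simplification, not the disclosed implementation: it assumes each image capturing module reports the angle of the virtual ray toward the touching object in the panel's coordinate frame, and the function name is hypothetical.

```python
import math

def ray_intersection(cam1, angle1, cam2, angle2):
    """Intersect two rays cast from the positions of two image capturing
    modules at the given angles (radians, panel coordinate frame)."""
    # Direction vectors of the two virtual rays.
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    # Solve cam1 + t1*d1 == cam2 + t2*d2 for t1 via a 2x2 cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # rays are parallel: modules and object are collinear
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])
```

With modules at two corners of the panel, a single touch is recovered as the unique intersection of the two rays; the parallel case corresponds to the collinear configuration noted later in the description.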
However, when the surface of the display panel is touched by a plurality of objects, a plurality of virtual rays may be simulated from each of the image capturing modules. Under this condition, the number of intersections of the virtual rays is larger than the number of the objects, and it is difficult to accurately determine the exact touch positions, often resulting in wrong touch position determination. For example, when there are two touch positions on the display panel, each of the two image capturing modules captures two dark areas on the optical reflector, from which two rays are simulated per module, for a total of four rays. Therefore, there would be four ray intersections and, accordingly, two possible sets of touch positions. In other words, the probability of wrong touch position determination is 50% under this situation. With the popularization of touch devices, the demand for multi-touch screens also increases rapidly. It is therefore desirable to work out a way to increase the accuracy of touch position determination for multi-touch input.
In view of the aforementioned problems of the prior art, a first object of the present invention is to provide an optical touch system that enables increased accuracy of touch position determination for multi-touch input.
A second object of the present invention is to provide an optical touch method, with which touch positions on a multi-touch screen may be more accurately determined.
To achieve the first object, the optical touch system according to a preferred embodiment of the present invention comprises at least one optical reflector, a plurality of image capturing modules, a plurality of first light-emitting modules, a second light-emitting module, a control module, and a data computing module. The at least one optical reflector, the image capturing modules, the first light-emitting modules, and the second light-emitting module are disposed on an outer periphery of a coordinate detection zone. The image capturing modules capture images formed by at least one object on the coordinate detection zone. The control module controls the first light-emitting modules to emit light at a first time point, and controls the image capturing modules to capture a first image at the first time point respectively. The control module also controls the second light-emitting module to emit light at a second time point, and controls at least one of the image capturing modules to capture at least one second image at the second time point. The data computing module computes a coordinate value of the at least one object on the coordinate detection zone according to positions of the at least one object in each of the first images and the at least one second image.
Preferably, the coordinate detection zone is a surface of a display screen.
When the number of the at least one object is greater than one, the data computing module computes a plurality of object coordinates sets based on positions of the plurality of objects in each of the plurality of first images, and then determines one of the plurality of object coordinates sets as the coordinates of the plurality of objects based on positions of the plurality of objects in the at least one second image.
Preferably, the data computing module computes a plurality of rays emitted from the plurality of image capturing modules based on the positions of the plurality of objects in each of the plurality of first images and positions of the plurality of image capturing modules on the coordinate detection zone, and the data computing module also computes coordinates of intersections among the plurality of rays and groups the coordinates of the intersections into the plurality of object coordinates candidate sets.
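The grouping of ray intersections into candidate sets may be sketched as follows. This is an illustrative simplification under stated assumptions: each candidate set pairs every ray of the first module with a distinct ray of the second module (one permutation per candidate set), and the function names are hypothetical.

```python
import math
from itertools import permutations

def intersect(cam1, a1, cam2, a2):
    """Intersection of two rays from module positions cam1, cam2 at angles a1, a2."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(den) < 1e-9:
        return None  # parallel rays: no usable intersection
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t = (dx * d2[1] - dy * d2[0]) / den
    return (cam1[0] + t * d1[0], cam1[1] + t * d1[1])

def coordinate_candidate_sets(cam1, angles1, cam2, angles2):
    """Group the ray intersections into object coordinates candidate sets.
    Each permutation of the second module's rays yields one candidate set;
    with n objects this enumerates n! candidate sets among n*n intersections."""
    sets_ = []
    for perm in permutations(angles2):
        pts = [intersect(cam1, a1, cam2, a2) for a1, a2 in zip(angles1, perm)]
        if all(p is not None for p in pts):
            sets_.append(pts)
    return sets_
```

For two touches this yields exactly the two possible sets of touch positions described in the background section, only one of which corresponds to the real objects.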
Preferably, the control module controls the second light-emitting module to turn off while controlling the plurality of first light-emitting modules to emit light at the first time point. The control module also controls the plurality of first light-emitting modules to turn off while controlling the second light-emitting module to emit light at the second time point.
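The alternating control of the two light source groups may be sketched as follows. The driver objects and their on()/off()/capture() methods are hypothetical interfaces assumed for illustration; the disclosure does not specify a software API.

```python
def capture_cycle(first_leds, second_led, cameras):
    """One acquisition cycle: the two groups of light-emitting modules are
    driven in alternation so that each exposure sees exactly one
    illumination pattern, as the control module does at the first and
    second time points."""
    # First time point: first light-emitting modules on, second module off.
    second_led.off()
    for led in first_leds:
        led.on()
    first_images = [cam.capture() for cam in cameras]
    # Second time point: second module on, first modules off.
    for led in first_leds:
        led.off()
    second_led.on()
    second_image = cameras[0].capture()  # at least one module suffices here
    return first_images, second_image
```

Turning the idle group off at each time point prevents it from adding extra dark areas to the image captured at that time point.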
To achieve the second object, the optical touch method according to the present invention comprises the following steps: (1) providing at least one optical reflector, a plurality of image capturing modules, a plurality of first light-emitting modules, and a second light-emitting module on an outer periphery of a coordinate detection zone; and using the plurality of image capturing modules to capture images formed by at least one object on the coordinate detection zone; (2) controlling the plurality of first light-emitting modules to emit light at a first time point, and controlling the plurality of image capturing modules to capture a first image at the first time point respectively; (3) controlling the second light-emitting module to emit light at a second time point, and controlling at least one of the plurality of image capturing modules to capture at least one second image at the second time point; and (4) computing coordinates of the at least one object on the coordinate detection zone based on positions of the at least one object in each of the plurality of first images and the at least one second image.
Preferably, the coordinate detection zone is a surface of a display screen.
When the number of the at least one object is greater than one, the optical touch method further comprises the following steps: computing a plurality of object coordinates sets based on positions of the plurality of objects in each of the plurality of first images, and then determining one of the plurality of object coordinates sets as the coordinates of the plurality of objects based on positions of the plurality of objects in the at least one second image.
Preferably, the optical touch method further comprises the following steps: computing a plurality of rays emitted from the plurality of image capturing modules based on the positions of the plurality of objects in each of the plurality of first images and positions of the plurality of image capturing modules on the coordinate detection zone; computing coordinates of intersections among the plurality of rays; and grouping the coordinates of the intersections into the plurality of object coordinates candidate sets.
Preferably, the optical touch method further comprises the step of controlling the second light-emitting module to turn off while controlling the plurality of first light-emitting modules to emit light at the first time point; and controlling the plurality of first light-emitting modules to turn off while controlling the second light-emitting module to emit light at the second time point.
With the above arrangements, the optical touch system and method according to the present invention provide the advantage of increased accuracy of touch position determination for multi-touch input.
The structure and the technical means adopted by the present invention to achieve the above and other objects can be best understood by referring to the following detailed description of the preferred embodiments and the accompanying drawings, wherein
Please refer to
The first light-emitting modules 161, 162 are controlled by the control module 10 to emit light at a first time point. At this point, the image capturing modules 12, 14 are able to capture images that have different brightness from the background. When the surface of the display screen 21 is touched by an object, the object blocks the optical paths of light emitted from the first light-emitting modules 161, 162 to the optical reflectors 181˜183, and the image capturing modules 12, 14 capture dark areas formed by the object in the first images 13, 15. For example, when the display screen 21 is touched by two objects so that two touch positions 191, 192 are produced on the display screen 21, there are two dark areas shown in the content of each of the first images 13, 15. However, it is noted that, when the coordinate detection zone is touched by a plurality of objects, there is a chance that the number of the dark areas in the images captured by the image capturing modules is smaller than the number of the objects touching the coordinate detection zone due to the image capturing angle, such as in the case where the plurality of objects and one of the image capturing modules are collinear.
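Locating the dark areas in a captured first image may be sketched as follows. This is a simplified model assuming the captured image reduces to a single scan line of brightness values across the reflectors; the function name and threshold scheme are illustrative assumptions, not from the disclosure.

```python
def find_dark_areas(row, threshold):
    """Locate dark areas (runs of below-threshold pixels) in one image row.
    Returns (start, end) pixel index pairs, with end exclusive."""
    areas, start = [], None
    for i, v in enumerate(row):
        if v < threshold and start is None:
            start = i                      # a dark run begins
        elif v >= threshold and start is not None:
            areas.append((start, i))       # the dark run ends
            start = None
    if start is not None:
        areas.append((start, len(row)))    # dark run reaches the image edge
    return areas
```

Two touching objects would normally produce two such runs per first image, unless they are collinear with the capturing module as noted above.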
The data computing module 11 further includes an image processing unit 111, a set of image capturing module parameters 112, a coordinate computing unit 113, and a determining unit 114. When the first images 13, 15 are received by the data computing module 11, the image processing unit 111 first analyzes the positions of the dark areas in the first images 13, 15. The image capturing module parameters 112 include at least a view angle parameter of the image capturing modules 12, 14, as well as the positions and resolutions of the image capturing modules on the coordinate detection zone. Since the first images 13, 15 are optical signals received by the image capturing modules 12, 14 respectively within a view angle 121, the positions of the objects blocking the optical paths relative to the image capturing modules 12, 14 may be derived from the positions of the dark areas in the first images 13, 15. Since the dark areas are formed when the optical paths are blocked by the objects, the coordinate computing unit 113 is able to compute the rays projected to the objects blocking the optical paths based on the positions of the dark areas and the image capturing module parameters 112. That is, the coordinate computing unit 113 is able to determine on which rays the objects are located. Since the same object produces a dark area in each of the first images captured by the different image capturing modules 12, 14, the coordinate computing unit 113 may obtain the coordinates of the object by calculating the intersection of the rays projected from the image capturing modules 12, 14 to the object. Since the above-described way of computing the coordinates of the object is the triangulation algorithm known by one having ordinary skill in the art, it is not described in detail herein.
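The mapping from a dark-area position in a first image to a ray angle may be sketched as follows. This assumes a linear model in which the view angle spans the image columns uniformly; a real module would require lens calibration, and the parameter names are illustrative.

```python
def pixel_to_angle(pixel, resolution, view_angle, boresight):
    """Map a dark-area pixel column to a ray angle in the coordinate
    detection zone, using the view angle parameter, resolution, and
    orientation (boresight) from the image capturing module parameters.
    Angles here are in degrees for readability."""
    fraction = pixel / (resolution - 1)            # 0.0 .. 1.0 across the image
    return boresight - view_angle / 2 + fraction * view_angle
```

The coordinate computing unit can then cast a ray from the module's known position at the derived angle, and intersect rays from two modules to triangulate the object.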
When the display screen 21 is touched by a plurality of objects at the same time, such as two objects as shown in
The determining unit 114 of the data computing module 11 is able to determine which of the two object coordinates sets is to be selected according to the positions of the dark areas in any one of the second images 23, 25. Please refer to
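The selection performed by the determining unit may be sketched as follows. This is a simplified geometry assumed for illustration: the second light-emitting module casts each object's shadow onto a single straight reflector along y = reflector_y, and the second image yields the angles at which one capturing module sees those shadows; the function names and the scoring rule are hypothetical.

```python
import math

def select_candidate_set(candidates, light, cam, reflector_y, observed):
    """Choose the object coordinates candidate set consistent with the
    second image: predict where each candidate point's shadow falls on the
    reflector under the second light source, and pick the set whose
    predicted shadow angles best match the observed dark-area angles."""
    def shadow_angle(p):
        # Project the ray light -> object onto the reflector line y = reflector_y.
        t = (reflector_y - light[1]) / (p[1] - light[1])
        sx = light[0] + t * (p[0] - light[0])
        return math.atan2(reflector_y - cam[1], sx - cam[0])
    def mismatch(points):
        preds = sorted(shadow_angle(p) for p in points)
        return sum(abs(a - b) for a, b in zip(preds, sorted(observed)))
    return min(candidates, key=mismatch)
```

Because the second light source sits at a different position from the first ones, the real touch positions and the spurious intersection pair predict different shadow angles, which is what breaks the 50% ambiguity.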
While the optical touch system of the present invention employs the architecture of a conventional dual image capturing module design, it further adds the second light-emitting module 162 and the determining unit 114 of the data computing module 11 to the conventional architecture, so that the optical touch system of the present invention provides increased accuracy of touch position determination for multi-touch input.
Please refer to
In a second step 42, the first light-emitting modules are controlled to emit light at a first time point, and the image capturing modules are controlled to capture a first image respectively at the first time point. Meanwhile, the second light-emitting module is controlled to stop emitting light at the first time point, so as to avoid any adverse influence on the number and the positions of dark areas formed in the first images. In a third step 43, the second light-emitting module is controlled to emit light at a second time point and at least one of the image capturing modules is controlled to capture at least one second image at the second time point. Meanwhile, the first light-emitting modules are controlled to stop emitting light at the second time point, so as to avoid any adverse influence on the number and the positions of dark areas formed in the at least one second image.
In a fourth step 44, at least one object coordinates candidate set is computed based on the positions of dark areas formed by the at least one object in the first images. In the present invention, the positions of the dark areas formed by the at least one object in the first images are obtained through an image processing conducted on the first images. In a fifth step 45, it is determined whether there is more than one object coordinates candidate set. If yes, a sixth step 46 is executed; or if no, a seventh step 47 is executed. In the seventh step 47, the object coordinates candidate set is output as the coordinates of the object.
When there is more than one object coordinates candidate set, in the sixth step 46, according to the positions of the dark areas formed by the objects in the at least one second image, one of the multiple object coordinates candidate sets is determined as the coordinates of the objects. And then, the determined coordinates of the objects are output. It is noted that the first time point and the second time point mentioned in the second step 42 and the third step 43 are intended only to explain that the first images and the at least one second image are captured at different time points, and not to limit the sequential order of capturing the first and the second images. In other words, step 43 may be executed first, followed by step 42.
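The overall flow of steps 42 through 47 may be sketched as follows. The four callables stand in for the hardware and computing units described above and are assumptions for illustration; the disclosure does not prescribe a software interface.

```python
def optical_touch_method(capture_first, capture_second,
                         compute_candidates, resolve_with_second_image):
    """Flow of steps 42-47: capture the first images, compute the object
    coordinates candidate sets, and consult the second image only when
    more than one candidate set remains."""
    first_images = capture_first()                    # step 42
    candidates = compute_candidates(first_images)     # step 44
    if len(candidates) == 1:                          # step 45
        return candidates[0]                          # step 47: unambiguous
    second_image = capture_second()                   # step 43 (order may swap)
    return resolve_with_second_image(candidates, second_image)  # step 46
```

For a single touch (or any unambiguous case) the second image is not needed to output coordinates; the disambiguation path runs only for multi-touch input that yields multiple candidate sets.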
The present invention has been described with some preferred embodiments thereof and it is understood that many changes and modifications in the described embodiments can be carried out without departing from the scope and the spirit of the invention that is intended to be limited only by the appended claims.
Number | Date | Country | Kind
---|---|---|---
098132362 | Sep 2009 | TW | national
098139011 | Nov 2009 | TW | national