Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
The present disclosure relates to techniques for detecting and identifying objects on a touch surface.
To an increasing extent, touch-sensitive panels are being used for providing input data to computers, electronic measurement and test equipment, gaming devices, etc. The panel may be provided with a graphical user interface (GUI) for a user to interact with using e.g. a pointer, stylus or one or more fingers. The GUI may be fixed or dynamic. A fixed GUI may e.g. be in the form of printed matter placed over, under or inside the panel. A dynamic GUI can be provided by a display screen integrated with, or placed underneath, the panel or by an image being projected onto the panel by a projector.
There are numerous known techniques for providing touch sensitivity to the panel, e.g. by using cameras to capture light scattered off the point(s) of touch on the panel, by using cameras to directly observe the objects interacting with the panel, by incorporating resistive wire grids, capacitive sensors, strain gauges, etc. into the panel.
In one category of touch-sensitive panels known as ‘above surface optical touch systems’ and known from e.g. U.S. Pat. No. 4,459,476, a plurality of optical emitters and optical receivers are arranged around the periphery of a touch surface to create a grid of intersecting light paths (otherwise known as detection lines) above the touch surface. Each light path extends between a respective emitter/receiver pair. An object that touches the touch surface will block or attenuate some of the light paths. Based on the identity of the receivers detecting a blocked light path, a processor can determine the location of the intersection of the blocked light paths.
For most touch systems, a user may place a finger onto the surface of a touch panel to register a touch. Alternatively, a stylus may be used. A stylus is typically a pen shaped object with at least one end configured to be pressed against the surface of the touch panel. An example of a stylus according to the prior art is shown in
PCT/SE2016/051229 describes an optical IR touch sensing apparatus configured to determine a position of a touching object on the touch surface and an attenuation value corresponding to the attenuation of the light resulting from the object touching the touch surface. Using these values, the apparatus can differentiate between different types of objects, including multiple stylus tips, fingers, and palms. The differentiation between the object types may be determined by a function that takes into account how the attenuation of a touching object varies across the touch surface, compensating for e.g. light field height, detection line density, detection line angular density, etc.
For larger objects applied to the touch surface, such as palms and board erasers, it is possible to use an attenuation map of the touch surface to determine an approximate shape of the object. For example, where an optical IR touch sensing apparatus is used, an attenuation map may be generated showing an area on the touch surface where the light is highly attenuated. The shape of an attenuated area may then be used to identify the position and shape of the touching object. In a technique known in the prior art, a rough shape of the large object can be determined by identifying all points with an attenuation above a threshold value. An approximate centroid and orientation of the large object may then be determined using the image moments of the identified points. Such techniques are described in “Image analysis via the general theory of moments” by Michael Reed Teague. Once the centroid and orientation of the large object are determined, the width and height of the large object (e.g. a board eraser) can be found by determining the extent of the identified pixels in the direction of the orientation angle and the normal of the orientation angle.
However, for smaller objects, use of an attenuation map to determine object characteristics like size, orientation, and shape becomes very difficult due to the low resolution of the attenuation map. In particular, a stylus tip may present only a few pixels of interaction on an attenuation map.
Therefore, what is needed is a method of determining object characteristics that overcomes the above limitations.
It is an objective of the disclosure to at least partly overcome one or more of the above-identified limitations of the prior art.
One or more of these objectives, as well as further objectives that may appear from the description below, are at least partly achieved by means of a method for data processing, a computer readable medium, devices for data processing, and a touch-sensing apparatus according to the independent claims, embodiments thereof being defined by the dependent claims.
A first embodiment provides a touch sensing apparatus, comprising: a touch surface, a plurality of emitters arranged around the periphery of the touch surface to emit beams of light such that one or more objects touching the touch surface cause an attenuation or occlusion of the light; a plurality of light detectors arranged around the periphery of the touch surface to receive light from the plurality of emitters on a plurality of light paths, wherein each light detector is arranged to receive light from more than one emitter; and a processing element configured to: determine, based on output signals of the light detectors, a transmission value for each light path; process the transmission values to determine an object reference point on the touch surface where the light is attenuated or occluded by an object, determine a region around the object reference point, determine a plurality of light paths intersecting the region, determine a statistical measure for each of at least one light path variable of the plurality of light paths intersecting the region, including at least the transmission values of the light paths, and determine one or more characteristics of the object in dependence on the at least one statistical measure.
A second embodiment provides a method in a touch sensing apparatus, said touch sensing apparatus comprising: a touch surface, a plurality of emitters arranged around the periphery of the touch surface to emit beams of light such that one or more objects touching the touch surface cause an attenuation or occlusion of the light; and a plurality of light detectors arranged around the periphery of the touch surface to receive light from the plurality of emitters on a plurality of light paths, wherein each light detector is arranged to receive light from more than one emitter; said method comprising: determining, based on output signals of the light detectors, a transmission value for each light path; processing the transmission values to determine an object reference point on the touch surface where the light is attenuated or occluded by an object, determining a region around the object reference point, determining a plurality of light paths intersecting the region, determining a statistical measure of values for each of at least one light path variable of the plurality of light paths intersecting the region, including at least the transmission values of the light paths, and determining one or more characteristics of the object in dependence on the at least one statistical measure.
Embodiments of the invention will now be described in more detail with reference to the accompanying schematic drawings.
The present disclosure relates to optical touch panels and the use of techniques for providing touch sensitivity to a display apparatus. Throughout the description the same reference numerals are used to identify corresponding elements.
In addition to having their ordinary meanings, the following terms can also mean:
A “touch object” or “touching object” is a physical object that touches, or is brought in sufficient proximity to, a touch surface so as to be detected by one or more sensors in the touch system. The physical object may be animate or inanimate.
An “interaction” occurs when the touch object affects a parameter measured by the sensor.
A “touch” denotes a point of interaction as seen in the interaction pattern.
A “light field” is the light flowing between an emitter and a corresponding detector. Although an emitter may generate a large amount of light in many directions, only the light measured by a detector from an emitter defines the light field for the emitter and detector.
Light paths 50 may conceptually be represented as “detection lines” that extend across the touch surface 20 to the periphery of touch surface 20 between pairs of emitters 30a and detectors 30b, as shown in
In one embodiment, the light paths are a set of virtual light paths converted from the actual light paths via an interpolation step. Such an interpolation step is described in PCT publication WO2011139213. The virtual light paths may be configured so as to match the requirements of certain CT algorithms, viz. algorithms designed for processing-efficient and/or memory-efficient and/or precise tomographic reconstruction of an interaction field. In this embodiment, any characteristics of the object are determined from a statistical measure of the virtual light paths intersecting the region.
As used herein, the emitters 30a may be any type of device capable of emitting radiation in a desired wavelength range, for example a diode laser, a VCSEL (vertical-cavity surface-emitting laser), an LED (light-emitting diode), an incandescent lamp, a halogen lamp, etc. The emitters 30a may also be formed by the end of an optical fibre. The emitters 30a may generate light in any wavelength range. The following examples presume that the light is generated in the infrared (IR), i.e. at wavelengths above about 750 nm. Analogously, the detectors 30b may be any device capable of converting light (in the same wavelength range) into an electrical signal, such as a photo-detector, a CCD device, a CMOS device, etc.
The detectors 30b collectively provide an output signal, which is received and sampled by a signal processor 140. The output signal contains a number of sub-signals, also denoted “transmission values”, each representing the energy of light received by one of light detectors 30b from one of light emitters 30a. Depending on implementation, the signal processor 140 may need to process the output signal for separation of the individual transmission values. The transmission values represent the received energy, intensity or power of light received by the detectors 30b on the individual detection lines 50. Whenever an object touches a detection line 50, the received energy on this detection line is decreased or “attenuated”. Where an object blocks the entire width of the detection line of an above-surface system, the detection line will be fully attenuated or occluded.
In one embodiment, the touch apparatus is arranged according to
In one embodiment, the top edge of reflector surface 80 is 2 mm above touch surface 20. This results in a light field 90 which is 2 mm deep. A 2 mm deep field is advantageous for this embodiment as it minimizes the distance that the object needs to travel into the light field to reach the touch surface and maximally attenuate the light. The smaller the distance, the shorter the time between the object entering the light field and contacting the surface. This is particularly advantageous for differentiating between large objects entering the light field slowly and small objects entering the light field quickly. A large object entering the light field will initially cause a similar attenuation to a smaller object fully extended into the light field. The shorter the distance the objects have to travel, the fewer frames are required before a representative attenuation signal for each object can be observed. This effect is particularly apparent when the light field is between 0.5 mm and 2 mm deep.
In an alternative embodiment shown in
The signal processor 140 may be configured to process the transmission values so as to determine a property of the touching objects, such as a position (e.g. in a x,y coordinate system), a shape, or an area. This determination may involve a straight-forward triangulation based on the attenuated detection lines, e.g. as disclosed in U.S. Pat. No. 7,432,893 and WO2010/015408, or a more advanced processing to recreate a distribution of attenuation values (for simplicity, referred to as an “attenuation pattern”) across the touch surface 20, where each attenuation value represents a local degree of light attenuation. The attenuation pattern may be further processed by the signal processor 140 or by a separate device (not shown) for determination of a position, shape or area of touching objects. The attenuation pattern may be generated e.g. by any available algorithm for image reconstruction based on transmission values, including tomographic reconstruction methods such as Filtered Back Projection, FFT-based algorithms, ART (Algebraic Reconstruction Technique), SART (Simultaneous Algebraic Reconstruction Technique), etc. Alternatively, the attenuation pattern may be generated by adapting one or more basis functions and/or by statistical methods such as Bayesian inversion. Examples of such reconstruction functions designed for use in touch determination are found in WO2009/077962, WO2011/049511, WO2011/139213, WO2012/050510, and WO2013/062471, all of which are incorporated herein by reference.
For the purposes of brevity, the term ‘signal processor’ is used throughout to describe one or more processing components for performing the various stages of processing required between receiving the signal from the detectors through to outputting a determination of touch, including touch co-ordinates, touch properties, etc. Although the processing stages of the present disclosure may be carried out on a single processing unit (with a corresponding memory unit), the disclosure is also intended to cover multiple processing units and even remotely located processing units. In an embodiment, the signal processor 140 can include one or more hardware processors 130 and a memory 120. The hardware processors can include, for example, one or more computer processing units. The hardware processors can also include microcontrollers and/or application specific circuitry such as ASICs and FPGAs. The flowcharts and functions discussed herein can be implemented as programming instructions stored, for example, in the memory 120 or a memory of the one or more hardware processors. The programming instructions can be implemented in machine code, C, C++, JAVA, or any other suitable programming languages. The signal processor 140 can execute the programming instructions and accordingly execute the flowcharts and functions discussed herein.
In step 410, the output signals of the light detectors 30b are received and sampled by the signal processor 140.
In step 420, the output signals are processed for determination of the transmission values (or ‘transmission signals’). As described above, the transmission values represent the received energy, intensity or power of light received by the detectors 30b on the individual detection lines 50.
In step 430, the signal processor 140 is configured to process the transmission values to determine the presence of one or more touching objects on the touch surface. In an embodiment, the signal processor 140 is configured to process the transmission values to generate a two-dimensional attenuation map of the attenuation field across the touch surface, i.e. a spatial distribution of attenuation values, in which each touching object typically appears as a region of changed attenuation. From the attenuation map, two-dimensional touch data may be extracted and one or more touch locations may be identified. The transmission values may be processed according to a tomographic reconstruction algorithm to generate the two-dimensional attenuation map of the attenuation field.
In one embodiment, the signal processor 140 may be configured to generate an attenuation map for the entire touch surface. In an alternative embodiment, the signal processor 140 may be configured to generate an attenuation map for a sub-section of the touch surface, the sub-section being selected according to one or more criteria determined during processing of the transmission values.
In an alternative embodiment, the signal processor 140 is configured to process the transmission values to determine the presence of one or more touching objects on the touch surface by determining intersections between attenuated or occluded detection lines, i.e. by triangulation. In yet another embodiment, the signal processor 140 is configured to process the transmission values to determine the presence of one or more touching objects on the touch surface using non-linear touch detection techniques such as those described in US patent application publication 20150130769 or 20150138105.
In step 440, the signal processor 140 is configured to determine an object reference point 250 for each touching object 210, 220. As shown in
In one embodiment, an image moment is applied to the attenuation map, or to a sub-region of the attenuation map, to determine a centroid of a detected touching object, for use as the object reference point. E.g. for a scalar attenuation map with pixel intensities I(x,y), the raw image moments Mij are calculated by:

Mij = Σx Σy x^i y^j I(x,y)
The centroid of the image moment may be calculated as:
{x̄, ȳ} = {M10/M00, M01/M00}
The object reference point 250 is then set to the co-ordinates of the centroid of the image moment.
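By way of a non-limiting illustration, the centroid computation may be sketched in Python as follows (the function name and array conventions are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

def moment_centroid(attn_map):
    """Object reference point as the centroid of raw image moments.

    attn_map is assumed to be a 2-D numpy array of attenuation values,
    e.g. a sub-region of the reconstructed attenuation map around a touch.
    """
    ys, xs = np.mgrid[0:attn_map.shape[0], 0:attn_map.shape[1]]
    m00 = attn_map.sum()          # zeroth raw moment M00
    m10 = (xs * attn_map).sum()   # first raw moment M10
    m01 = (ys * attn_map).sum()   # first raw moment M01
    return m10 / m00, m01 / m00   # centroid (x̄, ȳ)
```

Weighting each pixel by its attenuation in this way biases the reference point toward the most strongly interacting part of the touch.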
In another embodiment, signal processor 140 is configured to determine an object reference point 250 within the interaction area of the touching object by determining a local maximum (i.e. the point of highest attenuation) in the area of the attenuation map covered by the object. The identified maxima may be further processed for determination of a touch shape and a center position, e.g. by fitting a two-dimensional second-order polynomial or a Gaussian bell shape to the attenuation values, or by finding the ellipse of inertia of the attenuation values. There are also numerous other techniques as are well known in the art, such as clustering algorithms, edge detection algorithms, standard blob detection, watershed techniques, flood fill techniques, etc. Step 440 results in a collection of peak data, which may include values of position, attenuation, size, and shape for each detected peak. The attenuation value may be calculated from a maximum attenuation value or a weighted sum of attenuation values within the peak shape.
In another embodiment, signal processor 140 is configured to determine an object reference point 250 within the interaction area of a large touching object by selecting a point at random within the boundary of the touching object.
In an embodiment in which touching objects are identified using intersections between attenuated or occluded detection lines, i.e. by triangulation, the object reference point is set to the intersection point or average of intersection points, including a weighted average determined in dependence on the attenuation of the detection lines used for computing the intersection points.
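A minimal sketch of this triangulation variant follows, assuming each intersection point has already been assigned a weight derived from the attenuations of its two detection lines (names and conventions are illustrative):

```python
import numpy as np

def reference_point_from_intersections(points, weights):
    """Attenuation-weighted average of triangulated intersection points.

    points: (N, 2) array of intersection coordinates.
    weights: (N,) weights derived from the attenuation of the
    detection lines used for computing each intersection point.
    """
    w = np.asarray(weights, float)
    p = np.asarray(points, float)
    return (p * w[:, None]).sum(axis=0) / w.sum()
```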
In step 450, a region 200 is determined around object 210, 220. The region corresponds to an area of the touch surface at the point of, and surrounding, an object interacting with the touch surface. In one embodiment, region 200 may be a circular area, centred on object reference point 250 and having radius R. Radius R may be a predetermined length. Alternatively, radius R may be dynamically determined in dependence on properties of the touching object, including the contact area of the touching object, or a pressure exerted by the touching object on the touch surface. Other embodiments are envisioned in which the region has an alternative shape, e.g. a rectangular region defined by a width and a height and with object reference point 250 at its centre. Similarly, an ellipse may be used, defined by a width and a height and with object reference point 250 at its centre.
In step 460, a set of detection lines intersecting region 200 is determined. In an embodiment where region 200 is a circular area, centred on object reference point 250 and having radius R, the set of detection lines intersecting region 200 is determined to be the set of detection lines passing within distance R of the object reference point 250.
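A brute-force sketch of this test is given below (the endpoint arrays and function name are illustrative assumptions; the optimised scan described next reduces the number of lines that must be checked):

```python
import numpy as np

def lines_intersecting_region(emitters, detectors, ref_point, radius):
    """Step 460, brute force: indices of detection lines passing within
    `radius` of the object reference point.

    emitters/detectors: (N, 2) endpoint coordinates of the N detection
    lines; ref_point: (x, y) object reference point 250.
    """
    d = detectors - emitters                       # line directions
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)      # line normals
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # make unit length
    s = np.einsum('ij,ij->i', n, ref_point - emitters)  # signed distances
    return np.nonzero(np.abs(s) <= radius)[0]
```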
An embodiment of step 460 is now described. This embodiment is recognised as one of numerous possible solutions for determining detection lines intersecting region 200.
1) The emitter/detector pairs forming each detection line are analysed in a counterclockwise direction. As shown in
The detector counter is then incremented in a counterclockwise direction (i.e. di+1) and the detection line between emitter e0 and the incremented detector di+1 is analysed. This loop continues and the detection lines from the emitter are therefore analysed in a counterclockwise pattern until a detection line is identified that passes sufficiently close to the object reference point 250, i.e. distance 255 is within the specified radius R. The distance may be computed as:
s = dot product(normal[e0−di], object reference point − detection line position[e0−di])

where s is the shortest (signed) distance from the object reference point to the detection line, normal[e0−di] is a unit normal of the detection line, and detection line position[e0−di] is a position along the detection line.
Other search sequences are envisaged, including a binary search or a root-finding algorithm such as the secant method or Newton's method.
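The counterclockwise scan described above may be sketched as follows. This is a simplified sketch assuming the signed distance varies monotonically along the detector sequence, as it does for a convex periphery; the function names are illustrative:

```python
import numpy as np

def signed_distance(emitter, detector, point):
    """Signed shortest distance from `point` to the emitter-detector line."""
    d = np.asarray(detector, float) - np.asarray(emitter, float)
    n = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])  # unit normal
    return float(n @ (np.asarray(point, float) - np.asarray(emitter, float)))

def scan_lines_for_emitter(emitter, detectors_ccw, ref_point, radius):
    """Scan detectors counterclockwise; stop once region 200 is exited."""
    hits, inside = [], False
    for i, det in enumerate(detectors_ccw):
        if abs(signed_distance(emitter, det, ref_point)) <= radius:
            hits.append(i)      # detection line intersects region 200
            inside = True
        elif inside:
            break               # region entered and exited: stop early
    return hits
```

The early termination is what gives the reduction in computations compared to checking every emitter/detector pair.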
In embodiments where region 200 is non-circular, other techniques for determining intersection of the region by the detection line may be used, e.g. ray/polygon intersection algorithms as known in the art.
For all detection lines D0, the transmission values and reference values are determined. In one embodiment, the reference values are an estimated background transmission value for the detection line without any touching objects present. In an alternative embodiment, reference values can be a transmission value of the detection line recorded at a previous time, e.g. within 500 ms. Alternatively, reference values can be an average of transmission values over a period of time, e.g. within the last 500 ms. Such averaging techniques are described in U.S. Pat. No. 9,377,884.
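For illustration only, the attenuation of each detection line relative to its reference value may be computed as in the following sketch (the 1 − T/Tref convention is an assumption consistent with the transmission values described above):

```python
import numpy as np

def attenuation(transmission, reference):
    """Per-detection-line attenuation relative to a reference level.

    transmission: current transmission values, one per detection line.
    reference: background transmission without touches, e.g. a running
    average of the values observed within the last ~500 ms.
    """
    t = np.asarray(transmission, float)
    r = np.asarray(reference, float)
    return 1.0 - t / r  # 0 = unaffected line, 1 = fully occluded line
```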
As the emitter/detectors are processed in a circular order, a geometric consequence is that the detection line defined by [ej+1, dk] will be further away from the region 200 than [ej, dk]. Therefore, in a preferable configuration, when detection lines for the next emitter in the counterclockwise direction are analysed, the first detection line to be analysed may be [ej+1, dcw,j] and the analysis then continued in a counterclockwise direction. This allows a significant reduction in the number of computations required to determine the set of object boundary lines. As an alternative to selecting the next detection line in the counterclockwise direction, the next detection line to be analysed may be determined using a binary search or a root-finding algorithm.
The above steps are repeated for every emitter until every detection line intersecting region 200 is determined. It is noted that the order in which detection lines are analysed is arbitrary. It is possible to start with fixed emitters or detectors when searching for intersecting detection lines.
In step 470, the signal processor 140 determines a statistical measure of values for each of at least one light path variable of the set of detection lines intersecting region 200, the at least one variable including at least the transmission values of the detection lines.
In one embodiment, changes in attenuation are measured on a relatively short time scale, i.e. during the touch-down event. Such an attenuation map is described in U.S. Pat. No. 9,377,884.
The first threshold may be set using a threshold factor. The threshold factor may be adjusted in dependence on temporal information of the interactions of the touch system. In one embodiment, where a plurality of styli have been recently identified in an area, the threshold for detecting styli in that area may be reduced to make stylus classification more likely. Where a plurality of fingers have been recently identified in an area, the factor may be increased for determinations made in that area to make finger classification more likely. The factor may also be adjusted to ensure better performance when several proximal touches are detected, due to some detection lines passing more than one object.
In one embodiment, a first threshold is used to find the ratio of detection lines above and below the first threshold. This ratio is small for fingers and higher for pens.
For systems where the detection line width is similar to that of the pen, reconstructed peaks of the same attenuation (fingers and pens) have different attenuation histograms. Since a finger is generally bigger, it will have a lower attenuation per detection line (if the reconstructed attenuation is the same) than a pen (which attenuates fewer detection lines), even though the reconstructed attenuation value may end up at the same level.
In one embodiment, the ratio of attenuated detection lines (whose attenuation is above a threshold) compared to the number of detection lines passing through the radius may be used to determine an object type. E.g. if all detection lines that pass within 5 mm of the touch point are analysed, a finger can be expected to affect almost all of the detection lines (most fingers are larger than 10 mm in diameter). A stylus tip with a 2-4 mm contact diameter will only affect around 10-70% of the detection lines, depending on the width of the detection line. Consequently, in an embodiment, the object type may be determined to be a finger where the ratio of the number of affected detection lines vs total intersecting detection lines exceeds 0.7.
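A minimal sketch of such a ratio test follows. The attenuation threshold here is an illustrative placeholder, and the 0.7 ratio is taken from the example above; neither is a tuned parameter of the disclosure:

```python
def classify_by_ratio(attenuations, attn_threshold=0.1, ratio_threshold=0.7):
    """Classify a touch from the detection lines intersecting its region.

    attenuations: attenuation values of the intersecting detection
    lines. A finger (>10 mm wide) affects almost all lines through a
    5 mm-radius region; a 2-4 mm stylus tip affects only a fraction.
    """
    affected = sum(1 for a in attenuations if a > attn_threshold)
    ratio = affected / len(attenuations)
    return 'finger' if ratio > ratio_threshold else 'stylus'
```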
In other embodiments, the statistical measure may comprise the symmetry, skewness, kurtosis, mode, support, head, tail, mean, median, variance or standard deviation of a variable of the set of intersecting detection lines.
In some embodiments, characteristics of the object may be determined in dependence on a plurality of the statistical measures. In one example, an object type and an orientation of the object are determined in dependence on the statistical measure of at least the angle φ of the light path in the plane of the touch surface.
In some embodiments, at least one statistical measure is a multivariate statistical measure of values for a plurality of light path variables of the set of intersecting light paths. E.g. a combination of the median and the skewness of the attenuation values may be used to determine object type. Alternatively, variance and median values may be used to determine object type. In an alternative example, an orientation of the object is determined in dependence on the statistical measure of the angle of the light path in the plane of the touch surface and the transmission value of the light path.
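A sketch of computing several such candidate measures over the attenuation values of the intersecting detection lines is given below, using SciPy's sample statistics; the particular feature set is illustrative:

```python
import numpy as np
from scipy import stats

def path_statistics(attenuations):
    """Statistical measures of the attenuations of intersecting lines,
    usable alone or in combination to determine object type."""
    a = np.asarray(attenuations, float)
    return {
        'mean': a.mean(),
        'median': np.median(a),
        'variance': a.var(),
        'skewness': stats.skew(a),
        'kurtosis': stats.kurtosis(a),
    }
```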
A true centre point of a touch object (as opposed to object reference point 250) can now be found as the solution to the following over-determined set of linear equations, solved using normal equations.
For each of the interacting detection lines 230, a normal vector (having unit length) is determined as well as a position on the respective detection line (which can be the geometrical position of either emitter or detector or some other point).
For each detection line passing through the region, we get one “weighted” equation:
0 = attenuation * dot product(normal[ej−di], centre point − detection line position[ej−di])

where normal[ej−di] is the (unit length) normal vector of the detection line and detection line position[ej−di] is a position along the detection line. All of the linear equations are then solved, e.g. via the normal equations, to determine the centre position. Using the attenuation as weight when solving the normal equations eliminates the need to threshold the affected vs unaffected detection lines when computing the centre point in this fashion.
This technique also allows a centre position to be determined for regular shapes, oblongs, etc.
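A compact sketch of this solve follows; np.linalg.lstsq minimises the weighted residuals, which is equivalent to forming and solving the normal equations (names are illustrative):

```python
import numpy as np

def weighted_centre_point(normals, line_points, attenuations):
    """True centre point from the over-determined weighted equations
    0 = attenuation * dot(normal, centre - line_point).

    normals: (N, 2) unit normals of the intersecting detection lines;
    line_points: (N, 2) points on each line (e.g. emitter positions);
    attenuations: (N,) per-line weights.
    """
    w = np.asarray(attenuations, float)[:, None]
    A = w * np.asarray(normals, float)           # weighted equation rows
    b = np.einsum('ij,ij->i', A, np.asarray(line_points, float))
    centre, *_ = np.linalg.lstsq(A, b, rcond=None)
    return centre                                # (x, y) centre position
```

Because unaffected lines carry near-zero attenuation, they contribute almost nothing to the solution, which is why no explicit thresholding is needed.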
Geometric characteristics of the object may also be determined in dependence on the one or more statistical measures, including length, width, radius, orientation in the plane of the touch surface, and shape.
In one embodiment, all determined detection lines for all emitters are analysed to determine their angle φ (phi), defined as the angle between the normal to the detection line and the touch surface x-axis 400, and the shortest distance from the true centre point to the detection line. Given all detection lines passing through the region, a minimum average (over a small phi-region) of attenuation*(shortest distance from the detection line to the true centre point) provides the orientation of an elongated object.
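This may be sketched as follows, binning the detection lines by phi and taking the bin that minimises the average cost (the bin count is an illustrative choice, not a disclosed parameter):

```python
import numpy as np

def estimate_orientation(phis, distances, attenuations, n_bins=36):
    """Orientation of an elongated object from detection line angles.

    phis: angle of each line normal vs. the x-axis (radians);
    distances: shortest distance from the true centre to each line;
    attenuations: attenuation of each line.
    """
    width = np.pi / n_bins
    bins = (np.asarray(phis) % np.pi) // width
    cost = np.asarray(attenuations) * np.abs(np.asarray(distances))
    labels = np.unique(bins)
    means = [cost[bins == b].mean() for b in labels]
    return float(labels[int(np.argmin(means))] * width)  # orientation phi
```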
A boundary line may be determined as the detection line with the largest magnitude distance from centre point 250 where the attenuation is above a threshold. The characteristics of the selected boundary line provide useful information about the characteristics of object 210, 220. First, where the object is substantially rectangular, the length (i.e. the major axis) of the object may be determined in dependence on a vector defining the shortest distance from the boundary line to the true centre point. As the object is rectangular, the magnitude of the vector may be assumed to be half of the length. Therefore, the length of the object may be determined to be twice the magnitude of the vector.
Furthermore, the angle of the vector also defines the orientation angle of the rectangular object. The angle phi of the vector defines the wide axis of the object. Consequently, the angle of the narrow axis of the rectangle may be defined as phi + 90°. We can also use the distance between the boundary line located at phi + 90° and the true centre point to determine the width of the object. Similar to above, the width of the object may be determined to be twice the magnitude of the vector of the boundary line located at phi + 90°.
In one embodiment, the phi and length values for the object are determined using an average over a plurality of the boundary lines with the largest distance values.
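The boundary-line construction above may be sketched as follows (the attenuation threshold and angular tolerance are illustrative assumptions):

```python
import numpy as np

def length_and_width(phis, distances, attenuations, phi0,
                     attn_threshold=0.1, tol=np.pi / 36):
    """Length and width of a rectangular object from boundary lines.

    Among the attenuated lines, the farthest line whose normal angle is
    near phi0 gives half the length; lines near phi0 + 90 degrees give
    half the width. phis in radians, distances measured from the true
    centre point.
    """
    phis = np.asarray(phis) % np.pi
    d = np.abs(np.asarray(distances))
    hit = np.asarray(attenuations) > attn_threshold

    def extent(angle):
        # angular difference modulo pi, handling wrap-around
        diff = np.abs((phis - angle + np.pi / 2) % np.pi - np.pi / 2)
        near = hit & (diff < tol)
        return 2.0 * d[near].max() if near.any() else 0.0

    return extent(phi0 % np.pi), extent((phi0 + np.pi / 2) % np.pi)
```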
In another embodiment, a touch system is provided which includes a touch surface, a display, a touch sensor configured to detect one or more objects touching the touch surface and generate a touch signal, and a processing element configured to: determine a position of the one or more objects in dependence on the touch signal, determine whether an object is an eraser in dependence on the touch signal, and output a user interface to the display, wherein the user interface is configured to display one or more interaction objects and wherein the user interface is controlled via the one or more objects on the touch surface, wherein an erase function may only be applied to the user interface by means of an object determined to be an eraser. The eraser may have a rectangular surface for application to the touch surface, allowing the touch system to easily identify the shape of the eraser, either according to the above techniques or techniques otherwise known to the skilled person. In a classroom environment where a teacher and children are interacting with a digital whiteboard and where erasing objects on the digital whiteboard is only permitted by means of the physical eraser, it is surprisingly difficult for a child to accidentally or deliberately simulate the shape of a rectangular eraser on the touch surface using their fingers and hands. Therefore, it is advantageously possible to prevent a child from erasing objects (e.g. ink, text, or geometric shapes) on the digital whiteboard without using the eraser object, i.e. without the teacher's authorization.
In the embodiment above, the user interface may be a canvas or whiteboard application. Furthermore, the one or more interaction objects may comprise ink, text, or geometric shapes. The one or more interaction objects may be added to the user interface by means of a non-eraser object type applied to the touch surface. The erase function may remove interaction objects from the user interface at a position on the user interface corresponding to the position of the eraser on the touch surface.
Number | Date | Country | Kind
---|---|---|---
1730073-2 | Mar 2017 | SE | national
1730120-1 | Apr 2017 | SE | national
17172910.6 | May 2017 | EP | regional
1730276-1 | Oct 2017 | SE | national