The disclosure relates to a method for determining a touch location on a capacitive touch panel, and to a touch panel module adapted to determine a touch location.
Capacitive touch panel devices are widely used to allow user interaction with electronic devices. In particular, a transparent touch panel can be used on top of a display device to allow a user to interact with the electronic device via a graphical user interface presented on the display device. Such touch panels are used in, for example, mobile phones, tablet computers, and other portable devices.
A known touch panel for use with such devices comprises a glass plate provided with a first electrode comprising a plurality of first sensing elements on one face of the glass plate, and a second electrode on an opposite face of the glass plate. The core operating principle is that the touch panel is provided with means for determining (changes in) the capacitance between any of the first sensing elements of the first electrode and the second electrode. Such a change in capacitance is attributed to a touch event, sometimes also called a gesture or touch gesture. By determining the location of the sensing element where the change in capacitance is maximized, the central location of the touch event is determined.
In coplanar touch panels the sensors are located in one single (Indium Tin Oxide, ITO) layer and each sensor has its own sense circuitry. Coplanar touch technology uses differential capacitance measurements in combination with a coplanar touch sensor panel. The sense circuit measures the charge required to charge the intrinsic capacitance of each individual sensor and, in addition (if applicable), the finger-touch capacitance for those sensors that are covered/activated by the touch event. The intrinsic capacitance of the sensor depends on the sensor area, the distance to a reference (voltage) layer, and the dielectric constant of the materials between the sensor and this reference layer. Assuming that the intrinsic capacitance is stable and constant over time, it is accounted for during the tuning/calibration procedure. The variation of sensor capacitance due to a touch event will then be the discriminating factor revealing where the touch is located.
The accuracy of a touch panel is the most important characteristic of its functionality, as it expresses the capability of recognizing a touch event at the same location as the actual physical touch. In addition, high accuracy improves the ability to determine the shape and size of the touch event. Moreover, high spatial accuracy enables a touch display to correctly recognize stylus input (i.e. touches with a relatively small impact diameter, <4 mm).
In general, the accuracy of a touch panel of fixed size increases with the sensor density, i.e. the total number of active touch sensors per display area. With a larger sensor density per area, not only the location, but also the shape and size of the touch can be detected with more accuracy. For a typical touch application of a pixelated display panel (in which, as a response to the touch event, part of the display is activated/selected), the ultimate touch sensor dimension equals the display pixel size; in other words, the maximum accuracy is achieved when the touch sensor density equals the Pixels-Per-Inch (PPI) value of the display.
For various reasons, such as cost, design and process capability (track/gap capabilities) and display form factor (e.g. availability of area for track/routing layout), the number of I/O lines of the touch driver/controller is limited. Consequently, the number of touch sensors of a touch panel of a display module will, in general, be much smaller than the actual number of display pixels, which negatively impacts the achievable accuracy. Normally, for stylus input (i.e. with only a small area touching the surface, <4 mm diameter), a relatively higher accuracy is required than for finger input (with a larger area touching the touch panel, e.g. 9 mm diameter). This is because stylus input is related to typical touch display functionalities such as line drawing and handwriting, which require small spatial input (and recognition).
The centroid method estimates the touch location as a count-weighted average of the sensor center locations:

[x, y] = (Σi Ci·Pi) / (Σi Ci)    (1)

In this formula, Ci is the measured count of the ith sensor and vector Pi represents the center location [xi, yi] of the ith sensor. The calculated location [x, y] is thus a weighted average of the center locations [xi, yi], wherein the sensor counts are the weights. In the present example, the location indicated by reference numeral 20 is the location calculated by equation (1).
The centroid method thus gives an [x, y] location that has a theoretically higher resolution than the resolution of the sensor grid. However, the centroid method only gives an approximation of the true touch location. The direction and magnitude of the error varies depending on the true location. For example, if the sensor 10 is touched exactly in the middle, the centroid method will give an exact result. If the true touch location is off-center, there is a varying error.
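By way of a sketch (not part of the original disclosure; sensor positions and counts are hypothetical example values), the centroid estimate of equation (1) can be computed as follows:

```python
def centroid(centers, counts):
    """Count-weighted average of sensor center locations, as in equation (1)."""
    total = sum(counts)
    x = sum(c * p[0] for c, p in zip(counts, centers)) / total
    y = sum(c * p[1] for c, p in zip(counts, centers)) / total
    return (x, y)

# Three sensors on a line; the middle sensor sees the strongest signal.
centers = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
counts = [10, 80, 10]
print(centroid(centers, counts))  # (1.0, 0.0): symmetric counts give the exact center
```

With an off-center touch the counts become asymmetric, and the weighted average only approximates the true location, which is the source of the varying error described above.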
This varying error is particularly evident when the user tracks or draws a straight line across the sensor panel, as illustrated in lines a through e of
It is an object of the disclosure to provide a method and apparatus for determining a touch location that reduces this wobble effect.
The disclosure provides a method for determining a touch location on a touch panel comprising a plurality of sensors, the method comprising obtaining a first estimate for the touch location, determining a correction vector by applying at least one predetermined mapping, using the first estimate as input for said mapping, and combining the first estimate and the correction vector to obtain corrected location values.
The first estimate may advantageously be obtained by a low-complexity method, such as a weighted average or centroid method. The mapping is pre-determined to map results of the first estimate to a correction vector, so that the combination of the first estimate vector and the correction vector yields a close approximation of the true touch location. Thereby, the “wobble error” of the estimation is effectively reduced or removed altogether. The pre-determined mapping may be dependent on the detected touch spot size, that is, different mappings are used for smaller or larger touching objects (e.g. stylus point, fingertip, etc.).
Here a mapping is understood to be any function that takes a number of input variables (e.g. one or more coordinate components corresponding to a touch location) and outputs one or more variables (e.g. one or more components of a correction vector) depending on the input variables. A mapping can be implemented in many different ways. To name but a few: it can be implemented in hardware, in software, or a combination of both. The mapping can be numerically evaluated or approximated by means of a polynomial approximation, a series expansion, a Fourier series, a function fitted to empirical data, or by an (interpolated) lookup table comprising empirical or modeled data. According to an embodiment of the disclosure, the mapping can be implemented as a two-dimensional mapping, taking a two-dimensional estimate vector as input and yielding a two-dimensional correction vector. The two-dimensional mapping can be implemented as a two-dimensional lookup table (LUT). The mapping could also take three input variables, where the third variable is the touch spot size, and yield two correction vector components as output variables dependent on the input estimation components and the spot size.
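As a non-authoritative sketch of one such option, a two-dimensional mapping backed by a bilinearly interpolated lookup table could look as follows; the function name, grid resolution, and table values are illustrative assumptions:

```python
def lut2d(table, u, v):
    """Bilinearly interpolate an n x n lookup table over the unit cell.

    `table` holds values sampled at fractional coordinates 0..1 along each
    axis; `u` and `v` are fractional coordinates in [0, 1].
    """
    n = len(table) - 1
    fu, fv = u * n, v * n
    i, j = min(int(fu), n - 1), min(int(fv), n - 1)
    du, dv = fu - i, fv - j
    return (table[i][j] * (1 - du) * (1 - dv)
            + table[i + 1][j] * du * (1 - dv)
            + table[i][j + 1] * (1 - du) * dv
            + table[i + 1][j + 1] * du * dv)

# 3x3 table of hypothetical u-correction values, zero at corners and center:
table_u = [[0.0, 0.1, 0.0],
           [-0.1, 0.0, -0.1],
           [0.0, 0.1, 0.0]]
print(lut2d(table_u, 0.5, 0.5))  # exact cell center -> 0.0
```

A second table would supply the v component of the correction vector in the same way.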
The mapping can also be implemented as a combination of two one-dimensional mappings, where a first one-dimensional mapping takes a first component of the estimate vector as input yielding a first component of the correction vector, and a second one-dimensional mapping takes a second component of the estimate vector as input yielding a second component of the correction vector. The one-dimensional mappings may be implemented as one-dimensional lookup tables (LUTs). The mapping could also take two input variables, one estimation component and the touch spot size, and return a correction vector component dependent on the estimation component and the spot size.
The disclosure also provides a location determination module arranged to perform the above described method. To that end, the module may comprise an estimator unit for generating a first location estimate. The module may comprise a processor for controlling the units and performing calculations. The module may comprise one or more evaluation units implementing the above described mappings.
The disclosure also provides a touch sensor system comprising a touch sensor panel having a plurality of sensors and a touch location determination module as described above. The module may be arranged to receive touch sensor measurement values from the touch sensor panel.
The disclosure further provides a computer program product storing a computer program adapted to, when run on a processor, perform a method as described above.
The disclosure will be further explained with reference to the figures, wherein
FIGS. 2a-2c schematically show cross sections of touch panel device variants according to an embodiment of the disclosure;
FIGS. 4a and 4b schematically illustrate the wobble effect;
FIGS. 5a-5e schematically illustrate a method for determining a touch location according to an embodiment of the disclosure for various forms of sensors;
FIGS. 6a-6b schematically illustrate correction functions used in a method according to the disclosure;
FIGS. 7a-7b schematically illustrate a method for determining a touch location according to an embodiment of the disclosure;
First, coplanar touch panels will be described in some more detail.
The touch panel surface is divided into a number of touch sensors 10. In the example of
The touch panel surface is typically protected by a glass cover layer. For electronics devices comprising a display 16, the display is typically provided underneath the touch panel surface; however, variants also exist in which display and touch panel layers are intermixed or shared. More details of the layers will be disclosed in reference to
FIG. 2a schematically shows a cross section of a so-called “discrete co-planar touch” touch panel, while
In
Beneath the cover window, sub-layer 4 is present. This layer can for example comprise an anti-splinter film to prevent the cover layer from falling apart into separate sharp pieces when broken. Sub-layer 4 can also be a polarizer layer, for example to work with display layer 16. Sub-layer 4 can also be formed of optical clear adhesive or simply an airgap (with double sided adhesive at the edges of the sensor).
Beneath sub-layer 4, the sensor layer 8 is located. This layer comprises separate touch sensing elements 18. The sensing elements 18 are provided on a substrate layer 6. Underneath the substrate layer 6, a reference electrode layer 12 may be provided. Reference electrode layer 12 can provide a reference voltage. The touch sensing elements 18 can comprise Indium Tin Oxide (ITO), which is a suitable material for transparent sensors and tracks.
Beneath the substrate 6 to which the sensor layer 8 and reference electrode layer 12 are attached, another sub-layer 14 may be provided. This layer could again be an airgap, polarizer, adhesive layer, etc.
Below the sub-layer 14, the display layers 16 are provided. Such a display can for example be a Liquid Crystal Display (LCD) or organic light-emitting diode (OLED) display.
Instead of providing reference electrode layer 12 underneath the substrate 6, the reference voltage layer 12 may also be provided in other places of the stack, for example as a layer 12′ on top of the display 16 or as a layer 12″ inside the display stack 16. The function of the reference voltage layer 12, 12′, 12″ will be disclosed in reference to
As mentioned above, the display layer 16 may be absent, in which case the substrate 6 with reference electrode layer 12 and sensor layer 8, together with cover layer 2 forms a touch panel device, for example for use in mouse pads or graphics tablets.
FIG. 2b shows an alternative variant to the above described “discrete co-planar touch” variant, the “on-cell co-planar touch”. The main difference is that the sensor layer 8 comprising the touch sensing elements 18 is not provided on a separate substrate layer 6, but rather on the display layer 16. This saves an additional layer, and helps to reduce the size and production costs of the touch-panel display. In this case, the reference voltage layer is a layer 12″ in the display stack 16.
FIG. 2c shows a further variant, the “window integrated co-planar touch” variant. Reference is made to published US patent application 2010/0097344 A1 by the same applicant, which details several embodiments of this variant. Again the separate substrate layer 6 is absent, and the sensor layer 8 is provided on one of the sub-layers 4, 14. The sub-layer 4 is not required; the sensing elements 18 of the sensor layer 8 could also be provided directly on the cover layer 2 (see for example
It is noted that the above described exemplary touch panels comprise capacitive touch sensors. However, the disclosure is not limited to capacitive sensors. The disclosure may be applied to any local surface-integrating sensor, such as for example photosensitive touch sensors.
The basic centroid method, illustrated in
FIG. 5a schematically shows a part of a touch sensor panel comprising sensors 10a having a diamond shape. The shown x- and y-axes are aligned with respective sides of the touch panel module. That is, location [x, y]=[0, 0] corresponds with the bottom left corner. Also shown are axes u and v, which form the [u, v] coordinate system. The u and v axes are aligned with sides of the sensors 10a. Moreover, the coordinates are normalized, so that sensor 10a boundaries correspond to lines where u or v has an integer value (see the illustrated lines u=0, u=1, v=0, etc.).
Using the centroid method, or any other approximate method, a first estimate of the touch location 20 can be determined. If the centroid method is used, the first estimate can be calculated in the [x, y] coordinate system (as in equation (1)) and then be transformed to the corresponding [u, v] coordinates via an affine transformation determined by the pre-determined lay-out of the sensors 10a in the grid. Alternatively, the centroid method can be adapted to calculate the first estimate in [u, v] coordinates directly by expressing the sensor center locations Pi in [u, v] coordinates.
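As an illustrative sketch of such an affine transformation (the 45-degree diamond lattice and unit pitch are assumptions for illustration, not prescribed by the disclosure), the change of coordinates and its inverse could be written as:

```python
def xy_to_uv(x, y, pitch=1.0):
    """Map panel [x, y] coordinates to normalized [u, v] coordinates whose
    axes are aligned with the sides of (hypothetical) diamond-shaped sensors
    rotated 45 degrees relative to the panel edges; `pitch` is the sensor
    side length along u and v."""
    u = (x + y) / pitch
    v = (y - x) / pitch
    return u, v

def uv_to_xy(u, v, pitch=1.0):
    """Inverse of xy_to_uv: back from normalized [u, v] to panel [x, y]."""
    x = (u - v) * pitch / 2
    y = (u + v) * pitch / 2
    return x, y

print(uv_to_xy(*xy_to_uv(3.0, 4.0)))  # round-trips to (3.0, 4.0)
```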
The first estimate can then be split into an integer part [ui, vi] and a fractional part [uf, vf]. Since the [u, v] coordinates are normalized and aligned with the grid, the integer part [ui, vi] will point to a corner of the cell in which the estimated location 20 is located. The fractional part [uf, vf] will point from that corner to the estimated location 20.
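This split can be sketched in a few lines (function name is illustrative):

```python
import math

def split_uv(u, v):
    """Split normalized coordinates into an integer part, pointing to the
    cell corner, and a fractional part, pointing from that corner to the
    estimated location."""
    ui, vi = math.floor(u), math.floor(v)
    return (ui, vi), (u - ui, v - vi)

print(split_uv(3.25, 7.75))  # ((3, 7), (0.25, 0.75))
```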
The true touch location is indicated by point 21 (the distance between points 20 and 21 is somewhat exaggerated in order to show more clearly the wobble effect). Between points 20 and 21 a correction vector [ucor, vcor] can be drawn, that is [u, v]true=[u, v]est+[ucor, vcor].
The error [uerr, verr]=−[ucor, vcor] in the estimate is dependent on the relative location of the true location 21 with respect to the sensor 10a center. In other words, a function Eerr(uf, vf) exists which will, for a given [uf, vf]true coordinate, give the resulting estimate error [uerr, verr]. The inverse of this function, Ecor(uf, vf), can then be used to map a given estimate [uf, vf]est to the [ucor, vcor]=−[uerr, verr] value.
While Ecor(uf, vf) may be derived analytically from first principles, it may be more efficient to determine the function empirically, using for example a robot to systematically touch a panel in pre-determined “true” locations and analyzing the resulting estimated locations. In that manner, a two-dimensional (lookup) table (LUT) may be formed that provides the needed mapping from [uf, vf]est to [ucor, vcor].
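A sketch of how such a table could be assembled from recorded (true, estimated) pairs follows; the binning resolution and per-bin averaging are illustrative assumptions, not a prescribed calibration procedure:

```python
def build_correction_lut(samples, n=4):
    """Build an n x n correction table from (true, estimated) fractional pairs.

    `samples` is a list of ((uf_true, vf_true), (uf_est, vf_est)) tuples,
    e.g. gathered by a robot touching known locations. Each estimate is
    binned by its fractional coordinates; each bin stores the average
    correction (true - estimate) observed for estimates in that bin.
    """
    sums = [[(0.0, 0.0, 0) for _ in range(n)] for _ in range(n)]
    for (ut, vt), (ue, ve) in samples:
        i = min(int(ue * n), n - 1)
        j = min(int(ve * n), n - 1)
        su, sv, cnt = sums[i][j]
        sums[i][j] = (su + (ut - ue), sv + (vt - ve), cnt + 1)
    return [[(su / cnt, sv / cnt) if cnt else (0.0, 0.0)
             for su, sv, cnt in row] for row in sums]

# Two hypothetical measurements that fall into the same bin:
samples = [((0.10, 0.10), (0.05, 0.05)),
           ((0.20, 0.20), (0.15, 0.15))]
lut = build_correction_lut(samples)
print(lut[0][0])  # average correction vector for bin (0, 0)
```

At evaluation time the table would be read out (and optionally interpolated) at the fractional coordinates of a new estimate.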
It is not necessary according to the disclosure to perform the calculations in the [u, v] coordinate system. It is also possible to perform the calculations and to generate the two-dimensional mapping in the [x, y] coordinates or any other coordinate system.
An advantage of the [u, v] coordinate system, or any coordinate system in which the axes are aligned with the borders of the sensors 10a-10e, is that the function is, to a high degree of accuracy, separable. That is, the needed correction in the u direction, ucor, is only dependent on uf, and the correction vcor in the v direction depends only on vf. Instead of using a two-dimensional mapping, two separate one-dimensional mappings may be used, ucor=Ecor,u(uf) and vcor=Ecor,v(vf).
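A separable one-dimensional mapping can be sketched as a linearly interpolated table; the sample values below are hypothetical, chosen only to illustrate the shape of such a correction function:

```python
def lut1d(table, f):
    """Linearly interpolate a 1D correction table sampled uniformly on [0, 1]."""
    n = len(table) - 1
    x = f * n
    i = min(int(x), n - 1)
    t = x - i
    return table[i] * (1 - t) + table[i + 1] * t

# Hypothetical E_cor,u samples over one normalized cell:
ecor_u = [0.0, -0.08, 0.0, 0.08, 0.0]
print(lut1d(ecor_u, 0.375))  # halfway between samples 1 and 2: -0.04
```

An identical routine with its own table would serve as Ecor,v(vf); two small 1D tables typically need far less storage than one 2D table of comparable resolution, which is a practical argument for exploiting the separability.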
If the sides of the sensors all have equal length (e.g. sensors 10a, 10b, and 10c in
FIGS. 5b-5e illustrate some other sensor arrangements that may be used in combination with the method as explained above.
FIGS. 6a and 6b show exemplary graphs 60, 61 with values 62, 63 for the Ecor,u(uf) and Ecor,v(vf) mappings respectively. The x axis is indexed, that is in
There are many ways in which a skilled person may implement an evaluation means for evaluating the one-dimensional mappings illustrated in
When the symmetry of the sensors allows it (as is the case in the example sensor geometries shown in
The inventor has noted that the needed correction is generally dependent on the size A of the part of the touching object that makes contact with the touch panel (hereafter: the touch spot size A). It may therefore be advantageous to provide a plurality of mappings Ecor,i for various pre-determined touch spot sizes Ai. For example, if Ecor mappings are made for spot sizes Ai=1, 4, and 9 mm2, and a touch panel is touched by an object with spot size A=6 mm2, the table for Ai=4 may be used (closest), or an interpolated value of the results using mappings Ecor,Ai=4 and Ecor,Ai=9 may be used.
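The interpolation between spot-size-dependent mappings could be sketched as follows; the constant example mappings and linear blending are assumptions for illustration:

```python
def interp_spot_size(mappings, a, f):
    """Evaluate a spot-size-dependent correction by blending the two
    pre-determined mappings whose spot sizes bracket the measured size.

    `mappings` maps a spot size (mm^2) to a 1D correction function;
    `a` is the measured spot size and `f` the fractional coordinate.
    """
    sizes = sorted(mappings)
    if a <= sizes[0]:
        return mappings[sizes[0]](f)
    if a >= sizes[-1]:
        return mappings[sizes[-1]](f)
    for lo, hi in zip(sizes, sizes[1:]):
        if lo <= a <= hi:
            t = (a - lo) / (hi - lo)
            return (1 - t) * mappings[lo](f) + t * mappings[hi](f)

# Hypothetical constant mappings for A = 4 and 9 mm^2; A = 6 blends them.
mappings = {4: lambda f: 0.10, 9: lambda f: 0.20}
print(interp_spot_size(mappings, 6, 0.3))  # 0.6*0.10 + 0.4*0.20 = 0.14
```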
FIG. 7a illustrates an embodiment of a method 70 according to the disclosure. First, a [u, v]est estimate is determined 71, which is separated into an integer part [ui, vi] and a fractional part [uf, vf] in action 72. In action 73, the spot size A is determined. This spot size may for example be estimated from the total sensor measurement, that is
In action 74, a two-dimensional mapping is evaluated to obtain correction vector [ucor, vcor]. Then in action 75 the corrected touch location [u, v]cor is calculated from u=ui+uf+ucor and v=vi+vf+vcor. Finally, the [u, v] values are transformed to the [x, y] coordinate system. For example, the [x, y] axes may be aligned with the sensor module boundaries and normalized so that an increment by one corresponds to a pixel increment.
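The sequence of actions above can be sketched end-to-end as follows; the correction functions and the coordinate transform passed in are hypothetical stand-ins, not the disclosure's own implementations:

```python
import math

def correct_touch(u_est, v_est, ecor_u, ecor_v, uv_to_xy):
    """Sketch of the corrected-location computation: split the estimate,
    evaluate per-axis correction mappings, combine, and transform to [x, y]."""
    ui, vi = math.floor(u_est), math.floor(v_est)   # integer part (cell corner)
    uf, vf = u_est - ui, v_est - vi                 # fractional part
    u_cor, v_cor = ecor_u(uf), ecor_v(vf)           # evaluate the mappings
    u = ui + uf + u_cor                             # corrected [u, v] location
    v = vi + vf + v_cor
    return uv_to_xy(u, v)                           # transform to [x, y]

# Hypothetical constant corrections and a trivial identity transform:
xy = correct_touch(3.25, 7.75,
                   ecor_u=lambda f: 0.05, ecor_v=lambda f: -0.05,
                   uv_to_xy=lambda u, v: (u, v))
print(xy)  # approximately (3.3, 7.7)
```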
FIG. 7b illustrates a further method 80 according to the disclosure. Actions 81, 82 correspond to actions 71, 72 in
The processor then sends the uf, vf values to evaluation means 93 and 94, respectively. Evaluation means 93 is arranged to calculate mapping value Ecor,u(uf). The processor may also send the spot size to evaluation means 93, so that evaluation means 93 can select a suitable mapping, as outlined above. Alternatively, the processor means may implement a correction, for example interpolation as outlined above, based on the results of one or more calculated mappings by evaluation means 93. Likewise, evaluation means 94 is arranged to calculate Ecor,v(vf). Finally, the processor 92 calculates the corrected [u, v] values, after which transformation unit 95 transforms the corrected [u, v] values into [x, y] coordinates.
It is observed that, in the above specification, at several locations reference is made to “evaluation means” or “processors”. It is to be understood that such evaluation means/processors may be designed in any desired technology, i.e. analogue or digital or a combination of both. A suitable implementation would be a software controlled processor where such software is stored in a suitable memory present in the touch panel device and connected to the processor/controller. The memory may be arranged as any known suitable form of RAM (random access memory) or ROM (read only memory), where such ROM may be any form of erasable ROM such as EEPROM (electrically erasable ROM). Parts of the software may be embedded. Parts of the software may be stored such as to be updatable e.g. wirelessly as controlled by a server transmitting updates regularly over the air.
The computer program product according to the disclosure can comprise a portable computer medium such as an optical or magnetic disc, solid state memory, a hard disk, etc. It can also comprise or be part of a server arranged to distribute software (applications) implementing parts of the disclosure to devices having a suitable touch panel for execution on a processor of said device.
It is to be understood that the disclosure is limited by the annexed claims and their technical equivalents only. In this document and in its claims, the verb “to comprise” and its conjugations are used in their non-limiting sense to mean that items following the word are included, without excluding items not specifically mentioned. In addition, reference to an element by the indefinite article “a” or “an” does not exclude the possibility that more than one of the element is present, unless the context clearly requires that there be one and only one of the elements. The indefinite article “a” or “an” thus usually means “at least one”.